| id (string, lengths 6–15) | question_type (string, 1 class) | question (string, lengths 15–683) | choices (list, length 4) | answer (string, 5 classes) | explanation (string, 481 values) | prompt (string, lengths 1.75k–10.9k) |
---|---|---|---|---|---|---|
sciq-7010
|
multiple_choice
|
What is an interesting example of a molecule with two central atoms, which are both C atoms?
|
[
"chloride",
"water",
"acetylene",
"sulfur"
] |
C
|
Relevant Documents:
Document 0:::
A carbon–carbon bond is a covalent bond between two carbon atoms. The most common form is the single bond: a bond composed of two electrons, one from each of the two atoms. The carbon–carbon single bond is a sigma bond and is formed between one hybridized orbital from each of the carbon atoms. In ethane, the orbitals are sp3-hybridized orbitals, but single bonds formed between carbon atoms with other hybridizations do occur (e.g. sp2 to sp2). In fact, the carbon atoms in the single bond need not be of the same hybridization. Carbon atoms can also form double bonds in compounds called alkenes or triple bonds in compounds called alkynes. A double bond is formed with an sp2-hybridized orbital and a p-orbital that is not involved in the hybridization. A triple bond is formed with an sp-hybridized orbital and two p-orbitals from each atom. The use of the p-orbitals forms a pi bond.
Chains and branching
Carbon is one of the few elements that can form long chains of its own atoms, a property called catenation. This coupled with the strength of the carbon–carbon bond gives rise to an enormous number of molecular forms, many of which are important structural elements of life, so carbon compounds have their own field of study: organic chemistry.
Branching is also common in C−C skeletons. Carbon atoms in a molecule are categorized by the number of carbon neighbors they have:
A primary carbon has one carbon neighbor.
A secondary carbon has two carbon neighbors.
A tertiary carbon has three carbon neighbors.
A quaternary carbon has four carbon neighbors.
In "structurally complex organic molecules", it is the three-dimensional orientation of the carbon–carbon bonds at quaternary loci which dictates the shape of the molecule. Further, quaternary loci are found in many biologically active small molecules, such as cortisone and morphine.
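The neighbor-count classification above can be sketched directly from a molecule's carbon skeleton. The adjacency list below (isobutane, carbons only, hydrogens omitted as in skeletal formulas) and the helper names are illustrative only:

```python
# Hypothetical sketch: classify each carbon by counting its carbon neighbors.
adjacency = {
    "C1": ["C2"],              # primary
    "C2": ["C1", "C3", "C4"],  # tertiary (central carbon of isobutane)
    "C3": ["C2"],              # primary
    "C4": ["C2"],              # primary
}

NAMES = {1: "primary", 2: "secondary", 3: "tertiary", 4: "quaternary"}

def classify(adj):
    # Map each atom to its class based on the number of carbon neighbors.
    return {atom: NAMES[len(neighbors)] for atom, neighbors in adj.items()}

print(classify(adjacency))
```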
Synthesis
Carbon–carbon bond-forming reactions are organic reactions in which a new carbon–carbon bond is formed. They are important in th
Document 1:::
Steudel R 2020, Chemistry of the Non-metals: Syntheses - Structures - Bonding - Applications, in collaboration with D Scheschkewitz, Berlin, Walter de Gruyter, . ▲
An updated translation of the 5th German edition of 2013, incorporating the literature up to Spring 2019. Twenty-three nonmetals, including B, Si, Ge, As, Se, Te, and At but not Sb (nor Po). The nonmetals are identified on the basis of their electrical conductivity at absolute zero putatively being close to zero, rather than finite as in the case of metals. That does not work for As however, which has the electronic structure of a semimetal (like Sb).
Halka M & Nordstrom B 2010, "Nonmetals", Facts on File, New York,
A reading level 9+ book covering H, C, N, O, P, S, Se. Complementary books by the same authors examine (a) the post-transition metals (Al, Ga, In, Tl, Sn, Pb and Bi) and metalloids (B, Si, Ge, As, Sb, Te and Po); and (b) the halogens and noble gases.
Woollins JD 1988, Non-Metal Rings, Cages and Clusters, John Wiley & Sons, Chichester, .
A more advanced text that covers H; B; C, Si, Ge; N, P, As, Sb; O, S, Se and Te.
Steudel R 1977, Chemistry of the Non-metals: With an Introduction to Atomic Structure and Chemical Bonding, English edition by FC Nachod & JJ Zuckerman, Berlin, Walter de Gruyter, . ▲
Twenty-four nonmetals, including B, Si, Ge, As, Se, Te, Po and At.
Powell P & Timms PL 1974, The Chemistry of the Non-metals, Chapman & Hall, London, . ▲
Twenty-two nonmetals including B, Si, Ge, As and Te. Tin and antimony are shown as being intermediate between metals and nonmetals; they are later shown as either metals or nonmetals. Astatine is counted as a metal.
Document 2:::
In chemistry, the carbon–hydrogen bond (C−H bond) is a chemical bond between carbon and hydrogen atoms that can be found in many organic compounds. This bond is a covalent, single bond, meaning that carbon shares its outer valence electrons with up to four hydrogens. This completes both of their outer shells, making them stable.
Carbon–hydrogen bonds have a bond length of about 1.09 Å (1.09 × 10⁻¹⁰ m) and a bond energy of about 413 kJ/mol (see table below). Using Pauling's scale—C (2.55) and H (2.20)—the electronegativity difference between these two atoms is 0.35. Because of this small difference in electronegativities, the C−H bond is generally regarded as being non-polar. In structural formulas of molecules, the hydrogen atoms are often omitted. Compound classes consisting solely of C−C and C−H bonds are alkanes, alkenes, alkynes, and aromatic hydrocarbons. Collectively they are known as hydrocarbons.
In October 2016, astronomers reported that the very basic chemical ingredients of life—the carbon-hydrogen molecule (CH, or methylidyne radical), the carbon-hydrogen positive ion (CH⁺) and the carbon ion (C⁺)—are the result, in large part, of ultraviolet light from stars, rather than of other processes, such as turbulent events related to supernovae and young stars, as thought earlier.
Bond length
The length of the carbon-hydrogen bond varies slightly with the hybridisation of the carbon atom. A bond between a hydrogen atom and an sp2 hybridised carbon atom is about 0.6% shorter than between hydrogen and sp3 hybridised carbon. A bond between hydrogen and sp hybridised carbon is shorter still, about 3% shorter than sp3 C-H. This trend is illustrated by the molecular geometry of ethane, ethylene and acetylene.
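The quoted percentages translate into rough bond lengths. A minimal arithmetic sketch (values approximate, for illustration only):

```python
# Shorten the sp3 C-H length (~1.09 angstroms) by the quoted percentages:
# about 0.6% for sp2 carbon and about 3% for sp carbon.
sp3 = 1.09
sp2 = sp3 * (1 - 0.006)  # ethylene-like C-H
sp  = sp3 * (1 - 0.03)   # acetylene-like C-H
print(f"sp3: {sp3:.3f} A, sp2: {sp2:.3f} A, sp: {sp:.3f} A")
```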
Reactions
The C−H bond in general is very strong, so it is relatively unreactive. In several compound classes, collectively called carbon acids, the C−H bond can be sufficiently acidic for proton removal. Unactivated C−H bonds are found in alkanes and are no
Document 3:::
This is a list of topics in molecular biology. See also index of biochemistry articles.
Document 4:::
Atomicity is the total number of atoms present in a molecule. For example, each molecule of oxygen (O2) is composed of two oxygen atoms. Therefore, the atomicity of oxygen is 2.
In older contexts, atomicity is sometimes equivalent to valency. Some authors also use the term to refer to the maximum number of valencies observed for an element.
Classifications
Based on atomicity, molecules can be classified as:
Monoatomic (composed of one atom). Examples include He (helium), Ne (neon), Ar (argon), and Kr (krypton). All noble gases are monoatomic.
Diatomic (composed of two atoms). Examples include H2 (hydrogen), N2 (nitrogen), O2 (oxygen), F2 (fluorine), and Cl2 (chlorine). Halogens are usually diatomic.
Triatomic (composed of three atoms). Examples include O3 (ozone).
Polyatomic (composed of three or more atoms). Examples include S8.
Atomicity may vary in different allotropes of the same element.
The exact atomicity of metals, as well as some other elements such as carbon, cannot be determined because they consist of a large and indefinite number of atoms bonded together. They are typically designated as having an atomicity of 1.
The atomicity of a homonuclear molecule can be derived by dividing the molecular weight by the atomic weight. For example, the molecular weight of oxygen is 31.999, while its atomic weight is 15.999; therefore, its atomicity is approximately 2 (31.999/15.999 ≈ 2).
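The division described above is easy to sketch. The function name is hypothetical; the weights are standard molecular and atomic weights:

```python
# Estimate the atomicity of a homonuclear molecule as the ratio of
# molecular weight to atomic weight, rounded to the nearest integer.
def atomicity(molecular_weight, atomic_weight):
    return round(molecular_weight / atomic_weight)

print(atomicity(31.999, 15.999))   # oxygen, O2 -> 2
print(atomicity(256.52, 32.065))   # sulfur, S8 -> 8
```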
Examples
The most common values of atomicity for the first 30 elements in the periodic table are as follows:
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is an interesting example of a molecule with two central atoms, which are both C atoms?
A. chloride
B. water
C. acetylene
D. sulfur
Answer:
|
|
sciq-10539
|
multiple_choice
|
What is the state of matter that resembles a gas, but is made of ions, giving it different properties than a typical gas?
|
[
"acid",
"gamma",
"plasma",
"vapor"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
A. increases
B. decreases
C. stays the same
D. impossible to tell / need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
States of matter are distinguished by changes in the properties of matter associated with external factors like pressure and temperature. States are usually distinguished by a discontinuity in one of those properties: for example, raising the temperature of ice produces a discontinuity at 0°C, as energy goes into a phase transition, rather than temperature increase. The three classical states of matter are solid, liquid and gas. In the 20th century, however, increased understanding of the more exotic properties of matter resulted in the identification of many additional states of matter, none of which are observed in normal conditions.
Low-energy states of matter
Classical states
Solid: A solid holds a definite shape and volume without a container. The particles are held very close to each other.
Amorphous solid: A solid in which there is no far-range order of the positions of the atoms.
Crystalline solid: A solid in which atoms, molecules, or ions are packed in regular order.
Plastic crystal: A molecular solid with long-range positional order but with constituent molecules retaining rotational freedom.
Quasicrystal: A solid in which the positions of the atoms have long-range order, but this is not in a repeating pattern.
Liquid: A mostly non-compressible fluid. Able to conform to the shape of its container but retains a (nearly) constant volume independent of pressure.
Liquid crystal: Properties intermediate between liquids and crystals. Generally, able to flow like a liquid but exhibiting long-range order.
Gas: A compressible fluid. Not only will a gas take the shape of its container but it will also expand to fill the container.
Modern states
Plasma: Free charged particles, usually in equal numbers, such as ions and electrons. Unlike gases, plasma may self-generate magnetic fields and electric currents and respond strongly and collectively to electromagnetic forces. Plasma is very uncommon on Earth (except for the ionosphere), although it is the mo
Document 2:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
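The closure-under-union property that makes a family of feasible states a knowledge space can be checked mechanically. A toy sketch with an invented five-state family (the data is illustrative only):

```python
from itertools import combinations

# A knowledge space requires that the union of any two feasible states
# is itself feasible. Check that for a small hand-made family of states.
states = [frozenset(), frozenset("a"), frozenset("b"),
          frozenset("ab"), frozenset("abc")]

def closed_under_union(family):
    fam = set(family)
    return all((s | t) in fam for s, t in combinations(fam, 2))

print(closed_under_union(states))  # True for this toy family
```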
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
Document 3:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 4:::
In physics, a charge carrier is a particle or quasiparticle that is free to move, carrying an electric charge, especially the particles that carry electric charges in electrical conductors. Examples are electrons, ions and holes. The term is used most commonly in solid state physics. In a conducting medium, an electric field can exert force on these free particles, causing a net motion of the particles through the medium; this is what constitutes an electric current.
The electron and the proton are the elementary charge carriers, each carrying one elementary charge (e), of the same magnitude and opposite sign.
In conductors
In conducting media, particles serve to carry charge:
In many metals, the charge carriers are electrons. One or two of the valence electrons from each atom are able to move about freely within the crystal structure of the metal. The free electrons are referred to as conduction electrons, and the cloud of free electrons is called a Fermi gas. Many metals have electron and hole bands. In some, the majority carriers are holes.
In electrolytes, such as salt water, the charge carriers are ions, which are atoms or molecules that have gained or lost electrons so they are electrically charged. Atoms that have gained electrons so they are negatively charged are called anions, atoms that have lost electrons so they are positively charged are called cations. Cations and anions of the dissociated liquid also serve as charge carriers in melted ionic solids (see e.g. the Hall–Héroult process for an example of electrolysis of a melted ionic solid). Proton conductors are electrolytic conductors employing positive hydrogen ions as carriers.
In a plasma, an electrically charged gas which is found in electric arcs through air, neon signs, and the sun and stars, the electrons and cations of ionized gas act as charge carriers.
In a vacuum, free electrons can act as charge carriers. In the electronic component known as the vacuum tube (also called valve), the mobil
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the state of matter that resembles a gas, but is made of ions, giving it different properties than a typical gas?
A. acid
B. gamma
C. plasma
D. vapor
Answer:
|
|
sciq-4532
|
multiple_choice
|
Which particle of an atom has a positive electric charge?
|
[
"proton",
"electron",
"neutron",
"nucleus"
] |
A
|
Relevant Documents:
Document 0:::
In physics, a charged particle is a particle with an electric charge. It may be an ion, such as a molecule or atom with a surplus or deficit of electrons relative to protons. It can also be an electron or a proton, or another elementary particle, which are all believed to have the same charge (except antimatter). Another charged particle may be an atomic nucleus devoid of electrons, such as an alpha particle.
A plasma is a collection of charged particles, atomic nuclei and separated electrons, but can also be a gas containing a significant proportion of charged particles.
Charged particles are labeled as either positive (+) or negative (−). Only the existence of two "types" of charges is known, and the designations themselves are arbitrarily named. Nothing is inherent to a positively charged particle that makes it "positive", and the same goes for negatively charged particles.
Examples
Positively charged particles
protons and atomic nuclei
positrons (antielectrons)
alpha particles
positively charged pions
cations
Negatively charged particles
electrons
antiprotons
muons
tauons
negatively charged pions
anions
Particles without an electric charge
neutrons
photons
neutrinos
neutral pions
Z boson
Higgs boson
atoms
Document 1:::
The electric dipole moment is a measure of the separation of positive and negative electrical charges within a system, that is, a measure of the system's overall polarity. The SI unit for electric dipole moment is the coulomb-meter (C⋅m). The debye (D) is another unit of measurement used in atomic physics and chemistry.
Theoretically, an electric dipole is defined by the first-order term of the multipole expansion; it consists of two equal and opposite charges that are infinitesimally close together, although real dipoles have separated charge.
Elementary definition
Often in physics the dimensions of a massive object can be ignored and it can be treated as a pointlike object, i.e. a point particle. Point particles with electric charge are referred to as point charges. Two point charges, one with charge +q and the other one with charge −q, separated by a distance d, constitute an electric dipole (a simple case of an electric multipole). For this case, the electric dipole moment has a magnitude p = qd and is directed from the negative charge to the positive one. Some authors may split d in half and use a = d/2, since this quantity is the distance between either charge and the center of the dipole, leading to a factor of two in the definition.
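The elementary definition above (p = qd, directed from the negative to the positive charge) can be sketched numerically; the charge magnitude and separation below are illustrative values only:

```python
import numpy as np

# Dipole moment of two point charges: p = q * d, with d the displacement
# vector from the negative charge to the positive charge.
q = 1.602e-19                         # charge magnitude, C (illustrative)
neg = np.array([0.0, 0.0, 0.0])       # position of -q
pos = np.array([0.0, 0.0, 1.0e-10])   # position of +q, 1 angstrom away
d = pos - neg                         # points from negative to positive
p = q * d                             # dipole moment vector, C*m
print(np.linalg.norm(p))
```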
A stronger mathematical definition is to use vector algebra, since a quantity with magnitude and direction, like the dipole moment of two point charges, can be expressed in vector form p = qd, where d is the displacement vector pointing from the negative charge to the positive charge. The electric dipole moment vector p also points from the negative charge to the positive charge. With this definition the dipole direction tends to align itself with an external electric field (and note that the electric flux lines produced by the charges of the dipole itself, which point from positive charge to negative charge, then tend to oppose the flux lines of the external field). Note that this sign convention is used in physics, while the opposite sign convention for th
Document 2:::
The elementary charge, usually denoted by e, is a fundamental physical constant, defined as the electric charge carried by a single proton or, equivalently, the magnitude of the negative electric charge carried by a single electron, which has charge −1 e.
In the SI system of units, the value of the elementary charge is exactly defined as e = 1.602176634 × 10⁻¹⁹ coulombs, or 160.2176634 zeptocoulombs (zC). Since the 2019 redefinition of SI base units, the seven SI base units are defined by seven fundamental physical constants, of which the elementary charge is one.
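A quick consistency check of the coulomb and zeptocoulomb figures quoted above (1 zC = 10⁻²¹ C):

```python
# The elementary charge in coulombs should equal 160.2176634 zeptocoulombs.
e_coulombs = 1.602176634e-19
e_zepto = e_coulombs / 1e-21
print(e_zepto)  # approximately 160.2176634
```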
In the centimetre–gram–second system of units (CGS), the corresponding quantity is .
Robert A. Millikan and Harvey Fletcher's oil drop experiment first directly measured the magnitude of the elementary charge in 1909, differing from the modern accepted value by just 0.6%. Under assumptions of the then-disputed atomic theory, the elementary charge had also been indirectly inferred to ~3% accuracy from blackbody spectra by Max Planck in 1901 and (through the Faraday constant) at order-of-magnitude accuracy by Johann Loschmidt's measurement of the Avogadro number in 1865.
As a unit
In some natural unit systems, such as the system of atomic units, e functions as the unit of electric charge. The use of elementary charge as a unit was promoted by George Johnstone Stoney in 1874 for the first system of natural units, called Stoney units. Later, he proposed the name electron for this unit. At the time, the particle we now call the electron was not yet discovered and the difference between the particle electron and the unit of charge electron was still blurred. Later, the name electron was assigned to the particle and the unit of charge e lost its name. However, the unit of energy electronvolt (eV) is a remnant of the fact that the elementary charge was once called electron.
In other natural unit systems, the unit of charge is defined as √(ε₀ħc), with the result that e = √(4πα) · √(ε₀ħc) ≈ 0.3028 √(ε₀ħc), where α is the fine-structure constant, c is the speed of light, ε₀ is
Document 3:::
An atom is a particle that consists of a nucleus of protons and neutrons surrounded by an electromagnetically-bound cloud of electrons. The atom is the basic particle of the chemical elements, and the chemical elements are distinguished from each other by the number of protons that are in their atoms. For example, any atom that contains 11 protons is sodium, and any atom that contains 29 protons is copper. The number of neutrons defines the isotope of the element.
Atoms are extremely small, typically around 100 picometers across. A human hair is about a million carbon atoms wide. This is smaller than the shortest wavelength of visible light, which means humans cannot see atoms with conventional microscopes. Atoms are so small that accurately predicting their behavior using classical physics is not possible due to quantum effects.
More than 99.94% of an atom's mass is in the nucleus. Each proton has a positive electric charge, while each electron has a negative charge, and the neutrons, if any are present, have no electric charge. If the numbers of protons and electrons are equal, as they normally are, then the atom is electrically neutral. If an atom has more electrons than protons, then it has an overall negative charge, and is called a negative ion (or anion). Conversely, if it has more protons than electrons, it has a positive charge, and is called a positive ion (or cation).
The electrons of an atom are attracted to the protons in an atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by the nuclear force. This force is usually stronger than the electromagnetic force that repels the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force. In this case, the nucleus splits and leaves behind different elements. This is a form of nuclear decay.
Atoms can attach to one or more other atoms by chemical bonds to
Document 4:::
The objective of the Thomson problem is to determine the minimum electrostatic potential energy configuration of electrons constrained to the surface of a unit sphere that repel each other with a force given by Coulomb's law. The physicist J. J. Thomson posed the problem in 1904 after proposing an atomic model, later called the plum pudding model, based on his knowledge of the existence of negatively charged electrons within neutrally-charged atoms.
Related problems include the study of the geometry of the minimum energy configuration and the study of the large-N behavior of the minimum energy.
Mathematical statement
The electrostatic interaction energy occurring between each pair of electrons of equal charges (q_i = −e, with e the elementary charge of an electron) is given by Coulomb's law,

U_ij = e² / (4πε₀ r_ij),

where ε₀ is the electric constant and r_ij = |r_i − r_j| is the distance between each pair of electrons located at points on the sphere defined by vectors r_i and r_j, respectively.
Simplified units of e = 1 and 1/(4πε₀) = 1 (the Coulomb constant) are used without loss of generality. Then,

U_ij = 1 / r_ij.

The total electrostatic potential energy of each N-electron configuration may then be expressed as the sum of all pair-wise interaction energies:

U(N) = Σ_{i<j} 1 / r_ij.

The global minimization of U(N) over all possible configurations of N distinct points is typically found by numerical minimization algorithms.
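In the simplified units used above (e = 1, Coulomb constant = 1), the total energy is just the sum of reciprocal pairwise distances. A minimal sketch, checked against the two-electron case:

```python
from itertools import combinations
import numpy as np

# Total electrostatic energy of an N-electron configuration on the unit
# sphere, in simplified units: U(N) = sum over pairs of 1 / r_ij.
def total_energy(points):
    return sum(1.0 / np.linalg.norm(a - b)
               for a, b in combinations(points, 2))

# Two antipodal electrons: r_12 = 2, so U(2) = 1/2.
antipodal = [np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])]
print(total_energy(antipodal))  # 0.5
```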
Thomson's problem is related to the 7th of the eighteen unsolved mathematics problems proposed by the mathematician Steve Smale — "Distribution of points on the 2-sphere".
The main difference is that in Smale's problem the function to minimise is not the electrostatic potential but a logarithmic potential, given by V(N) = −Σ_{i<j} log r_ij. A second difference is that Smale's question is about the asymptotic behaviour of the total potential when the number N of points goes to infinity, not for concrete values of N.
Example
The solution of the Thomson problem for two electrons is obtained when both electrons are as far apart as possible on opposite sides of the origin, r_12 = 2, or U(2) = 1/2.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which particle of an atom has a positive electric charge?
A. proton
B. electron
C. neutron
D. nucleus
Answer:
|
|
sciq-2181
|
multiple_choice
|
What is the name of the wind belt nearest the equator?
|
[
"tropical gusts",
"cyclones",
"doldrums",
"trade winds"
] |
D
|
Relevant Documents:
Document 0:::
Atmospheric circulation of a planet is largely specific to the planet in question and the study of atmospheric circulation of exoplanets is a nascent field as direct observations of exoplanet atmospheres are still quite sparse. However, by considering the fundamental principles of fluid dynamics and imposing various limiting assumptions, a theoretical understanding of atmospheric motions can be developed. This theoretical framework can also be applied to planets within the Solar System and compared against direct observations of these planets, which have been studied more extensively than exoplanets, to validate the theory and understand its limitations as well.
The theoretical framework first considers the Navier–Stokes equations, the governing equations of fluid motion. Then, limiting assumptions are imposed to produce simplified models of fluid motion specific to large scale motion atmospheric dynamics. These equations can then be studied for various conditions (i.e. fast vs. slow planetary rotation rate, stably stratified vs. unstably stratified atmosphere) to see how a planet's characteristics would impact its atmospheric circulation. For example, a planet may fall into one of two regimes based on its rotation rate: geostrophic balance or cyclostrophic balance.
Atmospheric motions
Coriolis force
When considering atmospheric circulation we tend to take the planetary body as the frame of reference. In fact, this is a non-inertial frame of reference which has acceleration due to the planet's rotation about its axis. The Coriolis force is the force that acts on objects moving within the planetary frame of reference, as a result of the planet's rotation. Mathematically, the acceleration due to the Coriolis force can be written as:

a_C = −2 Ω × v

where
v is the flow velocity
Ω is the planet's angular velocity vector
This force acts perpendicular to the flow velocity and to the planet's angular velocity vector, and comes into play when considering the atmospheric motion of a rotat
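Assuming the standard form of the Coriolis acceleration, a = −2 Ω × v, it can be evaluated as a cross product. The rotation rate, latitude, and wind speed below are illustrative:

```python
import numpy as np

# Coriolis acceleration a = -2 * (Omega x v) for an eastward wind at
# 45 degrees north, in local ENU (east, north, up) coordinates.
omega = 7.2921e-5                 # planetary rotation rate, rad/s (Earth-like)
lat = np.deg2rad(45.0)
Omega = omega * np.array([0.0, np.cos(lat), np.sin(lat)])  # ENU components
v = np.array([10.0, 0.0, 0.0])    # 10 m/s eastward flow
a = -2.0 * np.cross(Omega, v)     # deflects the flow to its right (NH)
print(a)
```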
Document 1:::
In fluid dynamics, a secondary circulation or secondary flow is a weak circulation that plays a key maintenance role in sustaining a stronger primary circulation that contains most of the kinetic energy and momentum of a flow. For example, a tropical cyclone's primary winds are tangential (horizontally swirling), but its evolution and maintenance against friction involves an in-up-out secondary circulation flow that is also important to its clouds and rain. On a planetary scale, Earth's winds are mostly east–west or zonal, but that flow is maintained against friction by the Coriolis force acting on a small north–south or meridional secondary circulation.
See also
Hough function
Primitive equations
Secondary flow
Document 2:::
This is a list of meteorology topics. The terms relate to meteorology, the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting. (see also: List of meteorological phenomena)
A
advection
aeroacoustics
aerobiology
aerography (meteorology)
aerology
air parcel (in meteorology)
air quality index (AQI)
airshed (in meteorology)
American Geophysical Union (AGU)
American Meteorological Society (AMS)
anabatic wind
anemometer
annular hurricane
anticyclone (in meteorology)
apparent wind
Atlantic Oceanographic and Meteorological Laboratory (AOML)
Atlantic hurricane season
atmometer
atmosphere
Atmospheric Model Intercomparison Project (AMIP)
Atmospheric Radiation Measurement (ARM)
(atmospheric boundary layer [ABL]) planetary boundary layer (PBL)
atmospheric chemistry
atmospheric circulation
atmospheric convection
atmospheric dispersion modeling
atmospheric electricity
atmospheric icing
atmospheric physics
atmospheric pressure
atmospheric sciences
atmospheric stratification
atmospheric thermodynamics
atmospheric window (see under Threats)
B
ball lightning
balloon (aircraft)
baroclinity
barotropity
barometer ("to measure atmospheric pressure")
berg wind
biometeorology
blizzard
bomb (meteorology)
buoyancy
Bureau of Meteorology (in Australia)
C
Canada Weather Extremes
Canadian Hurricane Centre (CHC)
Cape Verde-type hurricane
capping inversion (in meteorology) (see "severe thunderstorms" in paragraph 5)
carbon cycle
carbon fixation
carbon flux
carbon monoxide (see under Atmospheric presence)
ceiling balloon ("to determine the height of the base of clouds above ground level")
ceilometer ("to determine the height of a cloud base")
celestial coordinate system
celestial equator
celestial horizon (rational horizon)
celestial navigation (astronavigation)
celestial pole
Celsius
Center for Analysis and Prediction of Storms (CAPS) (in Oklahoma in the US)
Center for the Study o
Document 3:::
Zonal and meridional flow are directions and regions of fluid flow on a globe.
Zonal flow follows a pattern along latitudinal lines, latitudinal circles or in the west–east direction.
Meridional flow follows a pattern from north to south, or from south to north, along the Earth's longitude lines, longitudinal circles (meridian) or in the north–south direction.
These terms are often used in the atmospheric and earth sciences to describe global phenomena, such as "meridional wind", or "zonal average temperature".
In the context of physics, zonal flow connotes a tendency of flux to conform to a pattern parallel to the equator of a sphere. In meteorological terms regarding atmospheric circulation, zonal flow brings a temperature contrast along the Earth's longitude. Extratropical cyclones in zonal flows tend to be weaker, move faster, and produce relatively little impact on local weather.
Extratropical cyclones in meridional flows tend to be stronger and move slower. This pattern is responsible for most instances of extreme weather, as not only are storms stronger in this type of flow regime, but temperatures can reach extremes as well, producing heat waves and cold waves depending on the equator-ward or poleward direction of the flow.
For vector fields (such as wind velocity), the zonal component (or x-coordinate) is denoted as u, while the meridional component (or y-coordinate) is denoted as v.
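As a minimal sketch of this u/v convention (assuming a math-style angle measured counterclockwise from east; operational meteorology often uses the "direction the wind blows from" instead), a wind vector can be split into zonal and meridional components:

```python
import math

def wind_components(speed, direction_deg):
    """Decompose a horizontal wind vector into zonal (u, eastward) and
    meridional (v, northward) components. `direction_deg` is assumed to
    be measured counterclockwise from east."""
    theta = math.radians(direction_deg)
    u = speed * math.cos(theta)  # zonal component
    v = speed * math.sin(theta)  # meridional component
    return u, v

# A 10 m/s flow toward the northeast: equal zonal and meridional parts
u, v = wind_components(10.0, 45.0)
print(u, v)
```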
In plasma physics, zonal flow means poloidal flow, which is the opposite of its meaning in planetary atmospheres and weather/climate studies.
See also
Zonal and poloidal
Zonal flow (plasma)
Meridione
Notes
Orientation (geometry)
Document 4:::
The log wind profile is a semi-empirical relationship commonly used to describe the vertical distribution of horizontal mean wind speeds within the lowest portion of the planetary boundary layer. The relationship is well described in the literature.
The logarithmic profile of wind speeds is generally limited to the lowest 100 m of the atmosphere (i.e., the surface layer of the atmospheric boundary layer). The rest of the atmosphere is composed of the remaining part of the planetary boundary layer (up to around 1000 m) and the troposphere or free atmosphere. In the free atmosphere, geostrophic wind relationships should be used.
Definition
The equation to estimate the mean wind speed u(z) at height z (in meters) above the ground is:

u(z) = (u* / κ) [ln((z − d) / z0) + ψ(z/L)]

where u* is the friction velocity (m s−1), κ is the Von Kármán constant (~0.41), d is the zero-plane displacement (in metres), z0 is the surface roughness (in meters), and ψ(z/L) is a stability term, where L is the Obukhov length from Monin–Obukhov similarity theory. Under neutral stability conditions, z/L = 0, ψ drops out, and the equation simplifies to:

u(z) = (u* / κ) ln((z − d) / z0)
Zero-plane displacement () is the height in meters above the ground at which zero mean wind speed is achieved as a result of flow obstacles such as trees or buildings. This displacement can be approximated as 2/3 to 3/4 of the average height of the obstacles. For example, if estimating winds over a forest canopy of height 30 m, the zero-plane displacement could be estimated as d = 20 m.
Roughness length () is a corrective measure to account for the effect of the roughness of a surface on wind flow. That is, the value of the roughness length depends on the terrain. The exact value is subjective and references indicate a range of values, making it difficult to give definitive values. In most cases, references present a tabular format with the value of given for certain terrain descriptions. For example, for very flat terrain (snow, desert) the roughness length may be in the range 0.001 to 0.005 m. Si
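A hedged numerical sketch of the neutral-stability profile above (assuming flat terrain with d = 0 and z0 = 0.005 m, the upper end of the quoted snow/desert range, and an illustrative friction velocity of 0.3 m/s):

```python
import math

VON_KARMAN = 0.41  # Von Karman constant, ~0.41 per the text

def log_wind_speed(z, u_star, d=0.0, z0=0.005):
    """Neutral-stability log wind profile:
    u(z) = (u*/kappa) * ln((z - d) / z0).
    Defaults are illustrative: flat terrain (d = 0) and z0 = 0.005 m."""
    return (u_star / VON_KARMAN) * math.log((z - d) / z0)

# Wind speed increases logarithmically with height above the surface
for z in (2.0, 10.0, 100.0):
    print(f"u({z:5.1f} m) = {log_wind_speed(z, u_star=0.3):.2f} m/s")
```

For the forest example in the text (canopy height 30 m), one would instead set d ≈ 20 m and a correspondingly larger roughness length.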
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the name of the wind belt nearest the equator?
A. tropical gusts
B. cyclones
C. doldrums
D. trade winds
Answer:
|
|
sciq-2059
|
multiple_choice
|
Pyramids of net production and biomass reflect what level of efficiency?
|
[
"high",
"medium",
"low",
"extreme"
] |
C
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 1:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 2:::
Tech City College (Formerly STEM Academy) is a free school sixth form located in the Islington area of the London Borough of Islington, England.
It originally opened in September 2013, as STEM Academy Tech City, and specialised in Science, Technology, Engineering and Maths (STEM) and the Creative Application of Maths and Science. In September 2015, STEM Academy joined the Aspirations Academy Trust and was renamed Tech City College. Tech City College offers A-levels and BTECs as programmes of study for students.
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
The School of Textile and Clothing industries (ESITH) is a Moroccan engineering school, established in 1996, that focuses on textiles and clothing. It was created in collaboration with ENSAIT and ENSISA, as a result of a public private partnership designed to grow a key sector in the Moroccan economy. The partnership was successful and has been used as a model for other schools.
ESITH is the only engineering school in Morocco that provides a comprehensive program in textile engineering with internships for students at the Canadian Group CTT. ESITH offers three programs in industrial engineering: product management, supply chain and logistics, and textile and clothing.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Pyramids of net production and biomass reflect what level of efficiency?
A. high
B. medium
C. low
D. extreme
Answer:
|
|
sciq-7579
|
multiple_choice
|
Some compounds containing hydrogen are members of an important class of substances known as what?
|
[
"acids",
"bases",
"proteins",
"ions"
] |
A
|
Relevant Documents:
Document 0:::
This is an index of lists of molecules (i.e. by year, number of atoms, etc.). Millions of molecules have existed in the universe since before the formation of Earth. Three of them, carbon dioxide, water and oxygen, were necessary for the growth of life. Although humanity had always been surrounded by these substances, it has not always known what they were composed of.
By century
The following is an index of list of molecules organized by time of discovery of their molecular formula or their specific molecule in case of isomers:
List of compounds
By number of carbon atoms in the molecule
List of compounds with carbon number 1
List of compounds with carbon number 2
List of compounds with carbon number 3
List of compounds with carbon number 4
List of compounds with carbon number 5
List of compounds with carbon number 6
List of compounds with carbon number 7
List of compounds with carbon number 8
List of compounds with carbon number 9
List of compounds with carbon number 10
List of compounds with carbon number 11
List of compounds with carbon number 12
List of compounds with carbon number 13
List of compounds with carbon number 14
List of compounds with carbon number 15
List of compounds with carbon number 16
List of compounds with carbon number 17
List of compounds with carbon number 18
List of compounds with carbon number 19
List of compounds with carbon number 20
List of compounds with carbon number 21
List of compounds with carbon number 22
List of compounds with carbon number 23
List of compounds with carbon number 24
List of compounds with carbon numbers 25-29
List of compounds with carbon numbers 30-39
List of compounds with carbon numbers 40-49
List of compounds with carbon numbers 50+
Other lists
List of interstellar and circumstellar molecules
List of gases
List of molecules with unusual names
See also
Molecule
Empirical formula
Chemical formula
Chemical structure
Chemical compound
Chemical bond
Coordination complex
L
Document 1:::
In chemistry, a dihydrogen bond is a kind of hydrogen bond, an interaction between a metal hydride bond and an OH or NH group or other proton donor. With a van der Waals radius of 1.2 Å, hydrogen atoms do not usually approach other hydrogen atoms closer than 2.4 Å. Close approaches near 1.8 Å, are, however, characteristic of dihydrogen bonding.
Boron hydrides
An early example of this phenomenon is credited to Brown and Heseltine. They observed intense absorptions in the IR bands at 3300 and 3210 cm−1 for a solution of (CH3)2NHBH3. The higher energy band is assigned to a normal N−H vibration whereas the lower energy band is assigned to the same bond, which is interacting with the B−H. Upon dilution of the solution, the 3300 cm−1 band increased in intensity and the 3210 cm−1 band decreased, indicative of intermolecular association.
Interest in dihydrogen bonding was reignited upon the crystallographic characterization of the molecule H3NBH3. In this molecule, like the one studied by Brown and Heseltine, the hydrogen atoms on nitrogen have a partial positive charge, denoted Hδ+, and the hydrogen atoms on boron have a partial negative charge, often denoted Hδ−. In other words, the amine is a protic acid and the borane end is hydridic. The resulting B−H...H−N attractions stabilize the molecule as a solid. In contrast, the related substance ethane, H3CCH3, is a gas with a boiling point 285 °C lower. Because two hydrogen centers are involved, the interaction is termed a dihydrogen bond. Formation of a dihydrogen bond is assumed to precede formation of H2 from the reaction of a hydride and a protic acid. A very short dihydrogen bond is observed in NaBH4·2H2O with H−H contacts of 1.79, 1.86, and 1.94 Å.
Coordination chemistry
Protonation of transition metal hydride complexes is generally thought to occur via dihydrogen bonding. This kind of H−H interaction is distinct from the H−H bonding interaction in transition metal complexes having dihydrogen bound to a metal
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
In chemistry, the carbon-hydrogen bond ( bond) is a chemical bond between carbon and hydrogen atoms that can be found in many organic compounds. This bond is a covalent, single bond, meaning that carbon shares its outer valence electrons with up to four hydrogens. This completes both of their outer shells, making them stable.
Carbon–hydrogen bonds have a bond length of about 1.09 Å (1.09 × 10−10 m) and a bond energy of about 413 kJ/mol (see table below). Using Pauling's scale—C (2.55) and H (2.2)—the electronegativity difference between these two atoms is 0.35. Because of this small difference in electronegativities, the bond is generally regarded as being non-polar. In structural formulas of molecules, the hydrogen atoms are often omitted. Compound classes consisting solely of bonds and bonds are alkanes, alkenes, alkynes, and aromatic hydrocarbons. Collectively they are known as hydrocarbons.
In October 2016, astronomers reported that the very basic chemical ingredients of life—the carbon-hydrogen molecule (CH, or methylidyne radical), the carbon-hydrogen positive ion () and the carbon ion ()—are the result, in large part, of ultraviolet light from stars, rather than in other ways, such as the result of turbulent events related to supernovae and young stars, as thought earlier.
Bond length
The length of the carbon-hydrogen bond varies slightly with the hybridisation of the carbon atom. A bond between a hydrogen atom and an sp2 hybridised carbon atom is about 0.6% shorter than between hydrogen and sp3 hybridised carbon. A bond between hydrogen and sp hybridised carbon is shorter still, about 3% shorter than sp3 C-H. This trend is illustrated by the molecular geometry of ethane, ethylene and acetylene.
Reactions
The C−H bond in general is very strong, so it is relatively unreactive. In several compound classes, collectively called carbon acids, the C−H bond can be sufficiently acidic for proton removal. Unactivated C−H bonds are found in alkanes and are no
Document 4:::
This is a list of homological algebra topics, by Wikipedia page.
Basic techniques
Cokernel
Exact sequence
Chain complex
Differential module
Five lemma
Short five lemma
Snake lemma
Nine lemma
Extension (algebra)
Central extension
Splitting lemma
Projective module
Injective module
Projective resolution
Injective resolution
Koszul complex
Exact functor
Derived functor
Ext functor
Tor functor
Filtration (abstract algebra)
Spectral sequence
Abelian category
Triangulated category
Derived category
Applications
Group cohomology
Galois cohomology
Lie algebra cohomology
Sheaf cohomology
Whitehead problem
Homological conjectures in commutative algebra
Homological algebra
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Some compounds containing hydrogen are members of an important class of substances known as what?
A. acids
B. bases
C. proteins
D. ions
Answer:
|
|
sciq-6016
|
multiple_choice
|
The distance between two consecutive z discs or z lines is called what?
|
[
"contractile",
"ligule",
"radius",
"sarcomere"
] |
D
|
Relevant Documents:
Document 0:::
In geometry, a solid of revolution is a solid figure obtained by rotating a plane figure around some straight line (the axis of revolution), which may not intersect the generatrix (except at its boundary). The surface created by this revolution and which bounds the solid is the surface of revolution.
Assuming that the curve does not cross the axis, the solid's volume is equal to the length of the circle described by the figure's centroid multiplied by the figure's area (Pappus's second centroid theorem).
A representative disc is a three-dimensional volume element of a solid of revolution. The element is created by rotating a line segment (of length w) around some axis (located r units away), so that a cylindrical volume of πr²w units is enclosed.
Finding the volume
Two common methods for finding the volume of a solid of revolution are the disc method and the shell method of integration. To apply these methods, it is easiest to draw the graph in question; identify the area that is to be revolved about the axis of revolution; determine the volume of either a disc-shaped slice of the solid, with thickness δx, or a cylindrical shell of width δx; and then find the limiting sum of these volumes as δx approaches 0, a value which may be found by evaluating a suitable integral. A more rigorous justification can be given by attempting to evaluate a triple integral in cylindrical coordinates with two different orders of integration.
Disc method
The disc method is used when the slice that was drawn is perpendicular to the axis of revolution; i.e. when integrating parallel to the axis of revolution.
The volume of the solid formed by rotating the area between the curves of f(x) and g(x) and the lines x = a and x = b about the x-axis is given by

V = π ∫_a^b |f(x)² − g(x)²| dx

If g(x) = 0 (e.g. revolving an area between the curve and the x-axis), this reduces to:

V = π ∫_a^b f(x)² dx

The method can be visualized by considering a thin vertical rectangle at x between f(x) on top and g(x) on the bottom, and revolving it about the x-axis; it forms a ring (or disc in the case
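The disc method can be sanity-checked numerically. The sketch below (a midpoint Riemann sum standing in for the exact integral) recovers the volume of a sphere by revolving f(x) = sqrt(R² − x²) about the x-axis:

```python
import math

def disc_volume(f, a, b, n=100_000):
    """Disc method: V = pi * integral_a^b f(x)^2 dx, approximated by a
    midpoint Riemann sum over n thin discs."""
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx   # midpoint of the i-th slice
        total += f(x) ** 2 * dx  # (volume of one thin disc) / pi
    return math.pi * total

# Revolving f(x) = sqrt(R^2 - x^2) about the x-axis sweeps out a sphere
# of radius R, so the result should approach 4/3 * pi * R^3.
R = 2.0
v = disc_volume(lambda x: math.sqrt(R * R - x * x), -R, R)
print(v, 4.0 / 3.0 * math.pi * R**3)
```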
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
import math
import turtle

# Maurer rose: n and d are example parameter values (not given in the
# original fragment); n sets the petal count, d the angular step.
n = 6
d = 71

pen = turtle.Turtle()
pen.speed(0)

# Inner walk: chords of the rose r = 300*sin(n*k), sampled every d degrees
pen.color('blue')
pen.pensize(0.5)
for theta in range(361):
    k = theta * d * math.pi / 180
    r = 300 * math.sin(n * k)
    x = r * math.cos(k)
    y = r * math.sin(k)
    pen.goto(x, y)

# Outline: the rose curve itself, sampled every degree
pen.color('red')
pen.pensize(4)
for theta in range(361):
    k = theta * math.pi / 180
    r = 300 * math.sin(n * k)
    x = r * math.cos(k)
    y = r * math.sin(k)
    pen.goto(x, y)
This property can be used to offset a Z-value, for example in two dimensions the coordinates to the top (decreasing y), bottom (increasing y)
Document 4:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
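As a small illustrative sketch (assuming the standard definition in which the feasible states of a knowledge space are closed under union; the toy domain and state family below are invented for illustration), one can check that closure property directly:

```python
from itertools import combinations

def is_union_closed(states):
    """Check the closure property of a knowledge space: the union of
    any two feasible knowledge states must itself be feasible."""
    family = {frozenset(s) for s in states}
    return all(a | b in family for a, b in combinations(family, 2))

# Toy domain {a, b, c}, where skill 'b' has 'a' as a prerequisite:
# every listed state containing 'b' also contains 'a'.
states = [set(), {"a"}, {"c"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
print(is_union_closed(states))
```

Dropping a state such as {"a", "c"} from the family breaks the property, since {"a"} ∪ {"c"} would no longer be feasible.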
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The distance between two consecutive z discs or z lines is called what?
A. contractile
B. ligule
C. radius
D. sarcomere
Answer:
|
|
sciq-1420
|
multiple_choice
|
What move the body by contracting against the skeleton?
|
[
"muscles",
"nerves",
"hormones",
"tissues"
] |
A
|
Relevant Documents:
Document 0:::
Myology is the study of the muscular system, including the study of the structure, function and diseases of muscle. The muscular system consists of skeletal muscle, which contracts to move or position parts of the body (e.g., the bones that articulate at joints), smooth and cardiac muscle that propels, expels or controls the flow of fluids and contained substance.
See also
Myotomy
Oral myology
Document 1:::
Kinesiology () is the scientific study of human body movement. Kinesiology addresses physiological, anatomical, biomechanical, pathological, neuropsychological principles and mechanisms of movement. Applications of kinesiology to human health include biomechanics and orthopedics; strength and conditioning; sport psychology; motor control; skill acquisition and motor learning; methods of rehabilitation, such as physical and occupational therapy; and sport and exercise physiology. Studies of human and animal motion include measures from motion tracking systems, electrophysiology of muscle and brain activity, various methods for monitoring physiological function, and other behavioral and cognitive research techniques.
Basics
Kinesiology studies the science of human movement, performance, and function by applying the fundamental sciences of Cell Biology, Molecular Biology, Chemistry, Biochemistry, Biophysics, Biomechanics, Biomathematics, Biostatistics, Anatomy, Physiology, Exercise Physiology, Pathophysiology, Neuroscience, and Nutritional science. A bachelor's degree in kinesiology can provide strong preparation for graduate study in biomedical research, as well as in professional programs, such as medicine, dentistry, physical therapy, and occupational therapy.
The term "kinesiologist" is not a licensed nor professional designation in many countries, with the notable exception of Canada. Individuals with training in this area can teach physical education, work as personal trainers and sport coaches, provide consulting services, conduct research and develop policies related to rehabilitation, human motor performance, ergonomics, and occupational health and safety. In North America, kinesiologists may study to earn a Bachelor of Science, Master of Science, or Doctorate of Philosophy degree in Kinesiology or a Bachelor of Kinesiology degree, while in Australia or New Zealand, they are often conferred an Applied Science (Human Movement) degree (or higher). Many doctor
Document 2:::
Neuro Biomechanics is based upon the research of bioengineering researchers, neurosurgeons, orthopedic surgeons, and biomechanists. Neuro Biomechanics is utilized by neurosurgeons, orthopedic surgeons, and primarily by integrated physical medicine practitioners. Practitioners focus on aiding people in restoring the biomechanics of the skeletal system in order to measurably improve nervous system function, health, and quality of life, and to reduce pain and the progression of degenerative joint and disc disease.
Neuro: of or having to do with the nervous system. Nervous system: An organ system that coordinates the activities of muscles, monitors organs, constructs and processes data received from the senses and initiates actions. The human nervous system coordinates the functions of itself and all organ systems including but not limited to the cardiovascular system, respiratory system, skin, digestive system, immune system, hormonal, metabolic, musculoskeletal, endocrine system, blood and reproductive system. Optimal function of the organism as a whole depends upon the proper function of the nervous system.
Biomechanics: (biology, physics) The branch of biophysics that deals with the mechanics of the human or animal body; especially concerned with muscles and the skeleton. The study of biomechanical influences upon nervous system function and load bearing joints.
Research:
Research on established ideal mechanical models for the human locomotor system.
Panjabi MM, Journal of Biomechanics, 1974. A note on defining body parts configurations
Gracovetsky S. Spine 1986; The Optimum Spine
Yoganandan, Spine 1996
Harrison. Spine 2004 Modeling of the Sagittal Cervical Spine as a Method to Discriminate Hypolordosis: Results of Elliptical and Circular Modeling in 72 Asymptomatic Subjects, 52 Acute Neck Pain Subjects, and 70 Chronic Neck Pain Subjects; Spine 2004
Panjabi et al. Spine 1997 Whiplash produces an S-shaped curve...
Harrision DE, JMPT 2003, Increasing
Document 3:::
Proprioception ( ), also called kinaesthesia (or kinesthesia), is the sense of self-movement, force, and body position.
Proprioception is mediated by proprioceptors, mechanosensory neurons located within muscles, tendons, and joints. Most animals possess multiple subtypes of proprioceptors, which detect distinct kinematic parameters, such as joint position, movement, and load. Although all mobile animals possess proprioceptors, the structure of the sensory organs can vary across species.
Proprioceptive signals are transmitted to the central nervous system, where they are integrated with information from other sensory systems, such as the visual system and the vestibular system, to create an overall representation of body position, movement, and acceleration. In many animals, sensory feedback from proprioceptors is essential for stabilizing body posture and coordinating body movement.
System overview
In vertebrates, limb movement and velocity (muscle length and the rate of change) are encoded by one group of sensory neurons (type Ia sensory fiber) and another type encode static muscle length (group II neurons). These two types of sensory neurons compose muscle spindles. There is a similar division of encoding in invertebrates; different subgroups of neurons of the Chordotonal organ encode limb position and velocity.
To determine the load on a limb, vertebrates use sensory neurons in the Golgi tendon organs: type Ib afferents. These proprioceptors are activated at given muscle forces, which indicate the resistance that muscle is experiencing. Similarly, invertebrates have a mechanism to determine limb load: the Campaniform sensilla. These proprioceptors are active when a limb experiences resistance.
A third role for proprioceptors is to determine when a joint is at a specific position. In vertebrates, this is accomplished by Ruffini endings and Pacinian corpuscles. These proprioceptors are activated when the joint is at a threshold position, usually at the extre
Document 4:::
The American Society of Biomechanics (ASB) is a scholarly society that focuses on biomechanics across a variety of academic fields. It was founded in 1977 by a group of scientists and clinicians. The ASB holds an annual conference as an arena to disseminate and learn about the most recent progress in the field, to distribute awards to recognize excellent work, and to engage in public outreach to expand the impact of its members.
Conferences
The society hosts an annual conference that takes place in North America (usually USA). These conferences are periodically joint conferences held in conjunction with the International Society of Biomechanics (ISB), the North American Congress on Biomechanics (NACOB), and the World Congress of Biomechanics (WCB). The annual conference, when not partnered with another conference, receives around 700 to 800 abstract submissions per year, with attendees in approximately the same numbers. The first conference was held in 1977.
Often, work presented at these conferences achieves media attention due to the ‘public interest’ nature of the findings or that new devices are introduced there. Examples include:
the effect of tablet reading on cervical spine posture;
the squeak of the basketball shoe;
‘underwear’ to address back-pain;
recovery after exercise;
exoskeleton boots for joint pain during exercise;
how flamingos stand on one leg.
National Biomechanics Day
The ASB is instrumental in promoting National Biomechanics Day (NBD), which has received international recognition.
In New Zealand, Massey University attracted NZ$48,000 of national funding through the Unlocking Curious Minds programme to promote National Biomechanics Day, with the aim to engage 1,100 students from lower-decile schools in an experiential learning day focused on the science of biomechanics.
It was first held in 2016 on April 7, and consisted of ‘open house’ visits from middle and high school students to biomechanics research and teaching laboratories a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What moves the body by contracting against the skeleton?
A. muscles
B. nerves
C. hormones
D. tissues
Answer:
|
|
ai2_arc-301
|
multiple_choice
|
A ball is dropped from different heights. When the ball is dropped from the highest height, it makes the greatest noise or vibration when it lands on the ground. What is the best explanation for the ball making the greatest noise?
|
[
"The air pushes down more and the ball goes faster.",
"Gravity pulls for a longer time and the ball goes faster.",
"The ball is gaining weight and going faster.",
"The ball is warming up and going faster."
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 2:::
Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams.
Course content
Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are:
Kinematics
Newton's laws of motion
Work, energy and power
Systems of particles and linear momentum
Circular motion and rotation
Oscillations and gravitation.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class.
This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals.
This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday aftern
Document 3:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of
Document 4:::
To hear the shape of a drum is to infer information about the shape of the drumhead from the sound it makes, i.e., from the list of overtones, via the use of mathematical theory.
"Can One Hear the Shape of a Drum?" is the title of a 1966 article by Mark Kac in the American Mathematical Monthly which made the question famous, though this particular phrasing originates with Lipman Bers. Similar questions can be traced back all the way to physicist Arthur Schuster in 1882. For his paper, Kac was given the Lester R. Ford Award in 1967 and the Chauvenet Prize in 1968.
The frequencies at which a drumhead can vibrate depend on its shape. The Helmholtz equation calculates the frequencies if the shape is known. These frequencies are the eigenvalues of the Laplacian in the space. A central question is whether the shape can be predicted if the frequencies are known; for example, whether a Reuleaux triangle can be recognized in this way. Kac admitted that he did not know whether it was possible for two different shapes to yield the same set of frequencies. The question of whether the frequencies determine the shape was finally answered in the negative in the early 1990s by Gordon, Webb and Wolpert.
Formal statement
More formally, the drum is conceived as an elastic membrane whose boundary is clamped. It is represented as a domain D in the plane. Denote by λn the Dirichlet eigenvalues for D: that is, the eigenvalues of the Dirichlet problem for the Laplacian:
Δu + λn u = 0 in D, with u = 0 on the boundary of D.
Two domains are said to be isospectral (or homophonic) if they have the same eigenvalues. The term "homophonic" is justified because the Dirichlet eigenvalues are precisely the fundamental tones that the drum is capable of producing: they appear naturally as Fourier coefficients in the solution wave equation with clamped boundary.
Therefore, the question may be reformulated as: what can be inferred on D if one knows only the values of λn? Or, more specifically: are there two distinct domains that are isospectral?
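For the special case of a rectangular drum the Dirichlet eigenvalues are known in closed form, λmn = π²(m²/a² + n²/b²), which makes the question concrete. A minimal sketch (the rectangle dimensions are illustrative):

```python
import math

def dirichlet_eigenvalues(a, b, n_max=3):
    """Dirichlet eigenvalues of the Laplacian on an a-by-b rectangle:
    lambda_{mn} = pi^2 * (m^2/a^2 + n^2/b^2), for m, n >= 1."""
    eigs = [math.pi**2 * (m**2 / a**2 + n**2 / b**2)
            for m in range(1, n_max + 1)
            for n in range(1, n_max + 1)]
    return sorted(eigs)

# Two rectangles of equal area but different shape have different
# spectra, so among rectangles the shape IS determined by the sound.
print(dirichlet_eigenvalues(1.0, 2.0)[:4])
print(dirichlet_eigenvalues(1.0, 1.0)[:4])
```

Gordon, Webb and Wolpert's counterexamples are non-rectangular polygons, which is why the closed-form check above cannot settle the general question.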
Rel
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A ball is dropped from different heights. When the ball is dropped from the highest height, it makes the greatest noise or vibration when it lands on the ground. What is the best explanation for the ball making the greatest noise?
A. The air pushes down more and the ball goes faster.
B. Gravity pulls for a longer time and the ball goes faster.
C. The ball is gaining weight and going faster.
D. The ball is warming up and going faster.
Answer:
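The correct choice (B) follows from elementary kinematics: a higher drop means gravity acts for a longer time, so the ball lands faster and with more kinetic energy. A minimal sketch of the relevant formulas (g = 9.81 m/s² is the usual illustrative value):

```python
import math

G = 9.81  # m/s^2, standard gravity (illustrative value)

def fall_time(height_m):
    """Time gravity acts during a drop from rest: t = sqrt(2*h/g)."""
    return math.sqrt(2 * height_m / G)

def impact_speed(height_m):
    """Speed just before landing for a drop from rest: v = sqrt(2*g*h)."""
    return math.sqrt(2 * G * height_m)

# Higher drop -> longer fall time -> greater impact speed (and noise).
for h in (1.0, 2.0, 4.0):
    print(f"h={h} m: t={fall_time(h):.2f} s, v={impact_speed(h):.2f} m/s")
```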
|
|
sciq-2839
|
multiple_choice
|
A lymphocyte is which type of cell involved in an immune system response?
|
[
"white brain cell",
"red blood cell",
"white blood cell",
"white immunity cell"
] |
C
|
Relevant Documents:
Document 0:::
This is a list of Immune cells, also known as white blood cells, white cells, leukocytes, or leucocytes. They are cells involved in protecting the body against both infectious disease and foreign invaders.
Document 1:::
A lymphocyte is a type of white blood cell (leukocyte) in the immune system of most vertebrates. Lymphocytes include T cells (for cell-mediated, cytotoxic adaptive immunity), B cells (for humoral, antibody-driven adaptive immunity), and Innate lymphoid cells (ILCs) ("innate T cell-like" cells involved in mucosal immunity and homeostasis), of which natural killer cells are an important subtype (which functions in cell-mediated, cytotoxic innate immunity). They are the main type of cell found in lymph, which prompted the name "lymphocyte" (with cyte meaning cell). Lymphocytes make up between 18% and 42% of circulating white blood cells.
Types
The three major types of lymphocyte are T cells, B cells and natural killer (NK) cells. Lymphocytes can be identified by their large nucleus.
T cells and B cells
T cells (thymus cells) and B cells (bone marrow- or bursa-derived cells) are the major cellular components of the adaptive immune response. T cells are involved in cell-mediated immunity, whereas B cells are primarily responsible for humoral immunity (relating to antibodies). The function of T cells and B cells is to recognize specific "non-self" antigens, during a process known as antigen presentation. Once they have identified an invader, the cells generate specific responses that are tailored maximally to eliminate specific pathogens or pathogen-infected cells. B cells respond to pathogens by producing large quantities of antibodies which then neutralize foreign objects like bacteria and viruses. In response to pathogens some T cells, called T helper cells, produce cytokines that direct the immune response, while other T cells, called cytotoxic T cells, produce toxic granules that contain powerful enzymes which induce the death of pathogen-infected cells. Following activation, B cells and T cells leave a lasting legacy of the antigens they have encountered, in the form of memory cells. Throughout the lifetime of an animal, these memory cells will "remember" each s
Document 2:::
White blood cells, also called leukocytes, immune cells, or immunocytes, are cells of the immune system that are involved in protecting the body against both infectious disease and foreign invaders. White blood cells include three main subtypes: granulocytes, lymphocytes, and monocytes.
All white blood cells are produced and derived from multipotent cells in the bone marrow known as hematopoietic stem cells. Leukocytes are found throughout the body, including the blood and lymphatic system. All white blood cells have nuclei, which distinguishes them from the other blood cells, the anucleated red blood cells (RBCs) and platelets. The different white blood cells are usually classified by cell lineage (myeloid cells or lymphoid cells). White blood cells are part of the body's immune system. They help the body fight infection and other diseases. Types of white blood cells are granulocytes (neutrophils, eosinophils, and basophils), and agranulocytes (monocytes, and lymphocytes (T cells and B cells)). Myeloid cells (myelocytes) include neutrophils, eosinophils, mast cells, basophils, and monocytes. Monocytes are further subdivided into dendritic cells and macrophages. Monocytes, macrophages, and neutrophils are phagocytic. Lymphoid cells (lymphocytes) include T cells (subdivided into helper T cells, memory T cells, cytotoxic T cells), B cells (subdivided into plasma cells and memory B cells), and natural killer cells. Historically, white blood cells were classified by their physical characteristics (granulocytes and agranulocytes), but this classification system is less frequently used now. Produced in the bone marrow, white blood cells defend the body against infections and disease. An excess of white blood cells is usually due to infection or inflammation. Less commonly, a high white blood cell count could indicate certain blood cancers or bone marrow disorders.
The number of leukocytes in the blood is often an indicator of disease, and thus the white blood
Document 3:::
T cells are one of the important types of white blood cells of the immune system and play a central role in the adaptive immune response. T cells can be distinguished from other lymphocytes by the presence of a T-cell receptor (TCR) on their cell surface.
T cells are born from hematopoietic stem cells, found in the bone marrow. Developing T cells then migrate to the thymus gland to develop (or mature). T cells derive their name from the thymus. After migration to the thymus, the precursor cells mature into several distinct types of T cells. T cell differentiation also continues after they have left the thymus. Groups of specific, differentiated T cell subtypes have a variety of important functions in controlling and shaping the immune response.
One of these functions is immune-mediated cell death, and it is carried out by two major subtypes: CD8+ "killer" (cytotoxic) and CD4+ "helper" T cells. (These are named for the presence of the cell surface proteins CD8 or CD4.) CD8+ T cells, also known as "killer T cells", are cytotoxic – this means that they are able to directly kill virus-infected cells, as well as cancer cells. CD8+ T cells are also able to use small signalling proteins, known as cytokines, to recruit other types of cells when mounting an immune response. A different population of T cells, the CD4+ T cells, function as "helper cells". Unlike CD8+ killer T cells, the CD4+ helper T (TH) cells function by further activating memory B cells and cytotoxic T cells, which leads to a larger immune response. The specific adaptive immune response regulated by the TH cell depends on its subtype (such as T-helper1, T-helper2, T-helper17, regulatory T-cell), which is distinguished by the types of cytokines they secrete.
Regulatory T cells are yet another distinct population of T cells that provide the critical mechanism of tolerance, whereby immune cells are able to distinguish invading cells from "self". This prevents immune cells from inappropriately reacting again
Document 4:::
The pluripotency of biological compounds describes the ability of certain substances to produce several distinct biological responses. Pluripotent is also described as something that has no fixed developmental potential, as in being able to differentiate into different cell types in the case of pluripotent stem cells.
One type of pluripotent cell, called a hematopoietic stem cell, can differentiate into a large variety of cells with different functions. This stem cell can produce red blood cells, platelets, mast cells, dendritic cells, macrophages, lymphocytes, neutrophils, basophils, and eosinophils. Each of these cells have a different function, but they all work together as part of the immune system.
Monocytes can differentiate into either dendritic cells or macrophages. Macrophages are covered with chemical receptors and phagocytose foreign particles, but are specific about what immune responses to be involved in. Dendritic cells phagocytose invaders; then they present the antigen on their surface to stimulate the acquired immune system (lymphocytes) as backup.
Another example are lymphocytes called naïve T-helper cells. These cells can differentiate into many subtypes once activated by antigen presenting cells (APCs) like dendrites. They divide into memory cells, TH1, TH17, and TH2 cells, to name a few. Memory cells are made solely for the purpose of having a template to use in the case of reinfection so the body has a jump start instead of starting over as if never infected. TH17 cells do a variety of tasks including recruiting neutrophils, creating defensins, and mediating inflammation in the intestinal epithelium and skin. TH2 cells produce cytokines that will trigger certain B cells. B cells can differentiate into memory cells or plasma cells. The B plasma cells produce the antibodies that are used to tag invading cells so they can be attacked, among other functions. TH1 cells are created to make cytokines, like interferon gamma, that activate macrophage
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A lymphocyte is which type of cell involved in an immune system response?
A. white brain cell
B. red blood cell
C. white blood cell
D. white immunity cell
Answer:
|
|
sciq-8415
|
multiple_choice
|
Hydrocarbons in which all carbons are connected by single bonds are called?
|
[
"acids",
"enzymes",
"lipids",
"alkanes"
] |
D
|
Relevant Documents:
Document 0:::
A carbon–carbon bond is a covalent bond between two carbon atoms. The most common form is the single bond: a bond composed of two electrons, one from each of the two atoms. The carbon–carbon single bond is a sigma bond and is formed between one hybridized orbital from each of the carbon atoms. In ethane, the orbitals are sp3-hybridized orbitals, but single bonds formed between carbon atoms with other hybridizations do occur (e.g. sp2 to sp2). In fact, the carbon atoms in the single bond need not be of the same hybridization. Carbon atoms can also form double bonds in compounds called alkenes or triple bonds in compounds called alkynes. A double bond is formed with an sp2-hybridized orbital and a p-orbital that is not involved in the hybridization. A triple bond is formed with an sp-hybridized orbital and two p-orbitals from each atom. The use of the p-orbitals forms a pi bond.
Chains and branching
Carbon is one of the few elements that can form long chains of its own atoms, a property called catenation. This coupled with the strength of the carbon–carbon bond gives rise to an enormous number of molecular forms, many of which are important structural elements of life, so carbon compounds have their own field of study: organic chemistry.
Branching is also common in C−C skeletons. Carbon atoms in a molecule are categorized by the number of carbon neighbors they have:
A primary carbon has one carbon neighbor.
A secondary carbon has two carbon neighbors.
A tertiary carbon has three carbon neighbors.
A quaternary carbon has four carbon neighbors.
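The neighbor-count classification above is mechanical enough to sketch in code. The adjacency lists below (the carbon skeleton of isobutane, (CH3)3CH) are illustrative:

```python
# Classify each carbon in a C-C skeleton by its number of carbon neighbors.
NAMES = {1: "primary", 2: "secondary", 3: "tertiary", 4: "quaternary"}

def classify_carbons(adjacency):
    """adjacency maps each carbon label to the list of carbons bonded to it."""
    return {c: NAMES[len(nbrs)] for c, nbrs in adjacency.items()}

# Isobutane: three CH3 groups bonded to one central carbon.
isobutane = {
    "C1": ["C2"], "C3": ["C2"], "C4": ["C2"],
    "C2": ["C1", "C3", "C4"],
}
print(classify_carbons(isobutane))
# The three methyl carbons are primary; the central carbon is tertiary.
```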
In "structurally complex organic molecules", it is the three-dimensional orientation of the carbon–carbon bonds at quaternary loci which dictates the shape of the molecule. Further, quaternary loci are found in many biologically active small molecules, such as cortisone and morphine.
Synthesis
Carbon–carbon bond-forming reactions are organic reactions in which a new carbon–carbon bond is formed. They are important in th
Document 1:::
Bisnorhopanes (BNH) are a group of demethylated hopanes found in oil shales across the globe and can be used for understanding depositional conditions of the source rock. The most common member, 28,30-bisnorhopane, can be found in high concentrations in petroleum source rocks, most notably the Monterey Shale, as well as in oil and tar samples. 28,30-Bisnorhopane was first identified in samples from the Monterey Shale Formation in 1985. It occurs in abundance throughout the formation and appears in stratigraphically analogous locations along the California coast. Since its identification and analysis, 28,30-bisnorhopane has been discovered in oil shales around the globe, including lacustrine and offshore deposits of Brazil, silicified shales of the Eocene in Gabon, the Kimmeridge Clay Formation in the North Sea, and in Western Australian oil shales.
Chemistry
28,30-bisnorhopane exists in three epimers: 17α,18α21β(H), 17β,18α,21α(H), and 17β,18α,21β(H). During GC-MS, the three epimers coelute at the same time and are nearly indistinguishable. However, mass spectral fragmentation of the 28,30-bisnorhopane is predominantly characterized by m/z 191, 177, and 163. The ratios of 163/191 fragments can be used to distinguish the epimers, where the βαβ orientation has the highest, m/z 163/191 ratio. Further, the D/E ring ratios can be used to create a hierarchy of epimer maturity. From this, it is believed that the ααβ epimer is the first-formed, diagenetically, supported also by its percent dominance in younger shales. 28,30-bisnorhopane is created independently from kerogen, instead derived from bitumen, unbound as free oil-hydrocarbons. As such, as oil generation increases with source maturation, the concentration of 28,30-bisnorhopane decreases. Bisnorhopane may not be a reliable diagnostic for oil maturity due to microbial biodegradation.
Nomenclature
Norhopanes are a family of demethylated hopanes, identical to the methylated hopane structure, minus indicated desmet
Document 2:::
In chemistry, the carbon-hydrogen bond ( bond) is a chemical bond between carbon and hydrogen atoms that can be found in many organic compounds. This bond is a covalent, single bond, meaning that carbon shares its outer valence electrons with up to four hydrogens. This completes both of their outer shells, making them stable.
Carbon–hydrogen bonds have a bond length of about 1.09 Å (1.09 × 10−10 m) and a bond energy of about 413 kJ/mol (see table below). Using Pauling's scale—C (2.55) and H (2.2)—the electronegativity difference between these two atoms is 0.35. Because of this small difference in electronegativities, the bond is generally regarded as being non-polar. In structural formulas of molecules, the hydrogen atoms are often omitted. Compound classes consisting solely of bonds and bonds are alkanes, alkenes, alkynes, and aromatic hydrocarbons. Collectively they are known as hydrocarbons.
In October 2016, astronomers reported that the very basic chemical ingredients of life—the carbon-hydrogen molecule (CH, or methylidyne radical), the carbon-hydrogen positive ion () and the carbon ion ()—are the result, in large part, of ultraviolet light from stars, rather than in other ways, such as the result of turbulent events related to supernovae and young stars, as thought earlier.
Bond length
The length of the carbon-hydrogen bond varies slightly with the hybridisation of the carbon atom. A bond between a hydrogen atom and an sp2 hybridised carbon atom is about 0.6% shorter than between hydrogen and sp3 hybridised carbon. A bond between hydrogen and sp hybridised carbon is shorter still, about 3% shorter than sp3 C-H. This trend is illustrated by the molecular geometry of ethane, ethylene and acetylene.
Reactions
The C−H bond in general is very strong, so it is relatively unreactive. In several compound classes, collectively called carbon acids, the C−H bond can be sufficiently acidic for proton removal. Unactivated C−H bonds are found in alkanes and are no
Document 3:::
Glycerol dialkyl glycerol tetraether lipids (GDGTs) are a class of membrane lipids synthesized by archaea and some bacteria, making them useful biomarkers for these organisms in the geological record. Their presence, structure, and relative abundances in natural materials can be useful as proxies for temperature, terrestrial organic matter input, and soil pH for past periods in Earth history. Some structural forms of GDGT form the basis for the TEX86 paleothermometer. Isoprenoid GDGTs, now known to be synthesized by many archaeal classes, were first discovered in extremophilic archaea cultures. Branched GDGTs, likely synthesized by acidobacteriota, were first discovered in a natural Dutch peat sample in 2000.
Chemical structure
The two primary structural classes of GDGTs are isoprenoid (isoGDGT) and branched (brGDGT), which refer to differences in the carbon skeleton structures. Isoprenoid compounds are numbered -0 through -8, with the numeral representing the number of cyclopentane rings present within the carbon skeleton structure. The exception is crenarchaeol, a Nitrososphaerota product with one cyclohexane ring moiety in addition to four cyclopentane rings. Branched GDGTs have zero, one, or two cyclopentane moieties and are further classified based the positioning of their branches. They are numbered with roman numerals and letters, with -I indicating structures with four modifications (i.e. either a branch or a cyclopentane moiety), -II indicating structures with five modifications, and -III indicating structures with six modifications. The suffix a after the roman numeral means one of its modifications is a cyclopentane moiety; b means two modifications are cyclopentane moieties. For example, GDGT-IIb is a compound with three branches and two cyclopentane moieties (a total of five modifications). GDGTs form as monolayers and with ether bonds to glycerol, as opposed to as bilayers and with ester bonds as is the case in eukaryotes and most bacteria.
Biologi
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
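The adiabatic-expansion question above has a quantitative counterpart: for a reversible adiabatic process in an ideal gas, T·V^(γ−1) is constant, so expansion lowers the temperature. The sketch below uses γ = 1.4 (a diatomic gas) and illustrative initial conditions; none of these numbers come from the source.

```python
# Check the conceptual answer numerically: doubling the volume of an
# ideal diatomic gas adiabatically must lower its temperature, since
# T * V**(gamma - 1) is constant for a reversible adiabatic process.

def adiabatic_temperature(T1, V1, V2, gamma=1.4):
    """Final temperature after a reversible adiabatic volume change."""
    return T1 * (V1 / V2) ** (gamma - 1)

T2 = adiabatic_temperature(300.0, 1.0, 2.0)  # double the volume
print(round(T2, 1))  # about 227.4 K, i.e. the temperature decreases
```

The numerical result confirms the conceptual answer: temperature decreases on adiabatic expansion.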
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Hydrocarbons in which all carbons are connected by single bonds are called?
A. acids
B. enzymes
C. lipids
D. alkanes
Answer:
|
|
scienceQA-5735
|
multiple_choice
|
What do these two changes have in common?
stapling an envelope shut
erosion caused by wind
|
[
"Both are chemical changes.",
"Both are only physical changes.",
"Both are caused by heating.",
"Both are caused by cooling."
] |
B
|
Step 1: Think about each change.
Stapling an envelope shut is a physical change. The envelope and the staple get new shapes. Both are still made of the same type of matter.
Erosion caused by wind is a physical change. The wind carries away tiny pieces of rock. But the pieces of rock do not become a different type of matter.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are physical changes. No new matter is created.
Both are chemical changes.
Both changes are physical changes. They are not chemical changes.
Both are caused by heating.
Neither change is caused by heating.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relevant Documents:
Document 0:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 3:::
Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria.
Introduction
Traditional exam script marking began in Cambridge in 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.)
Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental.
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel
Document 4:::
At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm.
For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of the sorption process, the isotherms cannot be determined explicitly by calculation, but must be recorded experimentally for each product.
The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system.
BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption and finally leading to capillary condensation, explaining these materials' high moisture capacity at high relative humidity.
Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the
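The BET isotherm mentioned above has a standard closed form: v = v_m·c·x / [(1 − x)(1 + (c − 1)x)], where x = p/p₀ is relative pressure, v_m the monolayer capacity, and c the BET constant. The sketch below uses illustrative parameter values not taken from the source.

```python
# Sketch of the BET multilayer-adsorption isotherm; v_m (monolayer
# capacity) and c (BET constant) are illustrative values, not from
# the source. x = p / p0 is the relative pressure.

def bet_loading(x, v_m=1.0, c=50.0):
    """Amount adsorbed at relative pressure x per the BET equation."""
    return v_m * c * x / ((1.0 - x) * (1.0 + (c - 1.0) * x))

for x in (0.1, 0.3, 0.6, 0.9):
    print(round(bet_loading(x), 2))
```

Plotting these values reproduces the qualitative Type II shape: rapid monolayer filling at low relative pressure, then a steep multilayer rise as x approaches 1.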
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
stapling an envelope shut
erosion caused by wind
A. Both are chemical changes.
B. Both are only physical changes.
C. Both are caused by heating.
D. Both are caused by cooling.
Answer:
|
sciq-1205
|
multiple_choice
|
Worms grow to adult size without going through what stage?
|
[
"egg",
"development",
"larval",
"growth"
] |
C
|
Relevant Documents:
Document 0:::
Direct development is a concept in biology. It refers to forms of growth to adulthood that do not involve metamorphosis. An animal undergoes direct development if the immature organism resembles a small adult rather than having a distinct larval form. A frog that hatches out of its egg as a small frog undergoes direct development. A frog that hatches out of its egg as a tadpole does not.
Direct development is the opposite of complete metamorphosis. An animal undergoes complete metamorphosis if it becomes a non-moving thing, for example a pupa in a cocoon, between its larval and adult stages.
Examples
Most frogs in the genus Callulina hatch out of their eggs as froglets.
Springtails and silverfish, called ametabolous insects, undergo direct development.
Document 1:::
A juvenile is an individual organism (especially an animal) that has not yet reached its adult form, sexual maturity or size. Juveniles can look very different from the adult form, particularly in colour, and may not fill the same niche as the adult form. In many organisms the juvenile has a different name from the adult (see List of animal names).
Some organisms reach sexual maturity in a short metamorphosis, such as ecdysis in many insects and some other arthropods. For others, the transition from juvenile to fully mature is a more prolonged process—puberty in humans and other species (like higher primates and whales), for example. In such cases, juveniles during this transformation are sometimes called subadults.
Many invertebrates cease development upon reaching adulthood. The stages of such invertebrates are larvae or nymphs.
In vertebrates and some invertebrates (e.g. spiders), larval forms (e.g. tadpoles) are usually considered a development stage of their own, and "juvenile" refers to a post-larval stage that is not fully grown and not sexually mature. In amniotes, the embryo represents the larval stage. Here, a "juvenile" is an individual in the time between hatching/birth/germination and reaching maturity.
Examples
For animal larval juveniles, see larva
Juvenile birds or bats can be called fledglings
For cat juveniles, see kitten
For dog juveniles, see puppy
For human juvenile life stages, see childhood and adolescence, an intermediary period between the onset of puberty and full physical, psychological, and social adulthood
Document 2:::
Sexual maturity is the capability of an organism to reproduce. In humans, it is related to both puberty and adulthood. However, puberty is the process of biological sexual maturation, while the concept of adulthood is generally based on broader cultural definitions.
Most multicellular organisms are unable to sexually reproduce at birth (animals) or germination (e.g. plants): depending on the species, it may be days, weeks, or years until they have developed enough to be able to do so. Also, certain cues may trigger an organism to become sexually mature. They may be external, such as drought (certain plants), or internal, such as percentage of body fat (certain animals). (Such internal cues are not to be confused with hormones, which directly produce sexual maturity – the production/release of those hormones is triggered by such cues.)
Role of reproductive organs
Sexual maturity is brought about by a maturing of the reproductive organs and the production of gametes. It may also be accompanied by a growth spurt or other physical changes which distinguish the immature organism from its adult form. In animals these are termed secondary sex characteristics, and often represent an increase in sexual dimorphism.
After sexual maturity is achieved, some organisms become infertile, or even change their sex. Some organisms are hermaphrodites and may or may not be able to "completely" mature and/or to produce viable offspring. Also, while in many organisms sexual maturity is strongly linked to age, many other factors are involved, and it is possible for some to display most or all of the characteristics of the adult form without being sexually mature. Conversely it is also possible for the "immature" form of an organism to reproduce. This is called progenesis, in which sexual development occurs faster than other physiological development (in contrast, the term neoteny refers to when non-sexual development is slowed – but the result is the same - the retention of juvenile c
Document 3:::
This glossary of developmental biology is a list of definitions of terms and concepts commonly used in the study of developmental biology and related disciplines in biology, including embryology and reproductive biology, primarily as they pertain to vertebrate animals and particularly to humans and other mammals. The developmental biology of invertebrates, plants, fungi, and other organisms is treated in other articles; e.g. terms relating to the reproduction and development of insects are listed in Glossary of entomology, and those relating to plants are listed in Glossary of botany.
This glossary is intended as introductory material for novices; for more specific and technical detail, see the article corresponding to each term. Additional terms relevant to vertebrate reproduction and development may also be found in Glossary of biology, Glossary of cell biology, Glossary of genetics, and Glossary of evolutionary biology.
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
X
Y
Z
See also
Introduction to developmental biology
Outline of developmental biology
Outline of cell biology
Glossary of biology
Glossary of cell biology
Glossary of genetics
Glossary of evolutionary biology
Document 4:::
A worm cast is a structure created by worms, typically on soils such as those on beaches that gives the appearance of multiple worms. They are also used to trace the location of one or more worms.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Worms grow to adult size without going through what stage?
A. egg
B. development
C. larval
D. growth
Answer:
|
|
sciq-2554
|
multiple_choice
|
What is the minimum depth in the aphotic zone?
|
[
"200 meters",
"10 meters",
"450 meters",
"150 meters"
] |
A
|
Relevant Documents:
Document 0:::
Advanced Placement (AP) Physics B was a physics course administered by the College Board as part of its Advanced Placement program. It was equivalent to a year-long introductory university course covering Newtonian mechanics, electromagnetism, fluid mechanics, thermal physics, waves, optics, and modern physics. The course was algebra-based and heavily computational; in 2015, it was replaced by the more concept-focused AP Physics 1 and AP Physics 2.
Exam
The exam consisted of a 70-question multiple-choice section, followed by a section of 6-7 free-response questions. Each section lasted 90 minutes and was worth 50% of the final score. The MCQ section banned calculators, while the FRQ section allowed calculators and a list of common formulas. Overall, the exam was configured to approximately cover a set percentage of each of the five target categories:
Purpose
According to the College Board web site, the Physics B course provided "a foundation in physics for students in the life sciences, a pre medical career path, and some applied sciences, as well as other fields not directly related to science."
Discontinuation
Starting in the 2014–2015 school year, AP Physics B was no longer offered, and AP Physics 1 and AP Physics 2 took its place. Like AP Physics B, both are algebra-based, and both are designed to be taught as year-long courses.
Grade distribution
The grade distributions for the Physics B scores from 2010 until its discontinuation in 2014 are as follows:
Document 1:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover )
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, )
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson )
See also
Glossary of biology
A.P Bio (TV Show)
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 3:::
Advanced Placement (AP) Calculus (also known as AP Calc, Calc AB / Calc BC or simply AB / BC) is a set of two distinct Advanced Placement calculus courses and exams offered by the American nonprofit organization College Board. AP Calculus AB covers basic introductions to limits, derivatives, and integrals. AP Calculus BC covers all AP Calculus AB topics plus additional topics (including integration by parts, Taylor series, parametric equations, vector calculus, and polar coordinate functions).
AP Calculus AB
AP Calculus AB is an Advanced Placement calculus course. It is traditionally taken after precalculus and is the first calculus course offered at most schools except for possibly a regular calculus class. The Pre-Advanced Placement pathway for math helps prepare students for further Advanced Placement classes and exams.
Purpose
According to the College Board:
Topic outline
The material includes the study and application of differentiation and integration, and graphical analysis including limits, asymptotes, and continuity. An AP Calculus AB course is typically equivalent to one semester of college calculus.
Analysis of graphs (predicting and explaining behavior)
Limits of functions (one and two sided)
Asymptotic and unbounded behavior
Continuity
Derivatives
Concept
At a point
As a function
Applications
Higher order derivatives
Techniques
Integrals
Interpretations
Properties
Applications
Techniques
Numerical approximations
Fundamental theorem of calculus
Antidifferentiation
L'Hôpital's rule
Separable differential equations
AP Calculus BC
AP Calculus BC is equivalent to a full year regular college course, covering both Calculus I and II. After passing the exam, students may move on to Calculus III (Multivariable Calculus).
Purpose
According to the College Board,
Topic outline
AP Calculus BC includes all of the topics covered in AP Calculus AB, as well as the following:
Convergence tests for series
Taylor series
Parametric equations
Polar functions (inclu
Document 4:::
Advanced Placement (AP) Statistics (also known as AP Stats) is a college-level high school statistics course offered in the United States through the College Board's Advanced Placement program. This course is equivalent to a one semester, non-calculus-based introductory college statistics course and is normally offered to sophomores, juniors and seniors in high school.
One of the College Board's more recent additions, the AP Statistics exam was first administered in May 1996 to supplement the AP program's math offerings, which had previously consisted of only AP Calculus AB and BC. In the United States, enrollment in AP Statistics classes has increased at a higher rate than in any other AP class.
Students may receive college credit or upper-level college course placement upon passing the three-hour exam ordinarily administered in May. The exam consists of a multiple-choice section and a free-response section that are both 90 minutes long. Each section is weighted equally in determining the students' composite scores.
History
The Advanced Placement program has offered students the opportunity to pursue college-level courses while in high school. Along with the Educational Testing Service, the College Board administered the first AP Statistics exam in May 1997. The course was first taught to students in the 1996-1997 academic year. Prior to that, the only mathematics courses offered in the AP program included AP Calculus AB and BC. Students who didn't have a strong background in college-level math, however, found the AP Calculus program inaccessible and sometimes declined to take a math course in their senior year. Since the number of students required to take statistics in college is almost as large as the number of students required to take calculus, the College Board decided to add an introductory statistics course to the AP program. Since the prerequisites for such a program doesn't require mathematical concepts beyond those typically taught in a second-year al
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the minimum depth in the aphotic zone?
A. 200 meters
B. 10 meters
C. 450 meters
D. 150 meters
Answer:
|
|
ai2_arc-975
|
multiple_choice
|
When a baby shakes a rattle, it makes a noise. Which form of energy was changed to sound energy?
|
[
"electrical",
"light",
"mechanical",
"heat"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 2:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 3:::
There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework.
AP Physics 1 and 2
AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge.
AP Physics 1
AP Physics 1 covers Newtonian mechanics, including:
Unit 1: Kinematics
Unit 2: Dynamics
Unit 3: Circular Motion and Gravitation
Unit 4: Energy
Unit 5: Momentum
Unit 6: Simple Harmonic Motion
Unit 7: Torque and Rotational Motion
Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2.
AP Physics 2
AP Physics 2 covers the following topics:
Unit 1: Fluids
Unit 2: Thermodynamics
Unit 3: Electric Force, Field, and Potential
Unit 4: Electric Circuits
Unit 5: Magnetism and Electromagnetic Induction
Unit 6: Geometric and Physical Optics
Unit 7: Quantum, Atomic, and Nuclear Physics
AP Physics C
From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single
Document 4:::
Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams.
Course content
Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are:
Kinematics
Newton's laws of motion
Work, energy and power
Systems of particles and linear momentum
Circular motion and rotation
Oscillations and gravitation.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class.
This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals.
This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday aftern
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When a baby shakes a rattle, it makes a noise. Which form of energy was changed to sound energy?
A. electrical
B. light
C. mechanical
D. heat
Answer:
|
|
sciq-7520
|
multiple_choice
|
What type of energy can be used to change the position or shape of an object, thus giving it potential energy?
|
[
"static energy",
"kinetic energy",
"harmonic energy",
"binary energy"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
Document 2:::
Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams.
Course content
Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are:
Kinematics
Newton's laws of motion
Work, energy and power
Systems of particles and linear momentum
Circular motion and rotation
Oscillations and gravitation.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class.
This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals.
This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday aftern
Document 3:::
Applied physics is the application of physics to solve scientific or engineering problems. It is usually considered a bridge or a connection between physics and engineering.
"Applied" is distinguished from "pure" by a subtle combination of factors, such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. Applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of scientific principles in practical devices and systems and with the application of physics in other areas of science and high technology.
Examples of research and development areas
Accelerator physics
Acoustics
Atmospheric physics
Biophysics
Brain–computer interfacing
Chemistry
Chemical physics
Differentiable programming
Artificial intelligence
Scientific computing
Engineering physics
Chemical engineering
Electrical engineering
Electronics
Sensors
Transistors
Materials science and engineering
Metamaterials
Nanotechnology
Semiconductors
Thin films
Mechanical engineering
Aerospace engineering
Astrodynamics
Electromagnetic propulsion
Fluid mechanics
Military engineering
Lidar
Radar
Sonar
Stealth technology
Nuclear engineering
Fission reactors
Fusion reactors
Optical engineering
Photonics
Cavity optomechanics
Lasers
Photonic crystals
Geophysics
Materials physics
Medical physics
Health physics
Radiation dosimetry
Medical imaging
Magnetic resonance imaging
Radiation therapy
Microscopy
Scanning probe microscopy
Atomic force microscopy
Scanning tunneling microscopy
Scanning electron microscopy
Transmission electron microscopy
Nuclear physics
Fission
Fusion
Optical physics
Nonlinear optics
Quantum optics
Plasma physics
Quantum technology
Quantum computing
Quantum cryptography
Renewable energy
Space physics
Spectroscopy
See also
Applied science
Applied mathematics
Engineering
Engineering Physics
High Technology
Document 4:::
There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework.
AP Physics 1 and 2
AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge.
AP Physics 1
AP Physics 1 covers Newtonian mechanics, including:
Unit 1: Kinematics
Unit 2: Dynamics
Unit 3: Circular Motion and Gravitation
Unit 4: Energy
Unit 5: Momentum
Unit 6: Simple Harmonic Motion
Unit 7: Torque and Rotational Motion
Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2.
AP Physics 2
AP Physics 2 covers the following topics:
Unit 1: Fluids
Unit 2: Thermodynamics
Unit 3: Electric Force, Field, and Potential
Unit 4: Electric Circuits
Unit 5: Magnetism and Electromagnetic Induction
Unit 6: Geometric and Physical Optics
Unit 7: Quantum, Atomic, and Nuclear Physics
AP Physics C
From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of energy can be used to change the position or shape of an object, thus giving it potential energy?
A. static energy
B. kinetic energy
C. harmonic energy
D. binary energy
Answer:
|
|
ai2_arc-960
|
multiple_choice
|
Some businesses offer customers the option to pay for merchandise using their fingerprints as identification. Which of the following would most benefit customers that use this new technology?
|
[
"cost of product is reduced",
"protection of private information",
"ability to track customer preferences",
"funds would be credited immediately"
] |
B
|
Relevant Documents:
Document 0:::
Biometric voter registration involves using biometric technology (capturing unique physical features of an individual – fingerprinting is the most commonly used), usually in addition to demographics of the voter, for polling registration and/or authentication. The enrollment infrastructure allows collecting and maintaining a database of the biometric templates for all voters.
A biometric voting project might include introducing biometric registration kits for enrolment of voters; using electronic voter identification devices before and on Election Day; issuing of voter identification documents (i.e. biometric voter cards), among others. The chronological stages for adopting a biometric voting registration project usually include assessment; feasibility studies; securing funding; reviewing legislation; doing pilot projects and mock registration exercises; procurement; distribution of equipment, installation, and testing; recruitment and training of staff; voter information; deployment and, post-election audits.
The final aim of implementing biometric election technology is achieving de-duplication of the voting register, thus preventing multiple voter registration and multiple voting; improving identification of the voter at the polling station, and mitigating the incidence of voter fraud (e.g. buying/renting of voter IDs before an election).
However, it is vital that commissions carrying out these election projects first and foremost guarantee that the legal framework supports biometric voter identification, and then that the data captured during the registration process will be secured while maintaining two basic requirements: personalization and privacy. Likewise, it is imperative to have contingency mechanisms in place, in case biometric systems malfunction. One of the main challenges is to ensure that given the eventualities of technological hitches and failures, not a single voter is disenfranchised.
Countries with biometric voter registration
Accord
Document 1:::
Fingerprint scanners are biometric security systems. They are used in police stations, security industries, smartphones, and other mobile devices.
Fingerprints
People have patterns of friction ridges on their fingers, these patterns are called the fingerprints. Fingerprints are uniquely detailed, durable over an individual's lifetime, and difficult to alter. Due to the unique combinations, fingerprints have become an ideal means of identification.
Types of fingerprint scanners
There are four types of fingerprint scanners: optical scanners, capacitance scanners, ultrasonic scanners, and thermal scanners. The basic function of every type of scanner is to obtain an image of a person's fingerprint and find a match for it in its database. The measure of the fingerprint image quality is in dots per inch (DPI).
Optical scanners take a visual image of the fingerprint using a digital camera.
Capacitive or CMOS scanners use capacitors and thus electric current to form an image of the fingerprint. This type of scanner tends to excel in terms of precision.
Ultrasonic fingerprint scanners use high frequency sound waves to penetrate the epidermal (outer) layer of the skin.
Thermal scanners sense the temperature differences on the contact surface, in between fingerprint ridges and valleys.
All fingerprint scanners are susceptible to be fooled by a technique that involves photographing fingerprints, processing the photographs using special software, and printing fingerprint replicas using a 3D printer.
Construction forms
There are two construction forms: the stagnant and the moving fingerprint scanner.
Stagnant: The finger must be dragged over the small scanning area. This is cheaper and less reliable than the moving form. Imaging can be less than ideal when the finger is not dragged over the scanning area at constant speed.
Moving: The finger lies on the scanning area while the scanner runs underneath. Because the scanner moves at constant speed over the fingerpri
Document 2:::
Biometric tokenization is the process of substituting a stored biometric template with a non-sensitive equivalent, called a token, that lacks extrinsic or exploitable meaning or value. The process combines the biometrics with public-key cryptography to enable the use of a stored biometric template (e.g., fingerprint image on a mobile or desktop device) for secure or strong authentication to applications or other systems without presenting the template in its original, replicable form.
Biometric tokenization in particular builds upon the longstanding practice of tokenization for sequestering secrets in this manner by having the secret, such as user credentials like usernames and passwords or other Personally Identifiable Information (PII), be represented by a substitute key in the public sphere.
The technology is most closely associated with authentication to online applications such as those running on desktop computers, mobile devices, and Internet of Things (IoT) nodes. Specific use cases include secure login, payments, physical access, management of smart, connected products such as connected homes and connected cars, as well as adding a biometric component to two-factor authentication and multi-factor authentication.
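The substitution scheme described above can be loosely illustrated with a keyed hash. This is only a sketch under simplifying assumptions — real biometric tokenization must cope with noisy captures (e.g. via fuzzy extractors) and, as the article notes, combines biometrics with public-key cryptography rather than a bare HMAC; all names and data here are hypothetical:

```python
import hashlib
import hmac
import secrets

def tokenize_template(template: bytes, key: bytes) -> str:
    """Derive a non-sensitive token from a biometric template.

    Without the key, the token reveals nothing usable about the
    template, so only the token ever needs to leave the device.
    """
    return hmac.new(key, template, hashlib.sha256).hexdigest()

# Hypothetical enrollment: the device keeps `key` in secure storage
# (e.g. a secure enclave) and the relying server stores only the token.
key = secrets.token_bytes(32)
template = b"extracted-minutiae-feature-vector"  # stand-in for a real template
token = tokenize_template(template, key)

# Verification: the same template under the same key reproduces the token...
assert hmac.compare_digest(token, tokenize_template(template, key))
# ...while a different template (or a different key) does not.
assert token != tokenize_template(b"some-other-template", key)
```

In a deployed scheme the feature extraction step would first reduce a fresh, noisy capture to a stable representation before tokenization, which is exactly the part this sketch elides.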
Origins
With the September 9, 2014 launch of its Apple Pay service, Cupertino, Calif.-based Apple, Inc. initiated the conversation surrounding the use of biometric-supported tokenization of payment data for point of sale retail transactions. Apple Pay tokenizes mobile users’ virtualized bank card data in order to wirelessly transmit a payment, represented as a token, to participating retailers that support Apple Pay (e.g. through partnerships and supported hardware). Apple Pay leverages its proprietary Touch ID fingerprint scanner on its proprietary iPhone line with, aside from cryptography, the added security of its Apple A7 system on a chip that includes a Secure Enclave hardware feature that stores and protects the data from the Touch ID fingerprint
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 4:::
A biometric device is a security identification and authentication device. Such devices use automated methods of verifying or recognising the identity of a living person based on a physiological or behavioral characteristic. These characteristics include fingerprints, facial images, iris and voice recognition.
History
Biometric devices have been in use for thousands of years. Non-automated biometric devices have been in use since 500 BC, when ancient Babylonians would sign their business transactions by pressing their fingertips into clay tablets.
Automation in biometric devices was first seen in the 1960s, when the Federal Bureau of Investigation (FBI) introduced the Indentimat, which started checking fingerprints to maintain criminal records. The first systems measured the shape of the hand and the length of the fingers. Although discontinued in the 1980s, the system set a precedent for future biometric devices.
Types of biometric devices
There are two categories of biometric devices,
Contact Devices - These types of devices need contact with a body part of a live person. They are mainly fingerprint scanners, either single fingerprint, dual fingerprint or slap (4+4+2) fingerprint scanners, and hand geometry scanners.
Contactless Devices - These devices don't need any type of contact. The main examples of these are face, iris, retina and palm vein scanners and voice identification devices.
Subgroups
The characteristic of the human body is used to access information by the users. According to these characteristics, the sub-divided groups are
Chemical biometric devices: Analyses the segments of the DNA to grant access to the users.
Visual biometric devices: Analyses the visual features of the humans to grant access which includes iris recognition, face recognition, Finger recognition, and Retina Recognition.
Behavioral biometric devices: Analyses the Walking Ability and Signatures (velocity of sign, width of sign, pressure of sign) distinct to
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Some businesses offer customers the option to pay for merchandise using their fingerprints as identification. Which of the following would most benefit customers that use this new technology?
A. cost of product is reduced
B. protection of private information
C. ability to track customer preferences
D. funds would be credited immediately
Answer:
|
|
ai2_arc-154
|
multiple_choice
|
Plant and animal life cycles are alike because they both
|
[
"begin as eggs.",
"require the same amount of time.",
"have beginning, growing, and mature stages.",
"resemble their parents from the beginning stages."
] |
C
|
Relevant Documents:
Document 0:::
This glossary of biology terms is a list of definitions of fundamental terms and concepts used in biology, the study of life and of living organisms. It is intended as introductory material for novices; for more specific and technical definitions from sub-disciplines and related fields, see Glossary of cell biology, Glossary of genetics, Glossary of evolutionary biology, Glossary of ecology, Glossary of environmental science and Glossary of scientific naming, or any of the organism-specific glossaries in :Category:Glossaries of biology.
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
R
S
T
U
V
W
X
Y
Z
Related to this search
Index of biology articles
Outline of biology
Glossaries of sub-disciplines and related fields:
Glossary of botany
Glossary of ecology
Glossary of entomology
Glossary of environmental science
Glossary of genetics
Glossary of ichthyology
Glossary of ornithology
Glossary of scientific naming
Glossary of speciation
Glossary of virology
Document 1:::
Phytomorphology is the study of the physical form and external structure of plants. This is usually considered distinct from plant anatomy, which is the study of the internal structure of plants, especially at the microscopic level. Plant morphology is useful in the visual identification of plants. Recent studies in molecular biology started to investigate the molecular processes involved in determining the conservation and diversification of plant morphologies. In these studies transcriptome conservation patterns were found to mark crucial ontogenetic transitions during the plant life cycle which may result in evolutionary constraints limiting diversification.
Scope
Plant morphology "represents a study of the development, form, and structure of plants, and, by implication, an attempt to interpret these on the basis of similarity of plan and origin". There are four major areas of investigation in plant morphology, and each overlaps with another field of the biological sciences.
First of all, morphology is comparative, meaning that the morphologist examines structures in many different plants of the same or different species, then draws comparisons and formulates ideas about similarities. When structures in different species are believed to exist and develop as a result of common, inherited genetic pathways, those structures are termed homologous. For example, the leaves of pine, oak, and cabbage all look very different, but share certain basic structures and arrangement of parts. The homology of leaves is an easy conclusion to make. The plant morphologist goes further, and discovers that the spines of cactus also share the same basic structure and development as leaves in other plants, and therefore cactus spines are homologous to leaves as well. This aspect of plant morphology overlaps with the study of plant evolution and paleobotany.
Secondly, plant morphology observes both the vegetative (somatic) structures of plants, as well as the reproductive str
Document 2:::
This glossary of developmental biology is a list of definitions of terms and concepts commonly used in the study of developmental biology and related disciplines in biology, including embryology and reproductive biology, primarily as they pertain to vertebrate animals and particularly to humans and other mammals. The developmental biology of invertebrates, plants, fungi, and other organisms is treated in other articles; e.g. terms relating to the reproduction and development of insects are listed in Glossary of entomology, and those relating to plants are listed in Glossary of botany.
This glossary is intended as introductory material for novices; for more specific and technical detail, see the article corresponding to each term. Additional terms relevant to vertebrate reproduction and development may also be found in Glossary of biology, Glossary of cell biology, Glossary of genetics, and Glossary of evolutionary biology.
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
X
Y
Z
See also
Introduction to developmental biology
Outline of developmental biology
Outline of cell biology
Glossary of biology
Glossary of cell biology
Glossary of genetics
Glossary of evolutionary biology
Document 3:::
Ecology: From Individuals to Ecosystems is a 2006 higher education textbook on general ecology written by Michael Begon, Colin R. Townsend and John L. Harper. Published by Blackwell Publishing, it is now in its fourth edition. The first three editions were published by Blackwell Science under the title Ecology: Individuals, Populations and Communities. Since it first became available it has had a positive reception, and has long been one of the leading textbooks on ecology.
Background and history
The book is written by Michael Begon of the University of Liverpool's School of Biosciences, Colin Townsend, from the Department of Zoology of New Zealand's University of Otago, and the University of Exeter's John L. Harper. The first edition was published in 1986. This was followed in 1990 with a second edition. The third edition became available in 1996. The most recent edition appeared in 2006 under the new subtitle From Individuals to Ecosystems.
One of the book's authors, John L. Harper, is now deceased. The fourth edition cover is an image of a mural on a Wellington street created by Christopher Meech and a group of urban artists to generate thought about the topic of environmental degradation. It reads "we did not inherit the earth from our ancestors, we borrowed it from our children."
Contents
Part 1. ORGANISMS
1. Organisms in their environments: the evolutionary backdrop
2. Conditions
3. Resources
4. Life, death and life histories
5. Intraspecific competition
6. Dispersal, dormancy and metapopulations
7. Ecological applications at the level of organisms and single-species populations
Part 2. SPECIES INTERACTIONS
8. Interspecific competition
9. The nature of predation
10. The population dynamics of predation
11. Decomposers and detritivores
12. Parasitism and disease
13. Symbiosis and mutualism
14. Abundance
15. Ecological applications at the level of population interactions
Part 3. COMMUNITIES AND ECOSYSTEMS
16. The nature of the community
17.
Document 4:::
In ecology, functional equivalence (or functional redundancy) is the ecological phenomenon that multiple species representing a variety of taxonomic groups can share similar, if not identical, roles in ecosystem functionality (e.g., nitrogen fixers, algae scrapers, scavengers). This phenomenon can apply to both plant and animal taxa. The idea was originally presented in 2005 by Stephen Hubbell, a plant ecologist at the University of Georgia. This idea has led to a new paradigm for species-level classification – organizing species into groups based on functional similarity rather than morphological or evolutionary history. In the natural world, several examples of functional equivalence among different taxa have been documented.
Plant-pollinator relationships
One example of functional equivalence is demonstrated in plant-pollinator relationships, whereby a certain plant species may evolve flower morphology that selects for pollination by a host of taxonomically unrelated species to provide the same function (fruit production following pollination). For example, the herbaceous plant spiny madwort (Hormathophylla spinosa) grows flowers that are shaped so that taxonomically unrelated pollinators behave almost identically during pollination. From the plant's perspective, each of these pollinators is functionally equivalent and thus is not subjected to specific selective pressures. Variation in the shape and structure of both flower and seed morphology can be a source of selective pressure for animal species to evolve a variety of morphological features, yet also provide the same function to the plant.
Plant-animal seed dispersal mechanisms
Plant-animal interactions in terms of seed dispersal are another example of functional equivalence. Evidence has shown that, over the course of millions of years, most plants have maintained evolutionary trait stability in terms of the size and shape of their fruits. However, the animal species that consume and disperse the
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Plant and animal life cycles are alike because they both
A. begin as eggs.
B. require the same amount of time.
C. have beginning, growing, and mature stages.
D. resemble their parents from the beginning stages.
Answer:
|
|
sciq-9725
|
multiple_choice
|
How can nuclear fusion in stars be simulated?
|
[
"nuclear reactors",
"plutonium accelerators",
"nitrogen accelerators",
"particle accelerators"
] |
D
|
Relevant Documents:
Document 0:::
A linear transformer driver (LTD), in physics and energy research, is an annular parallel connection of switches and capacitors designed to deliver rapid high-power pulses. The LTD was invented at the Institute of High Current Electronics (IHCE) in Tomsk, Russia. It is capable of producing high-current pulses, up to 1 megaampere (10⁶ amperes), with a risetime of less than 100 ns. This is an improvement over Marx-generator-based pulsed power devices, which require pulse compression to achieve such fast risetimes. It is being considered as a driver for z-pinch-based inertial confinement fusion.
LTDs at Sandia National Laboratories
Sandia National Laboratory is currently investigating a z-pinch as a possible ignition source for inertial confinement fusion. On its "Z machine", Sandia can achieve dense, high temperature plasmas by firing fast, 100-nanosecond current pulses exceeding 20 million amps through hundreds of tungsten wires with diameters on the order of tens of micrometres. The LTD is currently being investigated as a driver for the next generation of high power accelerators.
Sandia's roadmap includes another future Z machine version called ZN (Z Neutron) to test higher yields in fusion power and automation systems. ZN is planned to give between 20 and 30 MJ of hydrogen fusion power with a shot per hour thanks to LTDs replacing the current Marx generators. After 8 to 10 years of operation, ZN would become a transmutation pilot plant capable of a fusion shot every 100 seconds.
The next step planned would be the Z-IFE (Z-inertial fusion energy) test facility, the first true z-pinch driven prototype fusion power plant. It is suggested it would integrate Sandia's latest designs using LTDs. Sandia labs recently proposed a conceptual 1 petawatt (10¹⁵ watts) LTD Z-pinch power plant, where the electric discharge would reach 70 million amperes.
See also
Document 1:::
FuseNet is an educational organization funded by the European Union focused on fusion.
The FP7 Project
The purpose of FuseNet is to coordinate and facilitate fusion education, to share best practices, to jointly develop educational tools, to organize educational events. The members of FuseNet have jointly established academic criteria for the award of European Fusion Doctorate and Master Certificates. These criteria are set to stimulate a high level of fusion education throughout Europe.
The Association
FuseNet is the umbrella organization and single voice for the training and education of the next generation of fusion engineers and scientists. FuseNet is recognized as such by the European Commission.
Document 2:::
Fusion ignition is the point at which a nuclear fusion reaction becomes self-sustaining. This occurs when the energy being given off by the reaction heats the fuel mass more rapidly than it cools. In other words, fusion ignition is the point at which the increasing self-heating of the nuclear fusion removes the need for external heating.
This is quantified by the Lawson criterion.
Ignition can also be defined by the fusion energy gain factor.
In the laboratory, fusion ignition defined by the Lawson criterion was first achieved in August 2021,
and ignition defined by the energy gain factor was achieved in December 2022,
both by the U.S. National Ignition Facility.
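For reference, the two criteria named above can be stated compactly. These standard textbook forms are not given in the source excerpt; the numerical threshold is the value usually quoted for deuterium–tritium fuel, and the gain-based ignition definition mentioned for December 2022 corresponds to Q exceeding 1:

```latex
% Lawson triple-product condition for a D-T plasma
% (density n, temperature T, energy confinement time \tau_E):
n \, T \, \tau_E \;\gtrsim\; 3 \times 10^{21} \ \mathrm{keV \cdot s \cdot m^{-3}}

% Fusion energy gain factor: fusion power out over external heating power in.
Q \;=\; \frac{P_{\mathrm{fusion}}}{P_{\mathrm{heating}}},
\qquad Q = 1 \ \text{(breakeven)}, \qquad Q > 1 \ \text{(gain-based ignition)}
```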
Research
Ignition should not be confused with breakeven, a similar concept that compares the total energy being given off to the energy being used to heat the fuel. The key difference is that breakeven ignores losses to the surroundings, which do not contribute to heating the fuel, and thus are not able to make the reaction self-sustaining. Breakeven is an important goal in the fusion energy field, but ignition is required for a practical energy producing design.
In nature, stars reach ignition at temperatures similar to that of the Sun, around 15 million kelvins (27 million degrees F). Stars are so large that the fusion products will almost always interact with the plasma before their energy can be lost to the environment at the outside of the star. In comparison, man-made reactors are far less dense and much smaller, allowing the fusion products to easily escape the fuel. To offset this, much higher rates of fusion are required, and thus much higher temperatures; most man-made fusion reactors are designed to work at temperatures over 100 million kelvins (180 million degrees F).
Lawrence Livermore National Laboratory has its 1.8 MJ laser system running at full power. This laser system is designed to compress and heat a mixture of deuterium and tritium, which are both isotopes of hydrogen, in order to
Document 3:::
The Santa Cruz Institute for Particle Physics (SCIPP) is an organized research unit within the University of California system focused on theoretical and experimental high-energy physics and astrophysics.
Research
SCIPP's scientific and technical staff are and have been involved in several cutting edge research projects for more than 25 years, in both theory and experiment. The primary focus is particle physics and particle astrophysics, including the development of technologies needed to advance that research. SCIPP is also pursuing the application of those technologies to other scientific fields such as neuroscience and biomedicine. The Institute is recognized as a leader in the development of custom readout electronics and silicon micro-strip sensors for state-of-the-art particle detection systems. This department has several faculty associated with the Stanford Linear Accelerator Center (SLAC) or the ATLAS project at CERN.
Many experiments are performed at any given time within SCIPP, but many center on silicon strip particle detectors and their properties before and after radiation exposure. Many of the faculty also work on Monte Carlo simulations and on tracking particles within particle colliders. Their most prominent project in recent history has been the development of the Gamma-ray Large Area Space Telescope (GLAST), which searches the sky for gamma-ray bursts.
Members
Notable faculty include:
Anthony Aguirre, theoretical cosmologist
Tom Banks, c-discoverer of M(atrix) theory in string theory
George Blumenthal, astronomer, chancellor of UCSC
Michael Dine, high-energy theorist, recipient of Sakurai prize, physics department chair
Howard Haber, theoretical particle physicist, recipient of Sakurai prize
Piero Madau, recipient of Dannie Heineman Prize for Astrophysics
Joel Primack, quantum field theorist and cosmologist, director of AstroComputing Center
Constance Rockosi, chair of astronomy department
Terry Schalk
Document 4:::
Nuclear Fusion is a peer-reviewed international scientific journal that publishes articles, letters, review articles, special issue articles, conference summaries and book reviews on theoretical and practical research into controlled thermonuclear fusion. The journal was first published in September 1960 by the IAEA, with its head office housed at the IAEA headquarters in Vienna, Austria. Since 2002, the journal has been jointly published by the IAEA and IOP Publishing.
The Nuclear Fusion Award
Since 2006, this award has been given each year to a paper of the highest standard. The editorial board selects the recipient from all the research papers published in the Nuclear Fusion journal two years prior to the award year. The recipients are:
Tim Luce (2006)
Clemente Angioni, Max Planck Institute for Plasma Physics, Germany (2007)
Todd Evans (2008)
Steve Sabbagh, Columbia University, USA (2009)
John Rice, Massachusetts Institute of Technology (2010)
Hajime Urano, JAEA, Japan (2011)
Pat Diamond, University of California at San Diego (2012)
Dennis Whyte, Massachusetts Institute of Technology, USA (2013)
Phil Snyder, General Atomics, USA (2014)
Robert Goldston, Princeton Plasma Physics Laboratory, USA (2015)
Sebastijan Brezinsek, EUROfusion Consortium and Forschungszentrum Jülich, Germany (2016)
Francois Ryter, Max Planck Institute for Plasma Physics, Germany (2017)
A. Kallenbach, Max Planck Institute for Plasma Physics, Germany (2018)
N.T. Howard, Massachusetts Institute of Technology, USA (2019)
C. Theiler, Swiss Plasma Center, Switzerland (2020)
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How can nuclear fusion in stars be simulated?
A. nuclear reactors
B. plutonium accelerators
C. nitrogen accelerators
D. particle accelerators
Answer:
|
|
sciq-9883
|
multiple_choice
|
What organ system is different in men and women?
|
[
"reproductive organs",
"respiratory organs",
"lymphatic organs",
"nervous organs"
] |
A
|
Relevant Documents:
Document 0:::
Reproductive biology includes both sexual and asexual reproduction.
Reproductive biology includes a wide number of fields:
Reproductive systems
Endocrinology
Sexual development (Puberty)
Sexual maturity
Reproduction
Fertility
Human reproductive biology
Endocrinology
Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands.
Reproductive systems
Internal and external organs are included in the reproductive system. There are two reproductive systems including the male and female, which contain different organs from one another. These systems work together in order to produce offspring.
Female reproductive system
The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth.
These structures include:
Ovaries
Oviducts
Uterus
Vagina
Mammary Glands
Estrogen is one of the sexual reproductive hormones that aid in the sexual reproductive system of the female.
Male reproductive system
The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia.
Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system. However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract.
Animal Reproductive Biology
Animal reproduction oc
Document 1:::
This list of related male and female reproductive organs shows how the male and female reproductive organs and the development of the reproductive system are related, sharing a common developmental path. This makes them biological homologues. These organs differentiate into the respective sex organs in males and females.
List
Internal organs
External organs
The external genitalia of both males and females have similar origins. They arise from the genital tubercle that forms anterior to the cloacal folds (proliferating mesenchymal cells around the cloacal membrane). The caudal aspect of the cloacal folds further subdivides into the posterior anal folds and the anterior urethral folds. Bilateral to the urethral fold, genital swellings (tubercles) become prominent. These structures are the future scrotum and labia majora in males and females, respectively.
The genital tubercles of an eight-week-old embryo of either sex are identical. They both have a glans area, which will go on to form the glans clitoridis (females) or glans penis (males), a urogenital fold and groove, and an anal tubercle. At around ten weeks, the external genitalia are still similar. At the base of the glans, there is a groove known as the coronal sulcus or corona glandis. It is the site of attachment of the future prepuce. Just anterior to the anal tubercle, the caudal end of the left and right urethral folds fuse to form the urethral raphe. The lateral part of the genital tubercle (called the lateral tubercle) grows longitudinally and is about the same length in either sex.
Human physiology
The male external genitalia include the penis and the scrotum. The female external genitalia include the clitoris, the labia, and the vaginal opening, which are collectively called the vulva. External genitalia vary widely in external appearance among different people.
One difference between the glans penis and the glans clitoridis is that the glans clitoridis packs nerve endings into a volume only about
Document 2:::
This article contains a list of organs of the human body. A commonly cited figure is 79 organs (the number rises if each bone and muscle is counted as an organ in its own right, which is becoming more common practice); however, there is no universal standard definition of what constitutes an organ, and some tissue groups' status as one is debated. Since there is no single standard definition of what an organ is, the number of organs varies depending on how one defines an organ. For example, this list contains more than 79 organs (roughly 103).
It is still not clear which definition of an organ is used for all the organs in this list; it appears to have been compiled based on which Wikipedia articles on organs were available.
Musculoskeletal system
Skeleton
Joints
Ligaments
Muscular system
Tendons
Digestive system
Mouth
Teeth
Tongue
Lips
Salivary glands
Parotid glands
Submandibular glands
Sublingual glands
Pharynx
Esophagus
Stomach
Small intestine
Duodenum
Jejunum
Ileum
Large intestine
Cecum
Ascending colon
Transverse colon
Descending colon
Sigmoid colon
Rectum
Liver
Gallbladder
Mesentery
Pancreas
Anal canal
Appendix
Respiratory system
Nasal cavity
Pharynx
Larynx
Trachea
Bronchi
Bronchioles and smaller air passages
Lungs
Muscles of breathing
Urinary system
Kidneys
Ureter
Bladder
Urethra
Reproductive systems
Female reproductive system
Internal reproductive organs
Ovaries
Fallopian tubes
Uterus
Cervix
Vagina
External reproductive organs
Vulva
Clitoris
Male reproductive system
Internal reproductive organs
Testicles
Epididymis
Vas deferens
Prostate
External reproductive organs
Penis
Scrotum
Endocrine system
Pituitary gland
Pineal gland
Thyroid gland
Parathyroid glands
Adrenal glands
Pancreas
Circulatory system
Circulatory system
Heart
Arteries
Veins
Capillaries
Lymphatic system
Lymphatic vessel
Lymph node
Bone marrow
Thymus
Spleen
Gut-associated lymphoid tissue
Tonsils
Interstitium
Nervous system
Central nervous system
Document 3:::
A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and is determined by different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism.
Organ and tissue systems
These specific systems are widely studied in human anatomy and are also present in many other animals.
Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm.
Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus.
Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels.
Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine.
Integumentary system: skin, hair, fat, and nails.
Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons.
Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands.
Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. The lymphatic system includes functions including immune responses and development of antibodies.
Immune system: protects the organism from
Document 4:::
Instruments used in Anatomy dissections are as follows:
Instrument list
Image gallery
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What organ system is different in men and women?
A. reproductive organs
B. respiratory organs
C. lymphatic organs
D. nervous organs
Answer:
|
|
ai2_arc-567
|
multiple_choice
|
Which measurement describes the motion of a rubber ball?
|
[
"5 cm",
"10 m/s",
"15 newtons",
"50 grams"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The SAT Subject Test in Biology was the name of a one-hour multiple-choice test on biology given by the College Board. A student chose whether to take the test depending upon the college entrance requirements of the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests, and from 1995 until January 2005 they were known as SAT IIs. Of all SAT Subject Tests, the Biology E/M test was the only one that offered the test taker a choice between an ecological and a molecular emphasis. A set of 60 questions was taken by all Biology test takers, with a further 20 questions drawn from either the E or the M set. The test was graded on a scale from 200 to 800. The average score for Molecular was 630, while for Ecological it was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
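The formula-scoring rule just described can be sketched as follows. The function name and answer encoding are illustrative, not part of any College Board specification:

```python
def raw_score(responses, key):
    """Formula scoring as described above: +1 per correct answer,
    -1/4 per incorrect answer, 0 for questions left blank (None)."""
    score = 0.0
    for given, correct in zip(responses, key):
        if given is None:          # question left blank: no penalty, no credit
            continue
        score += 1.0 if given == correct else -0.25
    return score

# Example: 3 correct, 1 wrong, 1 blank -> 3 - 0.25 = 2.75
print(raw_score(["A", "B", "C", "E", None], ["A", "B", "C", "D", "E"]))
```

The raw score would then be converted to the reported 200–800 scale by a separate scaling step.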
The questions covered a broad range of topics in general biology. More specific questions addressed ecological concepts (such as population studies and general ecology) on the E test, and molecular concepts (such as DNA structure, translation, and biochemistry) on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 2:::
Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams.
Course content
Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are:
Kinematics
Newton's laws of motion
Work, energy and power
Systems of particles and linear momentum
Circular motion and rotation
Oscillations and gravitation.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class.
This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals.
This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday aftern
Document 3:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 4:::
Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory.
In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results.
Purpose
Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible.
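Dick and Jane's situation can be illustrated with linear (mean–sigma) equating, one simple classical-test-theory method; the form statistics below are invented for illustration:

```python
def linear_equate(x, mean_a, sd_a, mean_b, sd_b):
    """Map a score x from form A onto form B's scale so that the
    standardized position (z-score) is preserved: z_A == z_B."""
    z = (x - mean_a) / sd_a
    return mean_b + z * sd_b

# Hypothetical statistics: form A has mean 60 and SD 10,
# form B has mean 70 and SD 8.  Dick's 60 on form A sits at
# form A's mean, so it maps to form B's mean, 70 -- the same
# relative standing as Jane's 70 on form B.
print(linear_equate(60, 60, 10, 70, 8))  # -> 70.0
```

Under these (made-up) numbers, Dick's 60 and Jane's 70 would reflect equivalent performance once form difficulty is accounted for.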
Equating in item response theory
In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which measurement describes the motion of a rubber ball?
A. 5 cm
B. 10 m/s
C. 15 newtons
D. 50 grams
Answer:
|
|
sciq-2322
|
multiple_choice
|
The atmosphere consists of oxygen, nitrogen, carbon dioxide, which exerts a certain pressure referred to as what?
|
[
"tidal pressure",
"gravity pressure",
"atmospheric pressure",
"nitrogen pressure"
] |
C
|
Relevant Documents:

Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 2:::
The ambient pressure on an object is the pressure of the surrounding medium, such as a gas or liquid, in contact with the object.
Atmosphere
Within the atmosphere, the ambient pressure decreases as elevation increases. By measuring ambient atmospheric pressure, a pilot may determine altitude (see pitot-static system). Near sea level, a change in ambient pressure of 1 millibar is taken to represent a change in height of .
Underwater
The ambient pressure in water with a free surface is a combination of the hydrostatic pressure due to the weight of the water column and the atmospheric pressure on the free surface. This increases approximately linearly with depth. Since water is much denser than air, much greater changes in ambient pressure can be experienced under water. Each of depth adds another bar to the ambient pressure.
Ambient pressure diving is underwater diving exposed to the water pressure at depth, rather than in a pressure-excluding atmospheric diving suit or a submersible.
Other environments
The concept is not limited to environments frequented by people. Almost any place in the universe will have an ambient pressure, from the hard vacuum of deep space to the interior of an exploding supernova. At extremely small scales the concept of pressure becomes irrelevant, and it is undefined at a gravitational singularity.
Units of pressure
The SI unit of pressure is the pascal (Pa), which is a very small unit relative to atmospheric pressure on Earth, so kilopascals (kPa) are more commonly used in this context. The ambient atmospheric pressure at sea level is not constant: it varies with the weather, but averages around 100 kPa. In fields such as meteorology and underwater diving, it is common to see ambient pressure expressed in bar or millibar. One bar is 100 kPa or approximately ambient pressure at sea level. Ambient pressure may in other circumstances be measured in pounds per square inch (psi) or in standard atmospheres (atm). The ambient pressure at s
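The unit relationships described in this section can be expressed as a small conversion helper (a sketch using standard SI conversion factors; the function and table names are illustrative):

```python
# Conversion factors to pascals (SI base unit for pressure)
PA_PER = {
    "Pa": 1.0,
    "kPa": 1_000.0,
    "mbar": 100.0,
    "bar": 100_000.0,
    "atm": 101_325.0,
    "psi": 6_894.757,
}

def convert_pressure(value, src, dst):
    """Convert a pressure reading between units by routing through pascals."""
    return value * PA_PER[src] / PA_PER[dst]

print(convert_pressure(1.0, "bar", "kPa"))   # 100.0
print(convert_pressure(1.0, "atm", "bar"))   # 1.01325
```

This makes the text's claims easy to verify: one bar is exactly 100 kPa, and standard sea-level pressure (1 atm) is about 1.01325 bar.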
Document 3:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 4:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge is then a subset of that set; the set of
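A defining property of a knowledge space — the family of feasible states contains the empty set and the full domain, and is closed under union — can be checked directly. A minimal sketch with a hypothetical three-skill domain (the example family is invented for illustration):

```python
from itertools import combinations

def is_knowledge_space(states):
    """Check the defining axioms of a knowledge space: the family of
    feasible states contains the empty state, the full domain, and is
    closed under union."""
    states = {frozenset(s) for s in states}
    domain = frozenset().union(*states)
    if frozenset() not in states or domain not in states:
        return False
    return all((a | b) in states for a, b in combinations(states, 2))

# Hypothetical domain {a, b, c}, where skill b has skill a as a prerequisite:
family = [set(), {"a"}, {"c"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
print(is_knowledge_space(family))  # True
```

Note that the antimatroid structure mentioned in the text requires an additional accessibility condition (every nonempty state can be reached by adding one skill at a time), which this sketch does not check.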
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The atmosphere consists of oxygen, nitrogen, carbon dioxide, which exerts a certain pressure referred to as what?
A. tidal pressure
B. gravity pressure
C. atmospheric pressure
D. nitrogen pressure
Answer:
|
|
sciq-5376
|
multiple_choice
|
Animals that do not have internal control of their body temperature are called what?
|
[
"cold-blooded",
"ectotherms",
"photophores",
"athermal"
] |
B
|
Relevant Documents:
Document 0:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
Document 1:::
A eurytherm is an organism, often an endotherm, that can function at a wide range of ambient temperatures. To be considered a eurytherm, all stages of an organism's life cycle must be considered, including juvenile and larval stages. These wide ranges of tolerable temperatures are directly derived from the tolerance of a given eurythermal organism's proteins. Extreme examples of eurytherms include Tardigrades (Tardigrada), the desert pupfish (Cyprinodon macularis), and green crabs (Carcinus maenas); however, nearly all mammals, including humans, are considered eurytherms. Eurythermy can be an evolutionary advantage: adaptations to cold temperatures, called cold-eurythermy, are seen as essential for the survival of species during ice ages. In addition, the ability to survive in a wide range of temperatures increases a species' ability to inhabit other areas, an advantage for natural selection.
Eurythermy is an aspect of thermoregulation in organisms. It is in contrast with the idea of stenothermic organisms, which can only operate within a relatively narrow range of ambient temperatures. Through a wide variety of thermal coping mechanisms, eurythermic organisms can either provide or expel heat for themselves in order to survive in cold or hot, respectively, or otherwise prepare themselves for extreme temperatures. Certain species of eurytherm have been shown to have unique protein synthesis processes that differentiate them from relatively stenothermic, but otherwise similar, species.
Examples
Tardigrades, known for their ability to survive in nearly any environment, are extreme examples of eurytherms. Certain species of tardigrade, including Mi. tardigradum, are able to withstand and survive temperatures ranging from –273 °C (near absolute zero) to 150 °C in their anhydrobiotic state.
The desert pupfish, a rare bony fish that occupies places like the Colorado River Delta in Baja California, small ponds in Sonora, Mexico, and drainage sites near the Salton Sea
Document 2:::
Dormancy is a period in an organism's life cycle when growth, development, and (in animals) physical activity are temporarily stopped. This minimizes metabolic activity and therefore helps an organism to conserve energy. Dormancy tends to be closely associated with environmental conditions. Organisms can synchronize entry to a dormant phase with their environment through predictive or consequential means. Predictive dormancy occurs when an organism enters a dormant phase before the onset of adverse conditions. For example, photoperiod and decreasing temperature are used by many plants to predict the onset of winter. Consequential dormancy occurs when organisms enter a dormant phase after adverse conditions have arisen. This is commonly found in areas with an unpredictable climate. While very sudden changes in conditions may lead to a high mortality rate among animals relying on consequential dormancy, its use can be advantageous, as organisms remain active longer and are therefore able to make greater use of available resources.
Animals
Hibernation
Hibernation is a mechanism used by many mammals to reduce energy expenditure and survive food shortages over the winter. Hibernation may be predictive or consequential. An animal prepares for hibernation by building up a thick layer of body fat during late summer and autumn that will provide it with energy during the dormant period. During hibernation, the animal undergoes many physiological changes, including decreased heart rate (by as much as 95%) and decreased body temperature. In addition to shivering, some hibernating animals also produce body heat by non-shivering thermogenesis to avoid freezing. Non-shivering thermogenesis is a regulated process in which the proton gradient generated by electron transport in mitochondria is used to produce heat instead of ATP in brown adipose tissue. Animals that hibernate include bats, ground squirrels and other rodents, mouse lemurs, the European hedgehog and other insectivo
Document 3:::
A stenotherm (from Greek στενός stenos "narrow" and θέρμη therme "heat") is a species or living organism only capable of living or surviving within a narrow temperature range. This type of temperature specialization is often seen in organisms that live in environments where the temperature is relatively stable, such as in deep sea environments or in polar regions.
The opposite of a stenotherm is a eurytherm, an organism that can function at a wide range of different body temperatures. Eurythermic organisms are typically found in environments where the temperature varies more significantly, such as in temperate or tropical regions.
The size, shape, and composition of an organism's body can affect its temperature regulation, with larger organisms tending to have a more stable internal temperature than smaller organisms.
Examples
Chionoecetes opilio is a stenothermic organism, and temperature affects its biology throughout its life history, from embryo to adult. Small changes in temperature (< 2 °C) can increase the duration of egg incubation for C. opilio by a full year.
See also
Ecotope
Document 4:::
An endotherm (from Greek ἔνδον endon "within" and θέρμη thermē "heat") is an organism that maintains its body at a metabolically favorable temperature, largely by the use of heat released by its internal bodily functions instead of relying almost purely on ambient heat. Such internally generated heat is mainly an incidental product of the animal's routine metabolism, but under conditions of excessive cold or low activity an endotherm might apply special mechanisms adapted specifically to heat production. Examples include special-function muscular exertion such as shivering, and uncoupled oxidative metabolism, such as within brown adipose tissue.
Only birds and mammals are extant universally endothermic groups of animals. However, Argentine black and white tegu, leatherback sea turtles, lamnid sharks, tuna and billfishes, cicadas, and winter moths are also endothermic. Unlike mammals and birds, some reptiles, particularly some species of python and tegu, possess seasonal reproductive endothermy in which they are endothermic only during their reproductive season.
In common parlance, endotherms are characterized as "warm-blooded". The opposite of endothermy is ectothermy, although in general, there is no absolute or clear separation between the nature of endotherms and ectotherms.
Origin
Endothermy was thought to have originated towards the end of the Permian Period. One recent study claimed the origin of endothermy within Synapsida (the mammalian lineage) was among Mammaliamorpha, a node calibrated during the Late Triassic period, about 233 million years ago. Another study instead argued that endothermy only appeared later, during the Middle Jurassic, among crown-group mammals.
Evidence for endothermy has been found in basal synapsids ("pelycosaurs"), pareiasaurs, ichthyosaurs, plesiosaurs, mosasaurs, and basal archosauromorphs. Even the earliest amniotes might have been endotherms.
Mechanisms
Generating and conserving heat
Many endotherms have a larger amount
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Animals that do not have internal control of their body temperature are called what?
A. cold-blooded
B. ectotherms
C. photophores
D. athermal
Answer:
|
|
sciq-7608
|
multiple_choice
|
Oceans are made of a solution of what?
|
[
"salt and water",
"salt and carbon",
"water and carbon",
"salt and algae"
] |
A
|
Relevant Documents:
Document 0:::
The borders of the oceans are the limits of Earth's oceanic waters. The definition and number of oceans can vary depending on the adopted criteria. The principal divisions (in descending order of area) of the five oceans are the Pacific Ocean, Atlantic Ocean, Indian Ocean, Southern (Antarctic) Ocean, and Arctic Ocean. Smaller regions of the oceans are called seas, gulfs, bays, straits, and other terms. Geologically, an ocean is an area of oceanic crust covered by water.
See also the list of seas article for the seas included in each ocean area.
Overview
Though generally described as several separate oceans, the world's oceanic waters constitute one global, interconnected body of salt water sometimes referred to as the World Ocean or Global Ocean. This concept of a continuous body of water with relatively free interchange among its parts is of fundamental importance to oceanography.
The major oceanic divisions are defined in part by the continents, various archipelagos, and other criteria. The principal divisions (in descending order of area) are the: Pacific Ocean, Atlantic Ocean, Indian Ocean, Southern (Antarctic) Ocean, and Arctic Ocean. Smaller regions of the oceans are called seas, gulfs, bays, straits, and other terms.
Geologically, an ocean is an area of oceanic crust covered by water. Oceanic crust is the thin layer of solidified volcanic basalt that covers the Earth's mantle. Continental crust is thicker but less dense. From this perspective, the Earth has three oceans: the World Ocean, the Caspian Sea, and the Black Sea. The latter two were formed by the collision of Cimmeria with Laurasia. The Mediterranean Sea is at times a discrete ocean because tectonic plate movement has repeatedly broken its connection to the World Ocean through the Strait of Gibraltar. The Black Sea is connected to the Mediterranean through the Bosporus, but the Bosporus is a natural canal cut through continental rock some 7,000 years ago, rather than a piece of oceanic sea floo
Document 1:::
In oceanography, terrigenous sediments are those derived from the erosion of rocks on land; that is, they are derived from terrestrial (as opposed to marine) environments. Consisting of sand, mud, and silt carried to sea by rivers, their composition is usually related to their source rocks; deposition of these sediments is largely limited to the continental shelf.
Sources of terrigenous sediments include volcanoes, weathering of rocks, wind-blown dust, grinding by glaciers, and sediment carried by rivers or icebergs.
Terrigenous sediments are responsible for a significant amount of the salt in today's oceans. Over time rivers continue to carry minerals to the ocean but when water evaporates, it leaves the minerals behind. Since chlorine and sodium are not consumed by biological processes, these two elements constitute the greatest portion of dissolved minerals.
Quantity
Some 1.35 billion tons, or 8% of global river-borne sediment (16.5–17 billion tons globally), is transported by the Ganges–Brahmaputra river system annually, according to decades-old studies; the year-to-year variance is unquantified, as is the impact of modern humans, who hold back sediment in dams while accelerating erosion through land development. Wind-borne sediment also transports billions of tons annually, most prominently as Saharan dust, but is thought to be substantially less than rivers; again, year-to-year variance and the human impacts of land use remain unquantified. It is well known that terrain influences climate conditions, and that erosive processes, along with tectonic causes, slowly but surely modify terrain; but all-encompassing studies have been lacking on a global scale to understand how these shape-of-land-and-sea factors fit in with both human-induced climate change and natural climate variability.
See also
Pelagic sediments
Biogenous Ooze
Document 2:::
An ecosphere is a planetary closed ecological system. In this global ecosystem, the various forms of energy and matter that constitute a given planet interact on a continual basis. The forces of the four Fundamental interactions cause the various forms of matter to settle into identifiable layers. These layers are referred to as component spheres with the type and extent of each component sphere varying significantly from one particular ecosphere to another. Component spheres that represent a significant portion of an ecosphere are referred to as a primary component spheres. For instance, Earth's ecosphere consists of five primary component spheres which are the Geosphere, Hydrosphere, Biosphere, Atmosphere, and Magnetosphere.
Types of component spheres
Geosphere
The layer of an ecosphere that exists at a Terrestrial planet's Center of mass and which extends radially outward until ending in a solid and spherical layer known as the Crust (geology).
This includes rocks and minerals that are present on the Earth as well as parts of soil and skeletal remains of animals that have become fossilized over the years. It also encompasses the rock cycle: solid rock is weathered, washed away, buried, and eventually resurrected as new rock. The primary agent driving these processes is the movement of Earth's tectonic plates, which creates mountains, volcanoes, and ocean basins. The inner core of the Earth contains liquid iron, which is an important factor in the geosphere as well as the magnetosphere.
Hydrosphere
The total mass of water, regardless of phase (e.g. liquid, solid, gas), that exists within an ecosphere. It's possible for the hydrosphere to be highly distributed throughout other component spheres such as the geosphere and atmosphere.
There are about 1.4 billion km³ of water on Earth. That includes liquid water in the ocean, lakes, and rivers. It includes frozen water in snow, ice, and glaciers, and water that's underground in soils and rocks
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
Aquatic science is the study of the various bodies of water that make up our planet including oceanic and freshwater environments. Aquatic scientists study the movement of water, the chemistry of water, aquatic organisms, aquatic ecosystems, the movement of materials in and out of aquatic ecosystems, and the use of water by humans, among other things. Aquatic scientists examine current processes as well as historic processes, and the water bodies that they study can range from tiny areas measured in millimeters to full oceans. Moreover, aquatic scientists work in Interdisciplinary groups. For example, a physical oceanographer might work with a biological oceanographer to understand how physical processes, such as tropical cyclones or rip currents, affect organisms in the Atlantic Ocean. Chemists and biologists, on the other hand, might work together to see how the chemical makeup of a certain body of water affects the plants and animals that reside there. Aquatic scientists can work to tackle global problems such as global oceanic change and local problems, such as trying to understand why a drinking water supply in a certain area is polluted.
There are two main fields of study that fall within the field of aquatic science. These fields of study include oceanography and limnology.
Oceanography
Oceanography refers to the study of the physical, chemical, and biological characteristics of oceanic environments. Oceanographers study the history, current condition, and future of the planet's oceans. They also study marine life and ecosystems, ocean circulation, plate tectonics, the geology of the seafloor, and the chemical and physical properties of the ocean.
Oceanography is interdisciplinary. For example, there are biological oceanographers and marine biologists. These scientists specialize in marine organisms. They study how these organisms develop, their relationship with one another, and how they interact and adapt to their environment. Biological oceanographers
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Oceans are made of a solution of what?
A. salt and water
B. salt and carbon
C. water and carbon
D. salt and algae
Answer:
|
|
sciq-10452
|
multiple_choice
|
Where does the majority of chemical digestion occur?
|
[
"small intestine",
"large intestine",
"mouth",
"stomach"
] |
A
|
Relevant Documents:
Document 0:::
Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use.
In the human digestive system, food enters the mouth and mechanical digestion of the food starts by the action of mastication (chewing), a form of mechanical digestion, and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food; the saliva also contains mucus, which lubricates the food, and hydrogen carbonate, which provides the ideal conditions of pH (alkaline) for amylase to work, and electrolytes (Na+, K+, Cl−, HCO−3). About 30% of starch is hydrolyzed into disaccharide in the oral cavity (mouth). After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. They provide a slimy layer that acts as a shield against the damag
Document 1:::
The large intestine, also known as the large bowel, is the last part of the gastrointestinal tract and of the digestive system in tetrapods. Water is absorbed here and the remaining waste material is stored in the rectum as feces before being removed by defecation. The colon is the longest portion of the large intestine, and the terms are often used interchangeably but most sources define the large intestine as the combination of the cecum, colon, rectum, and anal canal. Some other sources exclude the anal canal.
In humans, the large intestine begins in the right iliac region of the pelvis, just at or below the waist, where it is joined to the end of the small intestine at the cecum, via the ileocecal valve. It then continues as the colon ascending the abdomen, across the width of the abdominal cavity as the transverse colon, and then descending to the rectum and its endpoint at the anal canal. Overall, in humans, the large intestine is about long, which is about one-fifth of the whole length of the human gastrointestinal tract.
Structure
The colon of the large intestine is the last part of the digestive system. It has a segmented appearance due to a series of saccules called haustra. It extracts water and salt from solid wastes before they are eliminated from the body and is the site in which the fermentation of unabsorbed material by the gut microbiota occurs. Unlike the small intestine, the colon does not play a major role in absorption of foods and nutrients. About 1.5 litres or 45 ounces of water arrives in the colon each day.
The colon is the longest part of the large intestine and its average length in the adult human is 65 inches or 166 cm (range of 80 to 313 cm) for males, and 61 inches or 155 cm (range of 80 to 214 cm) for females.
Sections
In mammals, the large intestine consists of the cecum (including the appendix), colon (the longest part), rectum, and anal canal.
The four sections of the colon are: the ascending colon, transverse colon, descending colon, and sigmoid colon.
Document 2:::
The Joan Mott Prize Lecture is a prize lecture awarded annually by The Physiological Society in honour of Joan Mott.
Laureates
Laureates of the award have included:
- Intestinal absorption of sugars and peptides: from textbook to surprises
See also
Physiological Society Annual Review Prize Lecture
Document 3:::
Digestion chambers are a histologic finding in nerves that are undergoing Wallerian degeneration.
Appearance
Digestion chambers consist of small globular fragments, which represent degenerating myelin sheaths.
See also
Nerve injury
Document 4:::
Gastrointestinal physiology is the branch of human physiology that addresses the physical function of the gastrointestinal (GI) tract. The function of the GI tract is to process ingested food by mechanical and chemical means, extract nutrients and excrete waste products. The GI tract is composed of the alimentary canal, that runs from the mouth to the anus, as well as the associated glands, chemicals, hormones, and enzymes that assist in digestion. The major processes that occur in the GI tract are: motility, secretion, regulation, digestion and circulation. The proper function and coordination of these processes are vital for maintaining good health by providing for the effective digestion and uptake of nutrients.
Motility
The gastrointestinal tract generates motility using smooth muscle subunits linked by gap junctions. These subunits fire spontaneously in either a tonic or a phasic fashion. Tonic contractions are those contractions that are maintained from several minutes up to hours at a time. These occur in the sphincters of the tract, as well as in the anterior stomach. The other type of contractions, called phasic contractions, consist of brief periods of both relaxation and contraction, occurring in the posterior stomach and the small intestine, and are carried out by the muscularis externa.
Motility may be overactive (hypermotility), leading to diarrhea or vomiting, or underactive (hypomotility), leading to constipation or vomiting; either may cause abdominal pain.
Stimulation
The stimulation for these contractions likely originates in modified smooth muscle cells called interstitial cells of Cajal. These cells cause spontaneous cycles of slow wave potentials that can cause action potentials in smooth muscle cells. They are associated with the contractile smooth muscle via gap junctions. These slow wave potentials must reach a threshold level for the action potential to occur, whereupon Ca2+ channels on the smooth muscle open and an action potential occurs.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where does the majority of chemical digestion occur?
A. small intestine
B. large intestine
C. mouth
D. stomach
Answer:
|
|
sciq-11418
|
multiple_choice
|
What do you call the point at which the entire weight of a body may be considered to be concentrated?
|
[
"complex of gravity",
"center of gravity",
"center of earth",
"direction of gravity"
] |
B
|
Relevant Documents:
Document 0:::
In physics, a center of gravity of a material body is a point that may be used for a summary description of gravitational interactions. In a uniform gravitational field, the center of mass serves as the center of gravity. This is a very good approximation for smaller bodies near the surface of Earth, so there is no practical need to distinguish "center of gravity" from "center of mass" in most applications, such as engineering and medicine.
In a non-uniform field, gravitational effects such as potential energy, force, and torque can no longer be calculated using the center of mass alone. In particular, a non-uniform gravitational field can produce a torque on an object, even about an axis through the center of mass. The center of gravity seeks to explain this effect. Formally, a center of gravity is an application point of the resultant gravitational force on the body. Such a point may not exist, and if it exists, it is not unique. One can further define a unique center of gravity by approximating the field as either parallel or spherically symmetric.
The concept of a center of gravity as distinct from the center of mass is rarely used in applications, even in celestial mechanics, where non-uniform fields are important. Since the center of gravity depends on the external field, its motion is harder to determine than the motion of the center of mass. The common method to deal with gravitational torques is a field theory.
Center of mass
One way to define the center of gravity of a body is as the unique point in the body if it exists, that satisfies the following requirement: There is no torque about the point for any positioning of the body in the field of force in which it is placed. This center of gravity exists only when the force is uniform, in which case it coincides with the center of mass. This approach dates back to Archimedes.
Centers of gravity in a field
When a body is affected by a non-uniform external gravitational field, one can sometimes define a center of gravity.
Document 1:::
The center of gravity (CG) of an aircraft is the point over which the aircraft would balance. Its position is calculated after supporting the aircraft on at least two sets of weighing scales or load cells and noting the weight shown on each set of scales or load cells. The center of gravity affects the stability of the aircraft. To ensure the aircraft is safe to fly, the center of gravity must fall within specified limits established by the aircraft manufacturer.
Terminology
Ballast Ballast is removable or permanently installed weight in an aircraft used to bring the center of gravity into the allowable range.
Center-of-Gravity Limits Center of gravity (CG) limits are specified longitudinal (forward and aft) and/or lateral (left and right) limits within which the aircraft's center of gravity must be located during flight. The CG limits are indicated in the airplane flight manual. The area between the limits is called the CG range of the aircraft.
Weight and BalanceWhen the weight of the aircraft is at or below the allowable limit(s) for its configuration (parked, ground movement, take-off, landing, etc.) and its center of gravity is within the allowable range, and both will remain so for the duration of the flight, the aircraft is said to be within weight and balance. Different maximum weights may be defined for different situations; for example, large aircraft may have maximum landing weights that are lower than maximum take-off weights (because some weight is expected to be lost as fuel is burned during the flight). The center of gravity may change over the duration of the flight as the aircraft's weight changes due to fuel burn or by passengers moving forward or aft in the cabin.
Reference DatumThe reference datum is a reference plane that allows accurate, and uniform, measurements to any point on the aircraft. The location of the reference datum is established by the manufacturer and is defined in the aircraft flight manual. The horizontal reference dat
Document 2:::
The gravity of Earth, denoted by , is the net acceleration that is imparted to objects due to the combined effect of gravitation (from mass distribution within Earth) and the centrifugal force (from the Earth's rotation).
It is a vector quantity, whose direction coincides with that of a plumb bob and whose strength or magnitude is given by the norm ‖g‖.
In SI units this acceleration is expressed in metres per second squared (in symbols, m/s2 or m·s−2) or equivalently in newtons per kilogram (N/kg or N·kg−1). Near Earth's surface, the acceleration due to gravity, accurate to 2 significant figures, is 9.8 m/s2. This means that, ignoring the effects of air resistance, the speed of an object falling freely will increase by about 9.8 m/s every second. This quantity is sometimes referred to informally as little g (in contrast, the gravitational constant G is referred to as big G).
The precise strength of Earth's gravity varies with location. The agreed-upon value for standard gravity is 9.80665 m/s2 by definition. This quantity is denoted variously as gn, ge (though this sometimes means the normal gravity at the equator), g0, or simply g (which is also used for the variable local value).
The weight of an object on Earth's surface is the downwards force on that object, given by Newton's second law of motion, W = mg. Gravitational acceleration contributes to the total gravity acceleration, but other factors, such as the rotation of Earth, also contribute, and, therefore, affect the weight of the object. Gravity does not normally include the gravitational pull of the Moon and Sun, which are accounted for in terms of tidal effects.
Variation in magnitude
A non-rotating perfect sphere of uniform mass density, or whose density varies solely with distance from the centre (spherical symmetry), would produce a gravitational field of uniform magnitude at all points on its surface. The Earth is rotating and is also not spherically symmetric; rather, it is slightly flatter at the poles while bulging at the Equator: an oblate spheroid.
Document 3:::
In common usage, the mass of an object is often referred to as its weight, though these are in fact different concepts and quantities. Nevertheless, one object will always weigh more than another with less mass if both are subject to the same gravity (i.e. the same gravitational field strength).
In scientific contexts, mass is the amount of "matter" in an object (though "matter" may be difficult to define), but weight is the force exerted on an object's matter by gravity. At the Earth's surface, an object whose mass is exactly one kilogram weighs approximately 9.81 newtons, the product of its mass and the gravitational field strength there. The object's weight is less on Mars, where gravity is weaker; more on Saturn, where gravity is stronger; and very small in space, far from significant sources of gravity, but it always has the same mass.
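The mass–weight distinction in the paragraph above can be made concrete with a short worked calculation (using the commonly quoted surface values g ≈ 9.81 m/s² for Earth and g ≈ 3.71 m/s² for Mars):

```latex
W = mg,
\qquad
W_{\text{Earth}} = (1\,\mathrm{kg})(9.81\,\mathrm{m/s^2}) = 9.81\,\mathrm{N},
\qquad
W_{\text{Mars}} = (1\,\mathrm{kg})(3.71\,\mathrm{m/s^2}) = 3.71\,\mathrm{N}.
```

The mass m = 1 kg is identical in both cases; only the weight W changes with the local field strength.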
Material objects at the surface of the Earth have weight despite such sometimes being difficult to measure. An object floating freely on water, for example, does not appear to have weight since it is buoyed by the water. But its weight can be measured if it is added to water in a container which is entirely supported by and weighed on a scale. Thus, the "weightless object" floating in water actually transfers its weight to the bottom of the container (where the pressure increases). Similarly, a balloon has mass but may appear to have no weight or even negative weight, due to buoyancy in air. However the weight of the balloon and the gas inside it has merely been transferred to a large area of the Earth's surface, making the weight difficult to measure. The weight of a flying airplane is similarly distributed to the ground, but does not disappear. If the airplane is in level flight, the same weight-force is distributed to the surface of the Earth as when the plane was on the runway, but spread over a larger area.
A better scientific definition of mass is its description as being a measure of inertia, which is the tendency of an
Document 4:::
In physics, the center of mass of a distribution of mass in space (sometimes referred to as the barycenter or balance point) is the unique point at any given time where the weighted relative position of the distributed mass sums to zero. This is the point to which a force may be applied to cause a linear acceleration without an angular acceleration. Calculations in mechanics are often simplified when formulated with respect to the center of mass. It is a hypothetical point where the entire mass of an object may be assumed to be concentrated to visualise its motion. In other words, the center of mass is the particle equivalent of a given object for application of Newton's laws of motion.
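The defining condition stated above, that the weighted relative position of the distributed mass sums to zero, can be written explicitly for a system of point masses m_i at positions r_i:

```latex
\sum_i m_i \left( \mathbf{r}_i - \mathbf{R} \right) = \mathbf{0}
\quad\Longrightarrow\quad
\mathbf{R} = \frac{\sum_i m_i \mathbf{r}_i}{\sum_i m_i},
```

so the center of mass R is simply the mass-weighted average of the particle positions.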
In the case of a single rigid body, the center of mass is fixed in relation to the body, and if the body has uniform density, it will be located at the centroid. The center of mass may be located outside the physical body, as is sometimes the case for hollow or open-shaped objects, such as a horseshoe. In the case of a distribution of separate bodies, such as the planets of the Solar System, the center of mass may not correspond to the position of any individual member of the system.
The center of mass is a useful reference point for calculations in mechanics that involve masses distributed in space, such as the linear and angular momentum of planetary bodies and rigid body dynamics. In orbital mechanics, the equations of motion of planets are formulated as point masses located at the centers of mass (see Barycenter (astronomy) for details). The center of mass frame is an inertial frame in which the center of mass of a system is at rest with respect to the origin of the coordinate system.
History
The concept of center of gravity or weight was studied extensively by the ancient Greek mathematician, physicist, and engineer Archimedes of Syracuse. He worked with simplified assumptions about gravity that amount to a uniform field, thus arriving at the mathematical properties of what
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do you call the point at which the entire weight of a body may be considered to be concentrated?
A. complex of gravity
B. center of gravity
C. center of earth
D. direction of gravity
Answer:
|
|
sciq-5913
|
multiple_choice
|
Fish are a diverse and interesting group of organisms in what sub-phylum?
|
[
"invertebrates",
"organelles",
"mammals",
"vertebrates"
] |
D
|
Relevant Documents:
Document 0:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad
Document 1:::
Fish anatomy is the study of the form or morphology of fish. It can be contrasted with fish physiology, which is the study of how the component parts of fish function together in the living fish. In practice, fish anatomy and fish physiology complement each other, the former dealing with the structure of a fish, its organs or component parts and how they are put together, such as might be observed on the dissecting table or under the microscope, and the latter dealing with how those components function together in living fish.
The anatomy of fish is often shaped by the physical characteristics of water, the medium in which fish live. Water is much denser than air, holds a relatively small amount of dissolved oxygen, and absorbs more light than air does. The body of a fish is divided into a head, trunk and tail, although the divisions between the three are not always externally visible. The skeleton, which forms the support structure inside the fish, is either made of cartilage (cartilaginous fish) or bone (bony fish). The main skeletal element is the vertebral column, composed of articulating vertebrae which are lightweight yet strong. The ribs attach to the spine and there are no limbs or limb girdles. The main external features of the fish, the fins, are composed of either bony or soft spines called rays which, with the exception of the caudal fins, have no direct connection with the spine. They are supported by the muscles which compose the main part of the trunk.
The heart has two chambers and pumps the blood through the respiratory surfaces of the gills and then around the body in a single circulatory loop. The eyes are adapted for seeing underwater and have only local vision. There is an inner ear but no external or middle ear. Low-frequency vibrations are detected by the lateral line system of sense organs that run along the length of the sides of fish, which responds to nearby movements and to changes in water pressure.
Sharks and rays are basal fish with
Document 2:::
A fish (: fish or fishes) is an aquatic, craniate, gill-bearing animal that lacks limbs with digits. Included in this definition are the living hagfish, lampreys, and cartilaginous and bony fish as well as various extinct related groups. Approximately 95% of living fish species are ray-finned fish, belonging to the class Actinopterygii, with around 99% of those being teleosts.
The earliest organisms that can be classified as fish were soft-bodied chordates that first appeared during the Cambrian period. Although they lacked a true spine, they possessed notochords which allowed them to be more agile than their invertebrate counterparts. Fish would continue to evolve through the Paleozoic era, diversifying into a wide variety of forms. Many fish of the Paleozoic developed external armor that protected them from predators. The first fish with jaws appeared in the Silurian period, after which many (such as sharks) became formidable marine predators rather than just the prey of arthropods.
Most fish are ectothermic ("cold-blooded"), allowing their body temperatures to vary as ambient temperatures change, though some of the large active swimmers like white shark and tuna can hold a higher core temperature. Fish can acoustically communicate with each other, most often in the context of feeding, aggression or courtship.
Fish are abundant in most bodies of water. They can be found in nearly all aquatic environments, from high mountain streams (e.g., char and gudgeon) to the abyssal and even hadal depths of the deepest oceans (e.g., cusk-eels and snailfish), although no species has yet been documented in the deepest 25% of the ocean. With 34,300 described species, fish exhibit greater species diversity than any other group of vertebrates.
Fish are an important resource for humans worldwide, especially as food. Commercial and subsistence fishers hunt fish in wild fisheries or farm them in ponds or in cages in the ocean (in aquaculture). They are also caught by recreational
Document 3:::
Vertebrate zoology is the biological discipline that consists of the study of Vertebrate animals, i.e., animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. Many natural history museums have departments named Vertebrate Zoology. In some cases whole museums bear this name, e.g. the Museum of Vertebrate Zoology at the University of California, Berkeley.
Subdivisions
This subdivision of zoology has many further subdivisions, including:
Ichthyology - the study of fishes.
Mammalogy - the study of mammals.
Chiropterology - the study of bats.
Primatology - the study of primates.
Ornithology - the study of birds.
Herpetology - the study of reptiles.
Batrachology - the study of amphibians.
These divisions are sometimes further divided into more specific specialties.
Document 4:::
Polydactyly in stem-tetrapods should here be understood as having more than five digits to the finger or foot, a condition that was the natural state of affairs in the earliest stegocephalians during the evolution of terrestriality. The polydactyly in these largely aquatic animals is not to be confused with polydactyly in the medical sense, i.e. it was not an anomaly in the sense it was not a congenital condition of having more than the typical number of digits for a given taxon. Rather, it appears to be a result of the early evolution from a limb with a fin rather than digits.
"Living tetrapods, such as the frogs, turtles, birds and mammals, are a subgroup of the tetrapod lineage. The lineage also includes finned and limbed tetrapods that are more closely related to living tetrapods than to living lungfishes." Tetrapods evolved from animals with fins such as found in lobe-finned fishes. From this condition a new pattern of limb formation evolved, where the development axis of the limb rotated to sprout secondary axes along the lower margin, giving rise to a variable number of very stout skeletal supports for a paddle-like foot. The condition is thought to have arisen from the loss of the fin ray-forming proteins actinodin 1 and actinodin 2 or modification of the expression of HOXD13. It is still unknown why exactly this happens. "SHH is produced by the mesenchymal cells of the zone of polarizing activity (ZPA) found at the posterior margin of the limbs of all vertebrates with paired appendages, including the most primitive chondrichthyian fishes. Its expression is driven by a well-conserved limb-specific enhancer called the ZRS (zone of polarizing region activity regulatory sequence) that is located approximately 1 Mb upstream of the coding sequence of Shh."
Devonian taxa were polydactylous. Acanthostega had eight digits on both the hindlimbs and forelimbs. Ichthyostega, which was both more derived and more specialized, had seven digits on the hindlimb, though th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Fish are a diverse and interesting group of organisms in what sub-phylum?
A. invertebrates
B. organelles
C. mammals
D. vertebrates
Answer:
|
|
sciq-9613
|
multiple_choice
|
What is the sum of all biochemical reactions in cells called?
|
[
"metabolism",
"respiration",
"diffusion",
"evolution"
] |
A
|
Relevant Documents:
Document 0:::
Biochemistry is the study of the chemical processes in living organisms. It deals with the structure and function of cellular components such as proteins, carbohydrates, lipids, nucleic acids and other biomolecules.
Articles related to biochemistry include:
0–9
2-amino-5-phosphonovalerate - 3' end - 5' end
Document 1:::
In biology, the biological cost or metabolic price is a measure of the increased energy metabolism that is required to achieve a function. Drug resistance in microbiology, for instance, has a very high metabolic price, especially for antibiotic resistance.
Document 2:::
The following outline is provided as an overview of and topical guide to biophysics:
Biophysics – interdisciplinary science that uses the methods of physics to study biological systems.
Nature of biophysics
Biophysics is
An academic discipline – branch of knowledge that is taught and researched at the college or university level. Disciplines are defined (in part), and recognized by the academic journals in which research is published, and the learned societies and academic departments or faculties to which their practitioners belong.
A scientific field (a branch of science) – widely recognized category of specialized expertise within science, and typically embodies its own terminology and nomenclature. Such a field will usually be represented by one or more scientific journals, where peer-reviewed research is published.
A natural science – one that seeks to elucidate the rules that govern the natural world using empirical and scientific methods.
A biological science – concerned with the study of living organisms, including their structure, function, growth, evolution, distribution, and taxonomy.
A branch of physics – concerned with the study of matter and its motion through space and time, along with related concepts such as energy and force.
An interdisciplinary field – field of science that overlaps with other sciences
Scope of biophysics research
Biomolecular scale
Biomolecule
Biomolecular structure
Organismal scale
Animal locomotion
Biomechanics
Biomineralization
Motility
Environmental scale
Biophysical environment
Biophysics research overlaps with
Agrophysics
Biochemistry
Biophysical chemistry
Bioengineering
Biogeophysics
Nanotechnology
Systems biology
Branches of biophysics
Astrobiophysics – field of intersection between astrophysics and biophysics concerned with the influence of the astrophysical phenomena upon life on planet Earth or some other planet in general.
Medical biophysics – interdisciplinary field that applies me
Document 3:::
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles a
Document 4:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind: neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases, which has aided the overall understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the sum of all biochemical reactions in cells called?
A. metabolism
B. respiration
C. diffusion
D. evolution
Answer:
|
|
sciq-7658
|
multiple_choice
|
What are the two types of vascular tissue?
|
[
"xylem and phloem",
"cytoplasm and ectoplasm",
"phloem and phlegm",
"ectoderm and phloem"
] |
A
|
Relevant Documents:
Document 0:::
Vascular plants (), also called tracheophytes () or collectively Tracheophyta (), form a large group of land plants ( accepted known species) that have lignified tissues (the xylem) for conducting water and minerals throughout the plant. They also have a specialized non-lignified tissue (the phloem) to conduct products of photosynthesis. Vascular plants include the clubmosses, horsetails, ferns, gymnosperms (including conifers), and angiosperms (flowering plants). Scientific names for the group include Tracheophyta, Tracheobionta and Equisetopsida sensu lato. Some early land plants (the rhyniophytes) had less developed vascular tissue; the term eutracheophyte has been used for all other vascular plants, including all living ones.
Historically, vascular plants were known as "higher plants", as it was believed that they were further evolved than other plants due to being more complex organisms. However, this is an antiquated remnant of the obsolete scala naturae, and the term is generally considered to be unscientific.
Characteristics
Botanists define vascular plants by three primary characteristics:
Vascular plants have vascular tissues which distribute resources through the plant. Two kinds of vascular tissue occur in plants: xylem and phloem. Phloem and xylem are closely associated with one another and are typically located immediately adjacent to each other in the plant. The combination of one xylem and one phloem strand adjacent to each other is known as a vascular bundle. The evolution of vascular tissue in plants allowed them to evolve to larger sizes than non-vascular plants, which lack these specialized conducting tissues and are thereby restricted to relatively small sizes.
In vascular plants, the principal generation or phase is the sporophyte, which produces spores and is diploid (having two sets of chromosomes per cell). (By contrast, the principal generation phase in non-vascular plants is the gametophyte, which produces gametes and is haploid - with
Document 1:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
Document 2:::
Vascular tissue is a complex conducting tissue, formed of more than one cell type, found in vascular plants. The primary components of vascular tissue are the xylem and phloem. These two tissues transport fluid and nutrients internally. There are also two meristems associated with vascular tissue: the vascular cambium and the cork cambium. All the vascular tissues within a particular plant together constitute the vascular tissue system of that plant.
The cells in vascular tissue are typically long and slender. Since the xylem and phloem function in the conduction of water, minerals, and nutrients throughout the plant, it is not surprising that their form should be similar to pipes. The individual cells of phloem are connected end-to-end, just as the sections of a pipe might be. As the plant grows, new vascular tissue differentiates in the growing tips of the plant. The new tissue is aligned with existing vascular tissue, maintaining its connection throughout the plant. The vascular tissue in plants is arranged in long, discrete strands called vascular bundles. These bundles include both xylem and phloem, as well as supporting and protective cells. In stems and roots, the xylem typically lies closer to the interior of the stem with phloem towards the exterior of the stem. In the stems of some Asterales dicots, there may be phloem located inwardly from the xylem as well.
Between the xylem and phloem is a meristem called the vascular cambium. This tissue divides off cells that will become additional xylem and phloem. This growth increases the girth of the plant, rather than its length. As long as the vascular cambium continues to produce new cells, the plant will continue to grow more stout. In trees and other plants that develop wood, the vascular cambium allows the expansion of vascular tissue that produces woody growth. Because this growth ruptures the epidermis of the stem, woody plants also have a cork cambium that develops among the phloem. The cork cambium g
Document 3:::
In biology, tissue is a historically derived biological organizational level between cells and a complete organ. A tissue is therefore often thought of as an assembly of similar cells and their extracellular matrix from the same embryonic origin that together carry out a specific function. Organs are then formed by the functional grouping together of multiple tissues.
Biological organisms follow this hierarchy:
Cells < Tissue < Organ < Organ System < Organism
The English word "tissue" derives from the French word "tissu", the past participle of the verb tisser, "to weave".
The study of tissues is known as histology or, in connection with disease, as histopathology. Xavier Bichat is considered as the "Father of Histology". Plant histology is studied in both plant anatomy and physiology. The classical tools for studying tissues are the paraffin block in which tissue is embedded and then sectioned, the histological stain, and the optical microscope. Developments in electron microscopy, immunofluorescence, and the use of frozen tissue-sections have enhanced the detail that can be observed in tissues. With these tools, the classical appearances of tissues can be examined in health and disease, enabling considerable refinement of medical diagnosis and prognosis.
Plant tissue
In plant anatomy, tissues are categorized broadly into three tissue systems: the epidermis, the ground tissue, and the vascular tissue.
Epidermis – Cells forming the outer surface of the leaves and of the young plant body.
Vascular tissue – The primary components of vascular tissue are the xylem and phloem. These transport fluids and nutrients internally.
Ground tissue – Ground tissue is less differentiated than other tissues. Ground tissue manufactures nutrients by photosynthesis and stores reserve nutrients.
Plant tissues can also be divided differently into two types:
Meristematic tissues
Permanent tissues.
Meristematic tissue
Meristematic tissue consists of actively dividing cell
Document 4:::
The endothelium (: endothelia) is a single layer of squamous endothelial cells that line the interior surface of blood vessels and lymphatic vessels. The endothelium forms an interface between circulating blood or lymph in the lumen and the rest of the vessel wall. Endothelial cells form the barrier between vessels and tissue and control the flow of substances and fluid into and out of a tissue.
Endothelial cells in direct contact with blood are called vascular endothelial cells whereas those in direct contact with lymph are known as lymphatic endothelial cells. Vascular endothelial cells line the entire circulatory system, from the heart to the smallest capillaries.
These cells have unique functions that include fluid filtration, such as in the glomerulus of the kidney, blood vessel tone, hemostasis, neutrophil recruitment, and hormone trafficking. Endothelium of the interior surfaces of the heart chambers is called endocardium. An impaired function can lead to serious health issues throughout the body.
Structure
The endothelium is a thin layer of single flat (squamous) cells that line the interior surface of blood vessels and lymphatic vessels.
Endothelium is of mesodermal origin. Both blood and lymphatic capillaries are composed of a single layer of endothelial cells called a monolayer. In straight sections of a blood vessel, vascular endothelial cells typically align and elongate in the direction of fluid flow.
Terminology
The foundational model of anatomy, an index of terms used to describe anatomical structures, makes a distinction between endothelial cells and epithelial cells on the basis of which tissues they develop from, and states that the presence of vimentin rather than keratin filaments separates these from epithelial cells. Many considered the endothelium a specialized epithelial tissue.
Function
The endothelium forms an interface between circulating blood or lymph in the lumen and the rest of the vessel wall. This forms a barrier between v
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are the two types of vascular tissue?
A. xylem and phloem
B. cytoplasm and ectoplasm
C. phloem and phlegm
D. ectoderm and phloem
Answer:
|
|
sciq-3523
|
multiple_choice
|
What do you call a group of organisms of the same species that live in the same area?
|
[
"system",
"ecosystem",
"population",
"biosphere"
] |
C
|
Relevant Documents:
Document 0:::
Ecological units comprise concepts such as population, community, and ecosystem as the basic units, which are at the basis of ecological theory and research, as well as a focus point of many conservation strategies. The concept of ecological units continues to suffer from inconsistencies and confusion over its terminology. Analyses of the existing concepts used in describing ecological units have determined that they differ with respect to four major criteria:
The questions as to whether they are defined statistically or via a network of interactions,
If their boundaries are drawn by topographical or process-related criteria,
How high the required internal relationships are,
And if they are perceived as "real" entities or abstractions by an observer.
A population is considered to be the smallest ecological unit, consisting of a group of individuals that belong to the same species. A community would be the next classification, referring to all of the populations present in an area at a specific time, followed by an ecosystem, referring to the community and its interactions with its physical environment. An ecosystem is the most commonly used ecological unit and can be universally defined by two common traits:
The unit is often defined in terms of a natural border (maritime boundary, watersheds, etc.)
Abiotic components and organisms within the unit are considered to be interlinked.
See also
Biogeographic realm
Ecoregion
Ecotope
Holobiont
Functional ecology
Behavior settings
Regional geology
Document 1:::
Population ecology is a sub-field of ecology that deals with the dynamics of species populations and how these populations interact with the environment, such as birth and death rates, and by immigration and emigration.
The discipline is important in conservation biology, especially in the development of population viability analysis which makes it possible to predict the long-term probability of a species persisting in a given patch of habitat. Although population ecology is a subfield of biology, it provides interesting problems for mathematicians and statisticians who work in population dynamics.
History
In the 1940s ecology was divided into autecology—the study of individual species in relation to the environment—and synecology—the study of groups of species in relation to the environment. The term autecology (from Ancient Greek: αὐτο, aúto, "self"; οίκος, oíkos, "household"; and λόγος, lógos, "knowledge"), refers to roughly the same field of study as concepts such as life cycles and behaviour as adaptations to the environment by individual organisms. Eugene Odum, writing in 1953, considered that synecology should be divided into population ecology, community ecology and ecosystem ecology, renaming autecology as 'species ecology' (Odum regarded "autecology" as an archaic term), thus that there were four subdivisions of ecology.
Terminology
A population is defined as a group of interacting organisms of the same species. A demographic structure of a population is how populations are often quantified. The total number of individuals in a population is defined as a population size, and how dense these individuals are is defined as population density. There is also a population’s geographic range, which has limits that a species can tolerate (such as temperature).
Population size can be influenced by the per capita population growth rate (the rate at which the population size changes per individual in the population). Births, deaths, emigration, and immigration rates
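The per-capita growth bookkeeping described above can be sketched in a few lines. This is an added illustration; the function name and the rate values are hypothetical, not from the source:

```python
# Hypothetical illustration: one discrete time step of population change,
# combining per-capita birth/death rates with absolute migration counts.
def update_population(n, birth_rate, death_rate, immigration, emigration):
    births = birth_rate * n      # per-capita births this step
    deaths = death_rate * n      # per-capita deaths this step
    return n + births - deaths + immigration - emigration

n0 = 1000
n1 = update_population(n0, birth_rate=0.05, death_rate=0.02,
                       immigration=10, emigration=5)
per_capita_growth = (n1 - n0) / n0  # the per-capita population growth rate
```

With the example numbers, the population changes from 1000 to 1035, giving a per-capita growth rate of 0.035 for that step.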
Document 2:::
This glossary of biology terms is a list of definitions of fundamental terms and concepts used in biology, the study of life and of living organisms. It is intended as introductory material for novices; for more specific and technical definitions from sub-disciplines and related fields, see Glossary of cell biology, Glossary of genetics, Glossary of evolutionary biology, Glossary of ecology, Glossary of environmental science and Glossary of scientific naming, or any of the organism-specific glossaries in :Category:Glossaries of biology.
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
R
S
T
U
V
W
X
Y
Z
Related to this search
Index of biology articles
Outline of biology
Glossaries of sub-disciplines and related fields:
Glossary of botany
Glossary of ecology
Glossary of entomology
Glossary of environmental science
Glossary of genetics
Glossary of ichthyology
Glossary of ornithology
Glossary of scientific naming
Glossary of speciation
Glossary of virology
Document 3:::
This glossary of ecology is a list of definitions of terms and concepts in ecology and related fields. For more specific definitions from other glossaries related to ecology, see Glossary of biology, Glossary of evolutionary biology, and Glossary of environmental science.
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
X
Y
Z
See also
Outline of ecology
History of ecology
Document 4:::
Microbial population biology is the application of the principles of population biology to microorganisms.
Distinguishing from other biological disciplines
Microbial population biology, in practice, is the application of population ecology and population genetics toward understanding the ecology and evolution of bacteria, archaebacteria, microscopic fungi (such as yeasts), additional microscopic eukaryotes (e.g., "protozoa" and algae), and viruses.
Microbial population biology also encompasses the evolution and ecology of community interactions (community ecology) between microorganisms, including microbial coevolution and predator-prey interactions. In addition, microbial population biology considers microbial interactions with more macroscopic organisms (e.g., host-parasite interactions), though strictly this should be more from the perspective of the microscopic rather than the macroscopic organism. A good deal of microbial population biology may be described also as microbial evolutionary ecology. On the other hand, typically microbial population biologists (unlike microbial ecologists) are less concerned with questions of the role of microorganisms in ecosystem ecology, which is the study of nutrient cycling and energy movement between biotic as well as abiotic components of ecosystems.
Microbial population biology can include aspects of molecular evolution or phylogenetics. Strictly, however, these emphases should be employed toward understanding issues of microbial evolution and ecology rather than as a means of understanding more universal truths applicable to both microscopic and macroscopic organisms. The microorganisms in such endeavors consequently should be recognized as organisms rather than simply as molecular or evolutionary reductionist model systems. Thus, the study of RNA in vitro evolution is not microbial population biology and nor is the in silico generation of phylogenies of otherwise non-microbial sequences, even if aspects of either may
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do you call a group of organisms of the same species that live in the same area?
A. system
B. ecosystem
C. population
D. biosphere
Answer:
|
|
sciq-7927
|
multiple_choice
|
What do you call the movement of a substance from an area of a higher amount toward an area of lower amount?
|
[
"precipitation",
"extraction",
"diffusion",
"filtration"
] |
C
|
Relevant Documents:
Document 0:::
Sorption is a physical and chemical process by which one substance becomes attached to another. Specific cases of sorption are treated in the following articles:
Absorption "the incorporation of a substance in one state into another of a different state" (e.g., liquids being absorbed by a solid or gases being absorbed by a liquid);
Adsorption The physical adherence or bonding of ions and molecules onto the surface of another phase (e.g., reagents adsorbed to a solid catalyst surface);
Ion exchange An exchange of ions between two electrolytes or between an electrolyte solution and a complex.
The reverse of sorption is desorption.
Sorption rate
The adsorption and absorption rate of a diluted solute in gas or liquid solution to a surface or interface can be calculated using Fick's laws of diffusion.
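As a rough illustration of the Fick's-law estimate mentioned above, the diffusive flux toward a sorbing surface can be computed from a boundary-layer (film) model. This is an added sketch; the film geometry and the numerical values are assumptions, not from the source:

```python
# Assumed illustration: Fick's first law, J = -D * dC/dx, applied to a thin
# stagnant film between the bulk fluid and the sorbing surface.
def fick_flux(diffusivity, c_bulk, c_surface, film_thickness):
    """Diffusive flux (mol m^-2 s^-1); positive sign points from bulk to surface."""
    gradient = (c_surface - c_bulk) / film_thickness  # dC/dx across the film
    return -diffusivity * gradient

# D = 1e-9 m^2/s (typical small solute in water), 1 mm film, surface held at
# zero concentration by fast sorption.
flux = fick_flux(1e-9, c_bulk=1.0, c_surface=0.0, film_thickness=1e-3)
```

Here the steeper the concentration gradient across the film, the larger the sorption-limited flux, which is the qualitative point the Fick's-law reference is making.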
See also
Sorption isotherm
Document 1:::
The Stefan flow, occasionally called Stefan's flow, is a transport phenomenon concerning the movement of a chemical species by a flowing fluid (typically in the gas phase) that is induced to flow by the production or removal of the species at an interface. Any process that adds the species of interest to or removes it from the flowing fluid may cause the Stefan flow, but the most common processes include evaporation, condensation, chemical reaction, sublimation, ablation, adsorption, absorption, and desorption. It was named after the Slovenian physicist, mathematician, and poet Josef Stefan for his early work on calculating evaporation rates.
The Stefan flow is distinct from diffusion as described by Fick's law, but diffusion almost always also occurs in multi-species systems that are experiencing the Stefan flow. In systems undergoing one of the species addition or removal processes mentioned previously, the addition or removal generates a mean flow in the flowing fluid as the fluid next to the interface is displaced by the production or removal of additional fluid by the processes occurring at the interface. The transport of the species by this mean flow is the Stefan flow. When concentration gradients of the species are also present, diffusion transports the species relative to the mean flow. The total transport rate of the species is then given by a summation of the Stefan flow and diffusive contributions.
An example of the Stefan flow occurs when a droplet of liquid evaporates in air. In this case, the vapor/air mixture surrounding the droplet is the flowing fluid, and liquid/vapor boundary of the droplet is the interface. As heat is absorbed by the droplet from the environment, some of the liquid evaporates into vapor at the surface of the droplet, and flows away from the droplet as it is displaced by additional vapor evaporating from the droplet. This process causes the flowing medium to move away from the droplet at some mean speed that is dependent on
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
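A worked answer to this sample question can be sketched from the ideal-gas adiabat (added here as an illustration, not part of the original excerpt):

```latex
% For a reversible adiabatic process of an ideal gas (heat capacity ratio
% \gamma > 1), combining PV^{\gamma} = \text{const} with PV = nRT gives
T V^{\gamma - 1} = \text{const}.
% During expansion V increases, so T must decrease: the answer is "decreases".
```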
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 4:::
In physics, a dynamical system is said to be mixing if the phase space of the system becomes strongly intertwined, according to at least one of several mathematical definitions. For example, a measure-preserving transformation T is said to be strong mixing if

lim_{n→∞} μ(T^{−n}A ∩ B) = μ(A) μ(B)

whenever A and B are any measurable sets and μ is the associated measure. Other definitions are possible, including weak mixing and topological mixing.
The mathematical definition of mixing is meant to capture the notion of physical mixing. A canonical example is the Cuba libre: suppose one is adding rum (the set A) to a glass of cola. After stirring the glass, the bottom half of the glass (the set B) will contain rum, and it will be in the same proportion as it is elsewhere in the glass. The mixing is uniform: no matter which region B one looks at, some of A will be in that region. A far more detailed, but still informal description of mixing can be found in the article on mixing (mathematics).
Every mixing transformation is ergodic, but there are ergodic transformations which are not mixing.
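The strong-mixing property can be checked numerically for a standard example, the doubling map T(x) = 2x mod 1 on [0, 1), which preserves Lebesgue measure. This is an added sketch, not part of the source article:

```python
# Added sketch: estimate mu(T^-n(A) ∩ B) for the doubling map T(x) = 2x mod 1
# on a deterministic grid. Strong mixing predicts the estimate approaches
# mu(A) * mu(B) as n grows, for intervals A and B in [0, 1).
def preimage_mass(n, a, b, samples=100_000):
    """Fraction of grid points x in B with T^n(x) in A, i.e. mu(T^-n(A) ∩ B)."""
    count = 0
    for i in range(samples):
        x = (i + 0.5) / samples            # midpoint grid over [0, 1)
        if not (b[0] <= x < b[1]):
            continue
        y = x
        for _ in range(n):                  # apply T n times
            y = (2.0 * y) % 1.0
        if a[0] <= y < a[1]:
            count += 1
    return count / samples

A = B = (0.0, 0.5)                          # mu(A) = mu(B) = 0.5
before = preimage_mass(0, A, B)             # mu(A ∩ B) = 0.5
after = preimage_mass(10, A, B)             # decorrelates toward mu(A)*mu(B) = 0.25
```

With A = B, the overlap starts at 0.5 for n = 0 and relaxes toward 0.25 as the map scrambles the interval, matching the limit in the definition above.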
Physical mixing
The mixing of gases or liquids is a complex physical process, governed by a convective diffusion equation that may involve non-Fickian diffusion as in spinodal decomposition. The convective portion of the governing equation contains fluid motion terms that are governed by the Navier–Stokes equations. When fluid properties such as viscosity depend on composition, the governing equations may be coupled. There may also be temperature effects. It is not clear that fluid mixing processes are mixing in the mathematical sense.
Small rigid objects (such as rocks) are sometimes mixed in a rotating drum or tumbler. The 1969 Selective Service draft lottery was carried out by mixing plastic capsules which contained a slip of paper (marked with a day of the year).
See also
Miscibility
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do you call the movement of a substance from an area of a higher amount toward an area of lower amount?
A. precipitation
B. extraction
C. diffusion
D. filtration
Answer:
|
|
sciq-6648
|
multiple_choice
|
What has a bigger impact on water quality, natural events or human activity?
|
[
"natural events",
"human activity",
"all of the above",
"water quality"
] |
B
|
Relevant Documents:
Document 0:::
Nutrient cycling in the Columbia River Basin involves the transport of nutrients through the system, as well as transformations from among dissolved, solid, and gaseous phases, depending on the element. The elements that constitute important nutrient cycles include macronutrients such as nitrogen (as ammonium, nitrite, and nitrate), silicate, phosphorus, and micronutrients, which are found in trace amounts, such as iron. Their cycling within a system is controlled by many biological, chemical, and physical processes.
The Columbia River Basin is the largest freshwater system of the Pacific Northwest, and due to its complexity, size, and modification by humans, nutrient cycling within the system is affected by many different components. Both natural and anthropogenic processes are involved in the cycling of nutrients. Natural processes in the system include estuarine mixing of fresh and ocean waters, and climate variability patterns such as the Pacific Decadal Oscillation and the El Niño–Southern Oscillation (both climatic cycles that affect the amount of regional snowpack and river discharge). Natural sources of nutrients in the Columbia River include weathering, leaf litter, salmon carcasses, runoff from its tributaries, and ocean estuary exchange. Major anthropogenic impacts to nutrients in the basin are due to fertilizers from agriculture, sewage systems, logging, and the construction of dams.
Nutrient dynamics vary in the river basin from the headwaters to the main river and dams, to finally reaching the Columbia River estuary and ocean. Upstream in the headwaters, salmon runs are the main source of nutrients. Dams along the river impact nutrient cycling by increasing residence time of nutrients, and reducing the transport of silicate to the estuary, which directly impacts diatoms, a type of phytoplankton. The dams are also a barrier to salmon migration, and can increase the amount of methane locally produced. The Columbia River estuary exports high rates of n
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 2:::
The University of Michigan Biological Station (UMBS) is a research and teaching facility operated by the University of Michigan. It is located on the south shore of Douglas Lake in Cheboygan County, Michigan. The station consists of 10,000 acres (40 km2) of land near Pellston, Michigan in the northern Lower Peninsula of Michigan and 3,200 acres (13 km2) on Sugar Island in the St. Mary's River near Sault Ste. Marie, in the Upper Peninsula. It is one of only 28 Biosphere Reserves in the United States.
Overview
Founded in 1909, it has grown to include approximately 150 buildings, including classrooms, student cabins, dormitories, a dining hall, and research facilities. Undergraduate and graduate courses are available in the spring and summer terms. It has a full-time staff of 15.
In the 2000s, UMBS is increasingly focusing on the measurement of climate change. Its field researchers are gauging the impact of global warming and increased levels of atmospheric carbon dioxide on the ecosystem of the upper Great Lakes region, and are using field data to improve the computer models used to forecast further change. Several archaeological digs have been conducted at the station as well.
UMBS field researchers sometimes call the station "bug camp" amongst themselves. This is believed to be due to the number of mosquitoes and other insects present. It is also known as "The Bio-Station".
The UMBS is also home to Michigan's most endangered species and one of the most endangered species in the world: the Hungerford's Crawling Water Beetle. The species lives in only five locations in the world, two of which are in Emmet County. One of these, a two and a half mile stretch downstream from the Douglas Road crossing of the East Branch of the Maple River supports the only stable population of the Hungerford's Crawling Water Beetle, with roughly 1000 specimens. This area, though technically not part of the UMBS is largely within and along the boundary of the University of Michigan
Document 3:::
Lake 226 is one lake in Canada's Experimental Lakes Area (ELA) in Ontario. The ELA is a freshwater and fisheries research facility that operated these experiments alongside Fisheries and Oceans Canada and Environment Canada. In 1968 this area in northwest Ontario was set aside for limnological research, aiming to study the watershed of the 58 small lakes in this area. The ELA projects began as a response to the claim that carbon was the limiting agent causing eutrophication of lakes rather than phosphorus, and that monitoring phosphorus in the water would be a waste of money. This claim was made by soap and detergent companies, as these products do not biodegrade and can cause buildup of phosphates in water supplies that lead to eutrophication. The theory that carbon was the limiting agent was quickly debunked by the ELA Lake 227 experiment that began in 1969, which found that carbon could be drawn from the atmosphere to remain proportional to the input of phosphorus in the water. Experimental Lake 226 was then created to test phosphorus' impact on eutrophication by itself.
Lake ecosystem
Geography
The ELA lakes were far from human activities, therefore allowing the study of environmental conditions without human interaction. Lake 226 was specifically studied over a four-year period, from 1973–1977 to test eutrophication. Lake 226 itself is a 16.2 ha double basin lake located on highly metamorphosed granite known as Precambrian granite. The depth of the lake was measured in 1994 to be 14.7 m for the northeast basin and 11.6 m for the southeast basin. Lake 226 had a total lake volume of 9.6 × 105 m3, prior to the lake being additionally studied for drawdown alongside other ELA lakes. Due to this relatively small fetch of Lake 226, wind action is minimized, preventing resuspension of epilimnetic sediments.
Eutrophication experiment
To test the effects of fertilization on water quality and algae blooms, Lake 226 was split in half with a curtain. This curtain divi
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
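The correct choice for this conceptual question ("decreases") can also be verified numerically from the adiabatic relation T·V^(γ−1) = const. A minimal sketch, assuming a monatomic ideal gas and arbitrary illustrative values:

```python
# For a reversible adiabatic process, T * V**(gamma - 1) is constant,
# so expansion (V2 > V1) forces the temperature to drop.
gamma = 5 / 3        # heat-capacity ratio for a monatomic ideal gas (assumed)
T1, V1 = 300.0, 1.0  # initial temperature (K) and volume (arbitrary units)
V2 = 2.0             # gas expands to twice its volume
T2 = T1 * (V1 / V2) ** (gamma - 1)
print(round(T2, 1))  # → 189.0, cooler than the initial 300 K
```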
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What has a bigger impact on water quality, natural events or human activity?
A. natural events
B. human activity
C. all of the above
D. water quality
Answer:
|
|
sciq-7270
|
multiple_choice
|
What are the extremely small particles that comprise all matter?
|
[
"electrons",
"atoms",
"protons",
"ions"
] |
B
|
Relevant Documents:
Document 0:::
In classical physics and general chemistry, matter is any substance that has mass and takes up space by having volume. All everyday objects that can be touched are ultimately composed of atoms, which are made up of interacting subatomic particles, and in everyday as well as scientific usage, matter generally includes atoms and anything made up of them, and any particles (or combination of particles) that act as if they have both rest mass and volume. However it does not include massless particles such as photons, or other energy phenomena or waves such as light or heat. Matter exists in various states (also known as phases). These include classical everyday phases such as solid, liquid, and gas – for example water exists as ice, liquid water, and gaseous steam – but other states are possible, including plasma, Bose–Einstein condensates, fermionic condensates, and quark–gluon plasma.
Usually atoms can be imagined as a nucleus of protons and neutrons, and a surrounding "cloud" of orbiting electrons which "take up space". However this is only somewhat correct, because subatomic particles and their properties are governed by their quantum nature, which means they do not act as everyday objects appear to act – they can act like waves as well as particles, and they do not have well-defined sizes or positions. In the Standard Model of particle physics, matter is not a fundamental concept because the elementary constituents of atoms are quantum entities which do not have an inherent "size" or "volume" in any everyday sense of the word. Due to the exclusion principle and other fundamental interactions, some "point particles" known as fermions (quarks, leptons), and many composites and atoms, are effectively forced to keep a distance from other particles under everyday conditions; this creates the property of matter which appears to us as matter taking up space.
For much of the history of the natural sciences people have contemplated the exact nature of matter. The idea tha
Document 1:::
The subatomic scale is the domain of physical size that encompasses objects smaller than an atom. It is the scale at which the atomic constituents, such as the nucleus containing protons and neutrons, and the electrons in their orbitals, become apparent.
The subatomic scale includes the subnuclear scale, many thousands of times smaller still, which is the scale of physical size at which constituents of the protons and neutrons - particularly quarks - become apparent.
See also
Astronomical scale, the opposite end of the spectrum
Subatomic particles
Document 2:::
Minicharged particles (or milli-charged particles) are a proposed type of subatomic particle. They are charged, but with a tiny fraction of the charge of the electron. They weakly interact with matter. Minicharged particles are not part of the Standard Model. One proposal to detect them involved photons tunneling through an opaque barrier in the presence of a perpendicular magnetic field, the rationale being that a pair of oppositely charged minicharged particles are produced that curve in opposite directions, and recombine on the other side of the barrier reproducing the photon again.
Minicharged particles would result in vacuum magnetic dichroism, and would cause energy loss in microwave cavities. Photons from the cosmic microwave background would be dissipated by galactic-scale magnetic fields if minicharged particles existed, so this effect could be observable. In fact, the dimming observed in remote supernovae that was used to support dark energy could also be explained by the formation of minicharged particles.
Tests of Coulomb's law can be applied to set bounds on minicharged particles.
Document 3:::
In non-technical terms, M-theory presents an idea about the basic substance of the universe. As of 2023, science has produced no experimental evidence to support the conclusion that M-theory is a description of the real world. Although a complete mathematical formulation of M-theory is not known, the general approach is the leading contender for a universal "Theory of Everything" that unifies gravity with other forces such as electromagnetism. M-theory aims to unify quantum mechanics with general relativity's gravitational force in a mathematically consistent way. In comparison, other theories such as loop quantum gravity are considered by physicists and researchers/students to be less elegant, because they posit gravity to be completely different from forces such as the electromagnetic force.
Background
In the early years of the 20th century, the atom – long believed to be the smallest building-block of matter – was proven to consist of even smaller components called protons, neutrons and electrons, which are known as subatomic particles. Other subatomic particles began being discovered in the 1960s. In the 1970s, it was discovered that protons and neutrons (and other hadrons) are themselves made up of smaller particles called quarks. The Standard Model is the set of rules that describes the interactions of these particles.
In the 1980s, a new mathematical model of theoretical physics, called string theory, emerged. It showed how all the different subatomic particles known to science could be constructed by hypothetical one-dimensional "strings", infinitesimal building-blocks that have only the dimension of length, but not height or width.
However, for string theory to be mathematically consistent, the strings must be in a universe of ten dimensions. This contradicts the experience that our real universe has four dimensions: three space dimensions (height, width, and length) and one time dimension. To "save" their theory, string theorists therefore added the exp
Document 4:::
In physics, a subatomic particle is a particle smaller than an atom. According to the Standard Model of particle physics, a subatomic particle can be either a composite particle, which is composed of other particles (for example, a baryon, like a proton or a neutron, composed of three quarks; or a meson, composed of two quarks), or an elementary particle, which is not composed of other particles (for example, quarks; or electrons, muons, and tau particles, which are called leptons). Particle physics and nuclear physics study these particles and how they interact. Most force carrying particles like photons or gluons are called bosons and, although they have discrete quanta of energy, do not have rest mass or discrete diameters (other than pure energy wavelength) and are unlike the former particles that have rest mass and cannot overlap or combine which are called fermions.
Experiments show that light could behave like a stream of particles (called photons) as well as exhibiting wave-like properties. This led to the concept of wave–particle duality to reflect that quantum-scale objects behave both like particles and like waves; they are sometimes called wavicles to reflect this.
Another concept, the uncertainty principle, states that some of their properties taken together, such as their simultaneous position and momentum, cannot be measured exactly. The wave–particle duality has been shown to apply not only to photons but to more massive particles as well.
Interactions of particles in the framework of quantum field theory are understood as creation and annihilation of quanta of corresponding fundamental interactions. This blends particle physics with field theory.
Even among particle physicists, the exact definition of a particle has diverse descriptions. These professional attempts at the definition of a particle include:
A particle is a collapsed wave function
A particle is a quantum excitation of a field
A particle is an irreducible representation of the Poinca
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are the extremely small particles that comprise all matter?
A. electrons
B. atoms
C. protons
D. ions
Answer:
|
|
sciq-3280
|
multiple_choice
|
A lipid is one of a highly diverse group of compounds made up mostly of what?
|
[
"amines",
"proteins",
"nucleic acid",
"hydrocarbons"
] |
D
|
Relevant Documents:
Document 0:::
The lipidome refers to the totality of lipids in cells. Lipids are one of the four major molecular components of biological organisms, along with proteins, sugars and nucleic acids. Lipidome is a term coined in the context of omics in modern biology, within the field of lipidomics. It can be studied using mass spectrometry and bioinformatics as well as traditional lab-based methods. The lipidome of a cell can be subdivided into the membrane-lipidome and mediator-lipidome.
The first cell lipidome to be published was that of a mouse macrophage in 2010. The lipidome of the yeast Saccharomyces cerevisiae has been characterised with an estimated 95% coverage; studies of the human lipidome are ongoing. For example, the human plasma lipidome consists of almost 600 distinct molecular species. Research suggests that the lipidome of an individual may be able to indicate cancer risks associated with dietary fats, particularly breast cancer.
See also
Genome
Proteome
Glycome
Document 1:::
A simple lipid is a fatty acid ester of different alcohols and carries no other substance. These lipids belong to a heterogeneous class of predominantly nonpolar compounds, mostly insoluble in water, but soluble in nonpolar organic solvents such as chloroform and benzene.
Simple lipids: esters of fatty acids with various alcohols.
a. Fats: esters of fatty acids with glycerol. Oils are fats in the liquid state. Fats are also called triglycerides because all three hydroxyl groups of glycerol are esterified.
b. Waxes: Solid esters of long-chain fatty acids such as palmitic acid with aliphatic or alicyclic higher molecular weight monohydric alcohols. Waxes are water-insoluble due to the weakly polar nature of the ester group.
See also
Lipid
Lipids
Document 2:::
A saponifiable lipid is part of the ester functional group. They are made up of long-chain carboxylic (or fatty) acids connected to an alcoholic functional group through the ester linkage, which can undergo a saponification reaction. The fatty acids are released upon base-catalyzed ester hydrolysis to form ionized salts. The primary saponifiable lipids are free fatty acids, neutral glycerolipids, glycerophospholipids, sphingolipids, and glycolipids.
By comparison, the non-saponifiable class of lipids is made up of terpenes, including fat-soluble A and E vitamins, and certain steroids, such as cholesterol.
Applications
Saponifiable lipids have relevant applications as a source of biofuel and can be extracted from various forms of biomass to produce biodiesel.
See also
Lipids
Simple lipid
Document 3:::
Lipidology is the scientific study of lipids. Lipids are a group of biological macromolecules that have a multitude of functions in the body. Clinical studies on lipid metabolism in the body have led to developments in therapeutic lipidology for disorders such as cardiovascular disease.
History
Compared to other biomedical fields, lipidology was long-neglected as the handling of oils, smears, and greases was unappealing to scientists and lipid separation was difficult. It was not until 2002 that lipidomics, the study of lipid networks and their interaction with other molecules, appeared in the scientific literature. Attention to the field was bolstered by the introduction of chromatography, spectrometry, and various forms of spectroscopy to the field, allowing lipids to be isolated and analyzed. The field was further popularized following the cytologic application of the electron microscope, which led scientists to find that many metabolic pathways take place within, along, and through the cell membrane - the properties of which are strongly influenced by lipid composition.
Clinical lipidology
The Framingham Heart Study and other epidemiological studies have found a correlation between lipoproteins and cardiovascular disease (CVD). Lipoproteins are generally a major target of study in lipidology since lipids are transported throughout the body in the form of lipoproteins.
A class of lipids known as phospholipids helps make up lipoproteins; one type of lipoprotein is high-density lipoprotein (HDL). A high concentration of high-density lipoprotein cholesterol (HDL-C) has what is known as a vasoprotective effect on the body, a finding that correlates with improved cardiovascular outcomes. There is also a correlation between diseases such as chronic kidney disease, coronary artery disease, and diabetes mellitus and a reduced vasoprotective effect from HDL.
Another factor of CVD that is often overlooked involves the
Document 4:::
Sphingolipids are a class of lipids containing a backbone of sphingoid bases, which are a set of aliphatic amino alcohols that includes sphingosine. They were discovered in brain extracts in the 1870s and were named after the mythological sphinx because of their enigmatic nature. These compounds play important roles in signal transduction and cell recognition. Sphingolipidoses, or disorders of sphingolipid metabolism, have particular impact on neural tissue. A sphingolipid with a terminal hydroxyl group is a ceramide. Other common groups bonded to the terminal oxygen atom include phosphocholine, yielding a sphingomyelin, and various sugar monomers or dimers, yielding cerebrosides and globosides, respectively. Cerebrosides and globosides are collectively known as glycosphingolipids.
Structure
The long-chain bases, sometimes simply known as sphingoid bases, are the first non-transient products of de novo sphingolipid synthesis in both yeast and mammals. These compounds, specifically known as phytosphingosine and dihydrosphingosine (also known as sphinganine, although this term is less common), are mainly C18 compounds, with somewhat lower levels of C20 bases. Ceramides and glycosphingolipids are N-acyl derivatives of these compounds.
The sphingosine backbone is O-linked to a (usually) charged head group such as ethanolamine, serine, or choline.
The backbone is also amide-linked to an acyl group, such as a fatty acid.
Types
Simple sphingolipids, which include the sphingoid bases and ceramides, make up the early products of the sphingolipid synthetic pathways.
Sphingoid bases are the fundamental building blocks of all sphingolipids. The main mammalian sphingoid bases are dihydrosphingosine and sphingosine, while dihydrosphingosine and phytosphingosine are the principal sphingoid bases in yeast. Sphingosine, dihydrosphingosine, and phytosphingosine may be phosphorylated.
Ceramides, as a general class, are N-acylated sphingoid bases lacking additional head groups.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A lipid is one of a highly diverse group of compounds made up mostly of what?
A. amines
B. proteins
C. nucleic acid
D. hydrocarbons
Answer:
|
|
ai2_arc-50
|
multiple_choice
|
Scientists who disagree with the results of an experiment should
|
[
"change the experiment.",
"keep their opinions to themselves.",
"find out what other scientists think about the results.",
"repeat the experiment several times and compare results."
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 2:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 3:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 4:::
Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria.
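The pairwise "which is better" judgements described above are typically scaled with a paired-comparison model. A minimal sketch using a simple Bradley–Terry-style gradient fit (an illustrative stand-in for the adaptive algorithm; the scripts, win counts, learning rate, and iteration count are all hypothetical):

```python
import math

# wins[(a, b)] = number of times judges preferred script a over script b
wins = {("A", "B"): 8, ("B", "A"): 2,
        ("B", "C"): 7, ("C", "B"): 3,
        ("A", "C"): 9, ("C", "A"): 1}
items = ["A", "B", "C"]
score = {i: 0.0 for i in items}  # log-strength parameter per script

for _ in range(200):  # simple gradient-ascent updates on the log-likelihood
    for i in items:
        grad = 0.0
        for j in items:
            if i == j:
                continue
            n_ij = wins.get((i, j), 0) + wins.get((j, i), 0)  # comparisons
            w_ij = wins.get((i, j), 0)                         # wins for i
            p = 1 / (1 + math.exp(score[j] - score[i]))        # P(i beats j)
            grad += w_ij - n_ij * p
        score[i] += 0.01 * grad

ranking = sorted(items, key=score.get, reverse=True)
print(ranking)  # → ['A', 'B', 'C']
```

The fitted scores place the scripts on a scale without any marking criteria, which is the essence of comparative judgement; the "adaptive" part of the real method additionally chooses which pairs to show judges next.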
Introduction
Traditional exam script marking began in Cambridge 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.)
Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental.
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Scientists who disagree with the results of an experiment should
A. change the experiment.
B. keep their opinions to themselves.
C. find out what other scientists think about the results.
D. repeat the experiment several times and compare results.
Answer:
|
|
sciq-8412
|
multiple_choice
|
What do all respiratory diseases affect?
|
[
"transit exchange process",
"helium exchange process",
"liquid exchange process",
"gas exchange process"
] |
D
|
Relevant Documents:
Document 0:::
Mucociliary clearance (MCC), mucociliary transport, or the mucociliary escalator, describes the self-clearing mechanism of the airways in the respiratory system. It is one of the two protective processes for the lungs in removing inhaled particles including pathogens before they can reach the delicate tissue of the lungs. The other clearance mechanism is provided by the cough reflex. Mucociliary clearance has a major role in pulmonary hygiene.
MCC effectiveness relies on the correct properties of the airway surface liquid produced, both of the periciliary sol layer and the overlying mucus gel layer, and of the number and quality of the cilia present in the lining of the airways. An important factor is the rate of mucin secretion. The ion channels CFTR and ENaC work together to maintain the necessary hydration of the airway surface liquid.
Any disturbance in the closely regulated functioning of the cilia can cause a disease. Disturbances in the structural formation of the cilia can cause a number of ciliopathies, notably primary ciliary dyskinesia. Cigarette smoke exposure can cause shortening of the cilia.
Function
In the upper part of the respiratory tract the nasal hair in the nostrils traps large particles, and the sneeze reflex may also be triggered to expel them. The nasal mucosa also traps particles preventing their entry further into the tract. In the rest of the respiratory tract, particles of different sizes become deposited along different parts of the airways. Larger particles are trapped higher up in the larger bronchi. As the airways become narrower only smaller particles can pass. The branchings of the airways cause turbulence in the airflow at all of their junctions where particles can then be deposited and they never reach the alveoli. Only very small pathogens are able to gain entry to the alveoli. Mucociliary clearance functions to remove these particulates and also to trap and remove pathogens from the airways, in order to protect the delicate
Document 1:::
Pneumonia is an inflammatory condition of the lung primarily affecting the small air sacs known as alveoli. Symptoms typically include some combination of productive or dry cough, chest pain, fever, and difficulty breathing. The severity of the condition is variable.
Pneumonia is usually caused by infection with viruses or bacteria, and less commonly by other microorganisms. Identifying the responsible pathogen can be difficult. Diagnosis is often based on symptoms and physical examination. Chest X-rays, blood tests, and culture of the sputum may help confirm the diagnosis. The disease may be classified by where it was acquired, such as community- or hospital-acquired or healthcare-associated pneumonia.
Risk factors for pneumonia include cystic fibrosis, chronic obstructive pulmonary disease (COPD), sickle cell disease, asthma, diabetes, heart failure, a history of smoking, a poor ability to cough (such as following a stroke), and a weak immune system.
Vaccines to prevent certain types of pneumonia (such as those caused by Streptococcus pneumoniae bacteria, linked to influenza, or linked to COVID-19) are available. Other methods of prevention include hand washing to prevent infection, not smoking, and social distancing.
Treatment depends on the underlying cause. Pneumonia believed to be due to bacteria is treated with antibiotics. If the pneumonia is severe, the affected person is generally hospitalized. Oxygen therapy may be used if oxygen levels are low.
Each year, pneumonia affects about 450 million people globally (7% of the population) and results in about 4 million deaths. With the introduction of antibiotics and vaccines in the 20th century, survival has greatly improved. Nevertheless, pneumonia remains a leading cause of death in developing countries, and also among the very old, the very young, and the chronically ill. Pneumonia often shortens the period of suffering among those already close to death and has thus been called "the old man's friend".
Document 2:::
Lung receptors sense irritation or inflammation in the bronchi and alveoli.
Document 3:::
Pulmonary pathology is the subspecialty of surgical pathology which deals with the diagnosis and characterization of neoplastic and non-neoplastic diseases of the lungs and thoracic pleura. Diagnostic specimens are often obtained via bronchoscopic transbronchial biopsy, CT-guided percutaneous biopsy, or video-assisted thoracic surgery (VATS). The diagnosis of inflammatory or fibrotic diseases of the lungs is considered by many pathologists to be particularly challenging.
Anatomical pathology
Document 4:::
Smoke inhalation is the breathing in of harmful fumes (produced as by-products of combusting substances) through the respiratory tract. This can cause smoke inhalation injury (subtype of acute inhalation injury) which is damage to the respiratory tract caused by chemical and/or heat exposure, as well as possible systemic toxicity after smoke inhalation. Smoke inhalation can occur from fires of various sources such as residential, vehicle, and wildfires. Morbidity and mortality rates in fire victims with burns are increased in those with smoke inhalation injury. Victims of smoke inhalation injury can present with cough, difficulty breathing, low oxygen saturation, smoke debris and/or burns on the face. Smoke inhalation injury can affect the upper respiratory tract (above the larynx), usually due to heat exposure, or the lower respiratory tract (below the larynx), usually due to exposure to toxic fumes. Initial treatment includes taking the victim away from the fire and smoke, giving 100% oxygen at a high flow through a face mask (non-rebreather if available), and checking the victim for injuries to the body. Treatment for smoke inhalation injury is largely supportive, with varying degrees of consensus on benefits of specific treatments.
Epidemiology
The U.S. Fire Administration reported almost 1.3 million fires in 2019 causing 3,704 deaths and almost 17,000 injuries. Residential fires were found to be most often cooking related and resulted in the highest amount of deaths when compared to other fire types such as vehicle and outdoor fires. It has been found that men have higher rates of fire-related death and injury than women do, and that African American and American Indian men have higher rates of fire-related death and injury than other ethnic and racial groups. The age group with the highest rate of death from smoke inhalation is people over 85, while the age group with the highest injury rate is people of ages 50–54. Some reports also show increased rates of
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do all respiratory diseases affect?
A. transit exchange process
B. helium exchange process
C. liquid exchange process
D. gas exchange process
Answer:
|
|
sciq-6548
|
multiple_choice
|
Humans need lipids for many vital functions such as storing energy and forming what?
|
[
"enzymes",
"zygotes",
"ionic bonds",
"cell membranes"
] |
D
|
Relevant Documents:
Document 0:::
The lipidome refers to the totality of lipids in cells. Lipids are one of the four major molecular components of biological organisms, along with proteins, sugars and nucleic acids. Lipidome is a term coined in the context of omics in modern biology, within the field of lipidomics. It can be studied using mass spectrometry and bioinformatics as well as traditional lab-based methods. The lipidome of a cell can be subdivided into the membrane-lipidome and mediator-lipidome.
The first cell lipidome to be published was that of a mouse macrophage in 2010. The lipidome of the yeast Saccharomyces cerevisiae has been characterised with an estimated 95% coverage; studies of the human lipidome are ongoing. For example, the human plasma lipidome consists of almost 600 distinct molecular species. Research suggests that the lipidome of an individual may be able to indicate cancer risks associated with dietary fats, particularly breast cancer.
See also
Genome
Proteome
Glycome
Document 1:::
Fat globules (also known as mature lipid droplets) are individual pieces of intracellular fat in human cell biology. The lipid droplet's function is to store energy for the organism's body and is found in every type of adipocytes. They can consist of a vacuole, droplet of triglyceride, or any other blood lipid, as opposed to fat cells in between other cells in an organ. They contain a hydrophobic core and are encased in a phospholipid monolayer membrane. Due to their hydrophobic nature, lipids and lipid digestive derivatives must be transported in the globular form within the cell, blood, and tissue spaces.
The formation of a fat globule starts within the membrane bilayer of the endoplasmic reticulum. It starts as a bud and detaches from the ER membrane to join other droplets. After the droplets fuse, a mature droplet (full-fledged globule) is formed and can then partake in neutral lipid synthesis or lipolysis.
Globules of fat are emulsified in the duodenum into smaller droplets by bile salts during food digestion, speeding up the rate of digestion by the enzyme lipase at a later point in digestion. Bile salts possess detergent properties that allow them to emulsify fat globules into smaller emulsion droplets, and then into even smaller micelles. This increases the surface area for lipid-hydrolyzing enzymes to act on the fats.
Micelles are roughly 200 times smaller than fat emulsion droplets, allowing them to facilitate the transport of monoglycerides and fatty acids across the surface of the enterocyte, where absorption occurs.
Milk fat globules (MFGs) are another form of intracellular fat found in the mammary glands of female mammals. Their function is to provide enriching glycoproteins from the female to their offspring. They are formed in the endoplasmic reticulum found in the mammary epithelial lactating cell. The globules are made up of triacylglycerols encased in cellular membranes and proteins like adipophilin and TIP 47. The proteins are spread througho
Document 2:::
Fatty acid metabolism consists of various metabolic processes involving or closely related to fatty acids, a family of molecules classified within the lipid macronutrient category. These processes can mainly be divided into (1) catabolic processes that generate energy and (2) anabolic processes where they serve as building blocks for other compounds.
In catabolism, fatty acids are metabolized to produce energy, mainly in the form of adenosine triphosphate (ATP). When compared to other macronutrient classes (carbohydrates and protein), fatty acids yield the most ATP on an energy per gram basis, when they are completely oxidized to CO2 and water by beta oxidation and the citric acid cycle. Fatty acids (mainly in the form of triglycerides) are therefore the foremost storage form of fuel in most animals, and to a lesser extent in plants.
In anabolism, intact fatty acids are important precursors to triglycerides, phospholipids, second messengers, hormones and ketone bodies. For example, phospholipids form the phospholipid bilayers out of which all the membranes of the cell are constructed from fatty acids. Phospholipids comprise the plasma membrane and other membranes that enclose all the organelles within the cells, such as the nucleus, the mitochondria, endoplasmic reticulum, and the Golgi apparatus. In another type of anabolism, fatty acids are modified to form other compounds such as second messengers and local hormones. The prostaglandins made from arachidonic acid stored in the cell membrane are probably the best-known of these local hormones.
Fatty acid catabolism
Fatty acids are stored as triglycerides in the fat depots of adipose tissue. Between meals they are released as follows:
Lipolysis, the removal of the fatty acid chains from the glycerol to which they are bound in their storage form as triglycerides (or fats), is carried out by lipases. These lipases are activated by high epinephrine and glucagon levels in the blood (or norepinephrine secreted by s
Document 3:::
Lipidology is the scientific study of lipids. Lipids are a group of biological macromolecules that have a multitude of functions in the body. Clinical studies on lipid metabolism in the body have led to developments in therapeutic lipidology for disorders such as cardiovascular disease.
History
Compared to other biomedical fields, lipidology was long-neglected as the handling of oils, smears, and greases was unappealing to scientists and lipid separation was difficult. It was not until 2002 that lipidomics, the study of lipid networks and their interaction with other molecules, appeared in the scientific literature. Attention to the field was bolstered by the introduction of chromatography, spectrometry, and various forms of spectroscopy to the field, allowing lipids to be isolated and analyzed. The field was further popularized following the cytologic application of the electron microscope, which led scientists to find that many metabolic pathways take place within, along, and through the cell membrane - the properties of which are strongly influenced by lipid composition.
Clinical lipidology
The Framingham Heart Study and other epidemiological studies have found a correlation between lipoproteins and cardiovascular disease (CVD). Lipoproteins are generally a major target of study in lipidology since lipids are transported throughout the body in the form of lipoproteins.
A class of lipids known as phospholipids helps make up what are known as lipoproteins, and one type of lipoprotein is called high density lipoprotein (HDL). A high concentration of high density lipoprotein-cholesterol (HDL-C) has what is known as a vasoprotective effect on the body, a finding that correlates with an enhanced cardiovascular effect. There is also a correlation between those with diseases such as chronic kidney disease, coronary artery disease, or diabetes mellitus and the possibility of low vasoprotective effect from HDL.
Another factor of CVD that is often overlooked involves the
Document 4:::
A saponifiable lipid is part of the ester functional group. They are made up of long chain carboxylic (or fatty) acids connected to an alcoholic functional group through the ester linkage which can undergo a saponification reaction. The fatty acids are released upon base-catalyzed ester hydrolysis to form ionized salts. The primary saponifiable lipids are free fatty acids, neutral glycerolipids, glycerophospholipids, sphingolipids, and glycolipids.
By comparison, the non-saponifiable class of lipids is made up of terpenes, including fat-soluble A and E vitamins, and certain steroids, such as cholesterol.
Applications
Saponifiable lipids have relevant applications as a source of biofuel and can be extracted from various forms of biomass to produce biodiesel.
See also
Lipids
Simple lipid
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Humans need lipids for many vital functions such as storing energy and forming what?
A. enzymes
B. zygotes
C. ionic bonds
D. cell membranes
Answer:
|
|
sciq-7861
|
multiple_choice
|
What term refers to an animal that sleeps during the day and is active at night?
|
[
"nocturnal",
"crepuscular",
"diurnal",
"solitary"
] |
A
|
Relevant Documents:
Document 0:::
Diurnality is a form of plant and animal behavior characterized by activity during daytime, with a period of sleeping or other inactivity at night. The common adjective used for daytime activity is "diurnal". The timing of activity by an animal depends on a variety of environmental factors such as the temperature, the ability to gather food by sight, the risk of predation, and the time of year. Diurnality is a cycle of activity within a 24-hour period; cyclic activities called circadian rhythms are endogenous cycles not dependent on external cues or environmental factors except for a zeitgeber. Animals active during twilight are crepuscular, those active during the night are nocturnal and animals active at sporadic times during both night and day are cathemeral.
Plants that open their flowers during the daytime are described as diurnal, while those that bloom during nighttime are nocturnal. The timing of flower opening is often related to the time at which preferred pollinators are foraging. For example, sunflowers open during the day to attract bees, whereas the night-blooming cereus opens at night to attract large sphinx moths.
In animals
Many types of animals are classified as being diurnal, meaning they are active during the day time and inactive or have periods of rest during the night time. Commonly classified diurnal animals include mammals, birds, and reptiles. Most primates are diurnal, including humans. Scientifically classifying diurnality within animals can be a challenge, apart from the obvious increase in activity levels during daylight hours.
Evolution of diurnality
Initially, most animals were diurnal, but adaptations that allowed some animals to become nocturnal is what helped contribute to the success of many, especially mammals. This evolutionary movement to nocturnality allowed them to better avoid predators and gain resources with less competition from other animals. This did come with some adaptations that mammals live with today. Visi
Document 1:::
In zoology, a crepuscular animal is one that is active primarily during the twilight period, being matutinal, vespertine/vespertinal, or both. This is distinguished from diurnal and nocturnal behavior, where an animal is active during the hours of daylight and of darkness, respectively. Some crepuscular animals may also be active by moonlight or during an overcast day. Matutinal animals are active only before sunrise, and vespertine only after sunset.
A number of factors impact the time of day an animal is active. Predators hunt when their prey is available, and prey try to avoid the times when their principal predators are at large. The temperature may be too high at midday or too low at night. Some creatures may adjust their activities depending on local competition.
Etymology and usage
The word crepuscular derives from the Latin crepusculum ("twilight"). Its sense accordingly differs from diurnal and nocturnal behavior, which respectively peak during hours of daylight and darkness. The distinction is not absolute however, because crepuscular animals may also be active on a bright moonlit night or on a dull day. Some animals casually described as nocturnal are in fact crepuscular.
Special classes of crepuscular behaviour include matutinal (or "matinal", animals active only in the dawn) and vespertine (only in the dusk). Those active during both times are said to have a bimodal activity pattern.
Adaptive relevance
The various patterns of activity are thought to be mainly antipredator adaptations, though some could equally well be predatory adaptations. Many predators forage most intensively at night, whereas others are active at midday and see best in full sun. Thus, the crepuscular habit may both reduce predation pressure, thereby increasing the crepuscular populations, and in consequence offer better foraging opportunities to predators that increasingly focus their attention on crepuscular prey until a new balance is struck. Such shifting states of balance a
Document 2:::
Nocturnality is a behavior in some non-human animals characterized by being active during the night and sleeping during the day. The common adjective is "nocturnal", versus diurnal meaning the opposite.
Nocturnal creatures generally have highly developed senses of hearing, smell, and specially adapted eyesight. Some animals, such as cats and ferrets, have eyes that can adapt to both low-level and bright day levels of illumination (see metaturnal). Others, such as bushbabies and (some) bats, can function only at night. Many nocturnal creatures including tarsiers and some owls have large eyes in comparison with their body size to compensate for the lower light levels at night. More specifically, they have been found to have a larger cornea relative to their eye size than diurnal creatures to increase their visual sensitivity in the low-light conditions. Nocturnality helps wasps, such as Apoica flavissima, avoid hunting in intense sunlight.
Diurnal animals, including humans (except for night owls), squirrels and songbirds, are active during the daytime. Crepuscular species, such as rabbits, skunks, tigers and hyenas, are often erroneously referred to as nocturnal. Cathemeral species, such as fossas and lions, are active both in the day and at night.
Origins
While it is difficult to say which came first, nocturnality or diurnality, a hypothesis in evolutionary biology, the nocturnal bottleneck theory, postulates that in the Mesozoic, many ancestors of modern-day mammals evolved nocturnal characteristics in order to avoid contact with the numerous diurnal predators. A recent study attempts to answer the question as to why so many modern day mammals retain these nocturnal characteristics even though they are not active at night. The leading answer is that the high visual acuity that comes with diurnal characteristics is not needed anymore due to the evolution of compensatory sensory systems, such as a heightened sense of smell and more astute auditory systems. In a recent study, rece
Document 3:::
Matutinal, matinal (in entomological writings), and matutine are terms used in the life sciences to indicate something of, relating to, or occurring in the early morning. The term may describe crepuscular animals that are significantly active during the predawn or early morning hours. During the morning twilight period and shortly thereafter, these animals partake in important tasks, such as scanning for mates, mating, and foraging.
Matutinal behaviour is thought to be adaptive because there may be less competition between species, and sometimes even a higher prevalence of food during these hours. It may also serve as an anti-predator adaptation by allowing animals to sit between the brink of danger that may come with diurnal and nocturnal activity.
Etymology
The word matutinal is derived from the Latin word , meaning "of or pertaining to the morning", from Mātūta, the Roman goddess of the morning or dawn (+ -īnus '-ine' + -ālis '-al').
Adaptive relevance
Selection pressures, such as high predatory activity or low food may require animals to change their behaviours to adapt. An animal changing the time of day at which it carries out significant tasks (e.g., mating and/or foraging) is recognized as one of these adaptive behaviours. For example, human activity, which is more predominant during daylight hours, has forced certain species (most often larger mammals) living in urban areas to shift their schedules to crepuscular ones. When observed in environments where there is little or no human activity, these same species often do not exhibit this temporal shift. It may be argued that if the goal is to avoid human activity, or any other diurnal predator's activity, a nocturnal schedule would be safer. However, many of these animals depend on sight, so a matutinal or crepuscular schedule is especially advantageous as it allows animals to both avoid predation, and have sufficient light to mate and forage.
Matutinal mating
For certain species, commencing mating
Document 4:::
Cathemerality, sometimes called "metaturnality", is an organismal activity pattern of irregular intervals during the day or night in which food is acquired, socializing with other organisms occurs, and any other activities necessary for livelihood are undertaken. This activity differs from the generally monophasic pattern (sleeping once per day) of nocturnal and diurnal species as it is polyphasic (sleeping 4-6 times per day) and is approximately evenly distributed throughout the 24-hour cycle.
Many animals do not fit the traditional definitions of being strictly nocturnal, diurnal, or crepuscular, often driven by factors that include the availability of food, predation pressure, and variable ambient temperature. Although cathemerality is not as widely observed in individual species as diurnality or nocturnality, this activity pattern is seen across the mammal taxa, such as in lions, coyotes, and lemurs.
Cathemeral behaviour can also vary on a seasonal basis over an annual period by exhibiting periods of predominantly nocturnal behaviour and exhibiting periods of predominantly diurnal behaviour. For example, seasonal cathemerality has been described for the mongoose lemur (Eulemur mongoz) as activity that shifts from being predominantly diurnal to being predominantly nocturnal over a yearly cycle, but the common brown lemurs (Eulemur fulvus) have been observed as seasonally shifting from diurnal activity to cathemerality.
As research on cathemerality continues, many factors that have been identified as influencing whether or why an animal behaves cathemerally. Such factors include resource variation, food quality, photoperiodism, nocturnal luminosity, temperature, predator avoidance, and energetic constraints.
Etymology
In the original manuscript for his article "Patterns of activity in the Mayotte lemur, Lemur fulvus mayottensis," Ian Tattersall introduced the term cathemerality to describe a pattern of observed activity that was neither diurnal nor nocturn
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What term refers to an animal that sleeps during the day and is active at night?
A. nocturnal
B. crepuscular
C. diurnal
D. solitary
Answer:
|
|
sciq-11667
|
multiple_choice
|
The interaction of what opposite factors describe a biome and ecosystem?
|
[
"abiotic and biotic",
"hygroscopic and abiotic",
"innate and biotic",
"metastasis and biotic"
] |
A
|
Relevant Documents:
Document 0:::
A biome () is a biogeographical unit consisting of a biological community that has formed in response to the physical environment in which they are found and a shared regional climate. Biomes may span more than one continent. Biome is a broader term than habitat and can comprise a variety of habitats.
While a biome can cover small areas, a microbiome is a mix of organisms that coexist in a defined space on a much smaller scale. For example, the human microbiome is the collection of bacteria, viruses, and other microorganisms that are present on or in a human body.
A biota is the total collection of organisms of a geographic region or a time period, from local geographic scales and instantaneous temporal scales all the way up to whole-planet and whole-timescale spatiotemporal scales. The biotas of the Earth make up the biosphere.
Etymology
The term was suggested in 1916 by Clements, originally as a synonym for biotic community of Möbius (1877). Later, it gained its current definition, based on earlier concepts of phytophysiognomy, formation and vegetation (used in opposition to flora), with the inclusion of the animal element and the exclusion of the taxonomic element of species composition. In 1935, Tansley added the climatic and soil aspects to the idea, calling it ecosystem. The International Biological Program (1964–74) projects popularized the concept of biome.
However, in some contexts, the term biome is used in a different manner. In German literature, particularly in the Walter terminology, the term is used similarly as biotope (a concrete geographical unit), while the biome definition used in this article is used as an international, non-regional, terminology—irrespectively of the continent in which an area is present, it takes the same biome name—and corresponds to his "zonobiome", "orobiome" and "pedobiome" (biomes determined by climate zone, altitude or soil).
In Brazilian literature, the term "biome" is sometimes used as synonym of biogeographic pr
Document 1:::
In ecology, habitat refers to the array of resources, physical and biotic factors that are present in an area, such as to support the survival and reproduction of a particular species. A species habitat can be seen as the physical manifestation of its ecological niche. Thus "habitat" is a species-specific term, fundamentally different from concepts such as environment or vegetation assemblages, for which the term "habitat-type" is more appropriate.
The physical factors may include (for example): soil, moisture, range of temperature, and light intensity. Biotic factors include the availability of food and the presence or absence of predators. Every species has particular habitat requirements, with habitat generalist species able to thrive in a wide array of environmental conditions while habitat specialist species requiring a very limited set of factors to survive. The habitat of a species is not necessarily found in a geographical area, it can be the interior of a stem, a rotten log, a rock or a clump of moss; a parasitic organism has as its habitat the body of its host, part of the host's body (such as the digestive tract), or a single cell within the host's body.
Habitat types are environmental categorizations of different environments based on the characteristics of a given geographical area, particularly vegetation and climate. Thus habitat types do not refer to a single species but to multiple species living in the same area. For example, terrestrial habitat types include forest, steppe, grassland, semi-arid or desert. Fresh-water habitat types include marshes, streams, rivers, lakes, and ponds; marine habitat types include salt marshes, the coast, the intertidal zone, estuaries, reefs, bays, the open sea, the sea bed, deep water and submarine vents.
Habitat types may change over time. Causes of change may include a violent event (such as the eruption of a volcano, an earthquake, a tsunami, a wildfire or a change in oceanic currents); or change may occur mo
Document 2:::
This glossary of ecology is a list of definitions of terms and concepts in ecology and related fields. For more specific definitions from other glossaries related to ecology, see Glossary of biology, Glossary of evolutionary biology, and Glossary of environmental science.
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
X
Y
Z
See also
Outline of ecology
History of ecology
Document 3:::
Ecosystem diversity deals with the variations in ecosystems within a geographical location and its overall impact on human existence and the environment.
Ecosystem diversity addresses the combined characteristics of biotic properties (biodiversity) and abiotic properties (geodiversity). It is a variation in the ecosystems found in a region or the variation in ecosystems over the whole planet. Ecological diversity includes the variation in both terrestrial and aquatic ecosystems. Ecological diversity can also take into account the variation in the complexity of a biological community, including the number of different niches, the number of trophic levels, and other ecological processes. An example of ecological diversity on a global scale would be the variation in ecosystems, such as deserts, forests, grasslands, wetlands and oceans. Ecological diversity is the largest scale of biodiversity, and within each ecosystem, there is a great deal of both species and genetic diversity.
Impact
Diversity in the ecosystem is significant to human existence for a variety of reasons. Ecosystem diversity boosts the availability of oxygen via the process of photosynthesis amongst plant organisms domiciled in the habitat. Diversity in an aquatic environment helps in the purification of water by plant varieties for use by humans. Diversity increases plant varieties which serves as a good source for medicines and herbs for human use. A lack of diversity in the ecosystem produces an opposite result.
Examples
Some examples of ecosystems that are rich in diversity are:
Deserts
Forests
Large marine ecosystems
Marine ecosystems
Old-growth forests
Rainforests
Tundra
Coral reefs
Marine
Ecosystem diversity as a result of evolutionary pressure
Ecological diversity around the world can be directly linked to the evolutionary and selective pressures that constrain the diversity outcome of the ecosystems within different niches. Tundras, Rainforests, coral reefs and deciduous forests all are form
Document 4:::
A biophysical environment is a biotic and abiotic surrounding of an organism or population, and consequently includes the factors that have an influence in their survival, development, and evolution. A biophysical environment can vary in scale from microscopic to global in extent. It can also be subdivided according to its attributes. Examples include the marine environment, the atmospheric environment and the terrestrial environment. The number of biophysical environments is countless, given that each living organism has its own environment.
The term environment can refer to a singular global environment in relation to humanity, or a local biophysical environment, e.g. the UK's Environment Agency.
Life-environment interaction
All life that has survived must have adapted to the conditions of its environment. Temperature, light, humidity, soil nutrients, etc., all influence the species within an environment. However, life in turn modifies, in various forms, its conditions. Some long-term modifications along the history of the planet have been significant, such as the incorporation of oxygen to the atmosphere. This process consisted of the breakdown of carbon dioxide by anaerobic microorganisms that used the carbon in their metabolism and released the oxygen to the atmosphere. This led to the existence of oxygen-based plant and animal life, the great oxygenation event.
Related studies
Environmental science is the study of the interactions within the biophysical environment. Part of this scientific discipline is the investigation of the effect of human activity on the environment.
Ecology, a sub-discipline of biology and a part of environmental sciences, is often mistaken as a study of human-induced effects on the environment.
Environmental studies is a broader academic discipline that is the systematic study of the interaction of humans with their environment. It is a broad field of study that includes:
The natural environment
Built environments
Social envi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The interaction of what opposite factors describe a biome and ecosystem?
A. abiotic and biotic
B. hygroscopic and abiotic
C. innate and biotic
D. metastasis and biotic
Answer:
|
|
sciq-5970
|
multiple_choice
|
What flows like taffy or hot wax?
|
[
"water",
"sand dunes",
"gas",
"molten rock"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Rheometry generically refers to the experimental techniques used to determine the rheological properties of materials, that is, the qualitative and quantitative relationships between stresses and strains and their derivatives. Rheometry investigates materials in relatively simple flows like steady shear flow, small amplitude oscillatory shear, and extensional flow.
The choice of the adequate experimental technique depends on the rheological property which has to be determined. This can be the steady shear viscosity, the linear viscoelastic properties (complex viscosity respectively elastic modulus), the elongational properties, etc.
For all real materials, the measured property will be a function of the flow conditions during which it is being measured (shear rate, frequency, etc.) even if for some materials this dependence is vanishingly low under given conditions (see Newtonian fluids).
Rheometry is a specific concern for smart fluids such as electrorheological fluids and magnetorheological fluids, as it is the primary method to quantify the useful properties of these materials.
Rheometry is considered useful in the fields of quality control, process control, and industrial process modelling, among others. For some, the techniques, particularly the qualitative rheological trends, can yield the classification of materials based on the main interactions between different possible elementary components and how they qualitatively affect the rheological behavior of the materials. Novel applications of these concepts include measuring cell mechanics in thin layers, especially in drug screening contexts.
Of non-Newtonian fluids
The viscosity of a non-Newtonian fluid is defined by a power law:
η = η0 γ^(n − 1)
where η is the viscosity after shear is applied, η0 is the initial viscosity, γ is the shear rate, and n is the power-law index. If
n < 1, the fluid is shear thinning,
n > 1, the fluid is shear thickening,
n = 1, the fluid is Newtonian.
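The power-law model and its classification by the exponent n can be sketched in a few lines of Python; the function names and numeric values below are illustrative, not from the source:

```python
def power_law_viscosity(eta0, gamma_dot, n):
    """Apparent viscosity of a power-law fluid: eta = eta0 * gamma_dot**(n - 1)."""
    return eta0 * gamma_dot ** (n - 1)

def classify(n):
    """Classify the fluid by its power-law index n."""
    if n < 1:
        return "shear thinning"
    if n > 1:
        return "shear thickening"
    return "Newtonian"

# A shear-thinning fluid (n = 0.5) gets less viscous as the shear rate rises:
print(power_law_viscosity(1.0, 4.0, 0.5))  # 0.5
print(classify(0.5))  # shear thinning
```

For a Newtonian fluid (n = 1) the expression collapses to η = η0, independent of shear rate, matching the third case above.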
In rheometry, shear forces are applied t
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 3:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 4:::
In fluid dynamics, turbulence or turbulent flow is fluid motion characterized by chaotic changes in pressure and flow velocity. It is in contrast to a laminar flow, which occurs when a fluid flows in parallel layers, with no disruption between those layers.
Turbulence is commonly observed in everyday phenomena such as surf, fast flowing rivers, billowing storm clouds, or smoke from a chimney, and most fluid flows occurring in nature or created in engineering applications are turbulent. Turbulence is caused by excessive kinetic energy in parts of a fluid flow, which overcomes the damping effect of the fluid's viscosity. For this reason turbulence is commonly realized in low viscosity fluids. In general terms, in turbulent flow, unsteady vortices appear of many sizes which interact with each other, consequently drag due to friction effects increases. This increases the energy needed to pump fluid through a pipe.
The onset of turbulence can be predicted by the dimensionless Reynolds number, the ratio of kinetic energy to viscous damping in a fluid flow. However, turbulence has long resisted detailed physical analysis, and the interactions within turbulence create a very complex phenomenon. Richard Feynman described turbulence as the most important unsolved problem in classical physics.
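As a hedged illustration of the Reynolds-number criterion described above (the thresholds and fluid properties below are common textbook values for pipe flow, not from the source):

```python
def reynolds_number(density, velocity, length, viscosity):
    """Re = rho * v * L / mu -- ratio of inertial to viscous forces."""
    return density * velocity * length / viscosity

def pipe_flow_regime(re, laminar_limit=2300.0, turbulent_limit=4000.0):
    """Classify pipe flow by Reynolds number; transition values vary with geometry."""
    if re < laminar_limit:
        return "laminar"
    if re > turbulent_limit:
        return "turbulent"
    return "transitional"

# Water (rho ~ 1000 kg/m^3, mu ~ 1e-3 Pa*s) in a 0.05 m pipe at 1 m/s:
re = reynolds_number(1000.0, 1.0, 0.05, 1e-3)
print(re, pipe_flow_regime(re))  # 50000.0 turbulent
```

A low-viscosity fluid drives Re up, which is why turbulence is "commonly realized in low viscosity fluids" as the passage notes.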
The turbulence intensity affects many fields, for examples fish ecology, air pollution, precipitation, and climate change.
Examples of turbulence
Smoke rising from a cigarette. For the first few centimeters, the smoke is laminar. The smoke plume becomes turbulent as its Reynolds number increases with increases in flow velocity and characteristic length scale.
Flow over a golf ball. (This can be best understood by considering the golf ball to be stationary, with air flowing over it.) If the golf ball were smooth, the boundary layer flow over the front of the sphere would be laminar at typical conditions. However, the boundary layer would separate early, as the pressure gradient s
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What flows like taffy or hot wax?
A. water
B. sand dunes
C. gas
D. molten rock
Answer:
|
|
ai2_arc-268
|
multiple_choice
|
A carpenter covered a piece of wood with a thin sheet of paper. He struck the covered piece of wood with a hammer. The impact left a small hole in the paper that smelled of smoke. Which kind of energy transfer did this event most likely demonstrate?
|
[
"chemical to thermal",
"mechanical to thermal",
"mechanical to chemical",
"chemical to mechanical"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include :
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering : Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 2:::
Heat transfer is a discipline of thermal engineering that concerns the generation, use, conversion, and exchange of thermal energy (heat) between physical systems. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Engineers also consider the transfer of mass of differing chemical species (mass transfer in the form of advection), either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system.
Heat conduction, also called diffusion, is the direct microscopic exchanges of kinetic energy of particles (such as molecules) or quasiparticles (such as lattice waves) through the boundary between two systems. When an object is at a different temperature from another body or its surroundings, heat flows so that the body and the surroundings reach the same temperature, at which point they are in thermal equilibrium. Such spontaneous heat transfer always occurs from a region of high temperature to another region of lower temperature, as described in the second law of thermodynamics.
Heat convection occurs when the bulk flow of a fluid (gas or liquid) carries its heat through the fluid. All convective processes also move heat partly by diffusion, as well. The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". The former process is often called "forced convection." In this case, the fluid is forced to flow by use of a pump, fan, or other mechanical means.
Thermal radiation occurs through a vacuum or any transparent medium (solid or fluid or gas). It is the transfer of energy by means of photons or electromagnetic waves governed by the same laws.
Overview
Heat
Document 3:::
Energy transformation, also known as energy conversion, is the process of changing energy from one form to another. In physics, energy is a quantity that provides the capacity to perform work (e.g. lifting an object) or provide heat. In addition to being converted, according to the law of conservation of energy, energy is transferable to a different location or object, but it cannot be created or destroyed.
The energy in many of its forms may be used in natural processes, or to provide some service to society such as heating, refrigeration, lighting or performing mechanical work to operate machines. For example, to heat a home, the furnace burns fuel, whose chemical potential energy is converted into thermal energy, which is then transferred to the home's air to raise its temperature.
Limitations in the conversion of thermal energy
Conversions to thermal energy from other forms of energy may occur with 100% efficiency. Conversion among non-thermal forms of energy may occur with fairly high efficiency, though there is always some energy dissipated thermally due to friction and similar processes. Sometimes the efficiency is close to 100%, such as when potential energy is converted to kinetic energy as an object falls in a vacuum. This also applies to the opposite case; for example, an object in an elliptical orbit around another body converts its kinetic energy (speed) into gravitational potential energy (distance from the other object) as it moves away from its parent body. When it reaches the furthest point, it will reverse the process, accelerating and converting potential energy into kinetic. Since space is a near-vacuum, this process has close to 100% efficiency.
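The near-lossless potential-to-kinetic conversion described above can be checked numerically; the mass and height below are illustrative, not from the source:

```python
import math

def impact_speed(height, g=9.81):
    """Free fall in vacuum: m*g*h converts entirely to 0.5*m*v^2,
    so v = sqrt(2*g*h), independent of mass."""
    return math.sqrt(2.0 * g * height)

h = 20.0  # metres, hypothetical drop height
v = impact_speed(h)

# Energy bookkeeping for a 1 kg mass: PE lost equals KE gained.
pe = 1.0 * 9.81 * h
ke = 0.5 * 1.0 * v ** 2
assert abs(pe - ke) < 1e-9
print(round(v, 2))  # 19.81
```

The assertion makes the 100%-efficiency claim concrete: in a vacuum, every joule of potential energy reappears as kinetic energy.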
Thermal energy is unique because in most cases it cannot be converted to other forms of energy. Only a difference in the density of thermal/heat energy (temperature) can be used to perform work, and the efficiency of this conversion will be (much) less than 100%. This is because t
Document 4:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A carpenter covered a piece of wood with a thin sheet of paper. He struck the covered piece of wood with a hammer. The impact left a small hole in the paper that smelled of smoke. Which kind of energy transfer did this event most likely demonstrate?
A. chemical to thermal
B. mechanical to thermal
C. mechanical to chemical
D. chemical to mechanical
Answer:
|
|
ai2_arc-669
|
multiple_choice
|
Solar cells absorb energy from the sun. In order to use this energy to power household appliances, solar cells must convert the absorbed energy to
|
[
"heat.",
"light.",
"radiation.",
"electricity."
] |
D
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 1:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 2:::
For description and history, see Solar cell
A solar cell (also called photovoltaic cell or photoelectric cell) is a solid state electrical device that converts the energy of light directly into electricity by the photovoltaic effect, which is a physical and chemical phenomenon. It is a form of photoelectric cell, defined as a device whose electrical characteristics, such as current, voltage or resistance, vary when exposed to light.
The following are the different types of solar cells.
Amorphous Silicon solar cell (a-Si)
Biohybrid solar cell
Cadmium telluride solar cell (CdTe)
Concentrated PV cell (CVP and HCVP)
Copper indium gallium selenide solar cells (CI(G)S)
Crystalline silicon solar cell (c-Si)
Float-zone silicon
Dye-sensitized solar cell (DSSC)
Gallium arsenide germanium solar cell (GaAs)
Hybrid solar cell
Luminescent solar concentrator cell (LSC)
Micromorph (tandem-cell using a-Si/μc-Si)
Monocrystalline solar cell (mono-Si)
Multi-junction solar cell (MJ)
Nanocrystal solar cell
Organic solar cell (OPV)
Perovskite solar cell
Photoelectrochemical cell (PEC)
Plasmonic solar cell
Polycrystalline solar cell (multi-Si)
Quantum dot solar cell
Solid-state solar cell
Thin-film solar cell (TFSC)
Wafer solar cell, or wafer-based solar cell crystalline
Non-concentrated heterogeneous PV cell
Solar cells
Silicon solar cells
Thin-film cells
Infrared solar cells
Silicon forms
Semiconductor materials
Document 3:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 4:::
A solar cell or photovoltaic cell (PV cell) is an electronic device that converts the energy of light directly into electricity by means of the photovoltaic effect. It is a form of photoelectric cell, a device whose electrical characteristics (such as current, voltage, or resistance) vary when exposed to light. Individual solar cell devices are often the electrical building blocks of photovoltaic modules, known colloquially as "solar panels". The common single-junction silicon solar cell can produce a maximum open-circuit voltage of approximately 0.5 to 0.6 volts.
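Since cells wired in series add their voltages, a module's open-circuit voltage scales roughly linearly with the cell count; the per-cell figure below uses the upper end of the 0.5–0.6 V range quoted above, and the 36-cell module size is a common but assumed example:

```python
def module_open_circuit_voltage(cells_in_series, v_oc_per_cell=0.6):
    """Open-circuit voltage of a series string: n_cells * V_oc(cell)."""
    return cells_in_series * v_oc_per_cell

# A common 36-cell silicon module at ~0.6 V per cell:
print(round(module_open_circuit_voltage(36), 1))  # 21.6
```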
Photovoltaic cells may operate under sunlight or artificial light. In addition to producing energy, they can be used as a photodetector (for example infrared detectors), detecting light or other electromagnetic radiation near the visible range, or measuring light intensity.
The operation of a PV cell requires three basic attributes:
The absorption of light, generating excitons (bound electron-hole pairs), unbound electron-hole pairs (via excitons), or plasmons.
The separation of charge carriers of opposite types.
The separate extraction of those carriers to an external circuit.
In contrast, a solar thermal collector supplies heat by absorbing sunlight, for the purpose of either direct heating or indirect electrical power generation from heat. A "photoelectrolytic cell" (photoelectrochemical cell), on the other hand, refers either to a type of photovoltaic cell (like that developed by Edmond Becquerel and modern dye-sensitized solar cells), or to a device that splits water directly into hydrogen and oxygen using only solar illumination.
Photovoltaic cells and solar collectors are the two means of producing solar power.
Applications
Assemblies of solar cells are used to make solar modules that generate electrical power from sunlight, as distinguished from a "solar thermal module" or "solar hot water panel". A solar array generates solar power using solar energy.
Vehicular applications
Application
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Solar cells absorb energy from the sun. In order to use this energy to power household appliances, solar cells must convert the absorbed energy to
A. heat.
B. light.
C. radiation.
D. electricity.
Answer:
|
|
sciq-11377
|
multiple_choice
|
The process of moving from areas of high amounts to areas of low amounts is called what?
|
[
"deposition",
"transfer",
"filtration",
"diffusion"
] |
D
|
Relevant Documents:
Document 0:::
Sorption is a physical and chemical process by which one substance becomes attached to another. Specific cases of sorption are treated in the following articles:
Absorption "the incorporation of a substance in one state into another of a different state" (e.g., liquids being absorbed by a solid or gases being absorbed by a liquid);
Adsorption The physical adherence or bonding of ions and molecules onto the surface of another phase (e.g., reagents adsorbed to a solid catalyst surface);
Ion exchange An exchange of ions between two electrolytes or between an electrolyte solution and a complex.
The reverse of sorption is desorption.
Sorption rate
The adsorption and absorption rate of a diluted solute in gas or liquid solution to a surface or interface can be calculated using Fick's laws of diffusion.
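A minimal sketch of Fick's first law in one dimension, which underlies the sorption-rate calculation mentioned above (the diffusivity and concentrations below are illustrative values, not from the source):

```python
def fick_flux(diffusivity, c_high, c_low, distance):
    """Fick's first law in 1-D: J = -D * dC/dx.
    Returns the magnitude of the flux, directed from the high- toward
    the low-concentration side."""
    return diffusivity * (c_high - c_low) / distance

# D = 1e-9 m^2/s (typical small solute in water), 100 mol/m^3 drop over 1 mm:
print(fick_flux(1e-9, 100.0, 0.0, 1e-3))  # ~1e-4 mol/(m^2 s)
```

The sign convention in the docstring reflects that net diffusion always runs down the concentration gradient, which is also the answer to the diffusion question above.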
See also
Sorption isotherm
Document 1:::
A breakthrough curve in adsorption is the course of the effluent adsorptive concentration at the outlet of a fixed bed adsorber. Breakthrough curves are important for adsorptive separation technologies and for the characterization of porous materials.
Importance
Since almost all adsorptive separation processes are dynamic, meaning they run under flow, porous materials intended for those applications have to be tested for separation performance under flow as well. Since separation processes run with mixtures of different components, measuring several breakthrough curves yields thermodynamic mixture equilibria (mixture sorption isotherms) that are hardly accessible with static manometric sorption characterization. This enables the determination of sorption selectivities in the gaseous and liquid phase.
The determination of breakthrough curves is the foundation of many other processes, like the pressure swing adsorption. Within this process, the loading of one adsorber is equivalent to a breakthrough experiment.
Measurement
A fixed bed of porous materials (e.g. activated carbons and zeolites) is pressurized and purged with a carrier gas. After becoming stationary one or more adsorptives are added to the carrier gas, resulting in a step-wise change of the inlet concentration. This is in contrast to chromatographic separation processes, where pulse-wise changes of the inlet concentrations are used. The course of the adsorptive concentrations at the outlet of the fixed bed are monitored.
Results
Integration of the area above the entire breakthrough curve gives the maximum loading of the adsorptive material. Additionally, the duration of the breakthrough experiment until a certain threshold of the adsorptive concentration at the outlet can be measured, which enables the calculation of a technically usable sorption capacity. Up to this time, the quality of the product stream can be maintained. The shape of the breakthrough curves contains informat
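The loading calculation described above (integrating the area above the breakthrough curve) can be sketched numerically with the trapezoid rule. The function name, flow rate, bed mass, and concentration series below are illustrative assumptions, not values from the source.

```python
# Sketch: maximum loading from a breakthrough curve by integrating the
# area between the constant inlet concentration and the measured outlet
# concentration (the "area above the curve"). All numbers are made up.

def max_loading(times, c_out, c_in, flow_rate, bed_mass):
    """Adsorbed amount per unit adsorbent mass (mol/kg).

    times     -- measurement times (s)
    c_out     -- outlet concentrations at those times (mol/m^3)
    c_in      -- constant inlet concentration (mol/m^3)
    flow_rate -- volumetric flow through the bed (m^3/s)
    bed_mass  -- mass of adsorbent in the bed (kg)
    """
    area = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        # Trapezoid rule on (c_in - c_out), the deficit removed by the bed.
        area += 0.5 * ((c_in - c_out[i]) + (c_in - c_out[i - 1])) * dt
    return flow_rate * area / bed_mass

# Idealized step breakthrough between t = 50 s and t = 75 s:
t = [0, 25, 50, 75, 100]
c = [0.0, 0.0, 0.0, 1.0, 1.0]
print(max_loading(t, c, c_in=1.0, flow_rate=1e-4, bed_mass=0.01))  # 0.625 mol/kg
```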
Document 2:::
Colloid-facilitated transport designates a transport process by which colloidal particles serve as transport vector
of diverse contaminants in the surface water (sea water, lakes, rivers, fresh water bodies) and in underground water circulating in fissured rocks
(limestone, sandstone, granite, ...). The transport of colloidal particles in surface soils and in the ground can also occur, depending on the soil structure, soil compaction, and the particle size, but the importance of colloidal transport was only given sufficient attention during the 1980s.
Radionuclides, heavy metals, and organic pollutants easily sorb onto colloids suspended in water, which can then act as contaminant carriers.
Various types of colloids are recognised: inorganic colloids (clay particles, silicates, iron oxy-hydroxides, ...), organic colloids (humic and fulvic substances). When heavy metals or radionuclides form their own pure colloids, the term "Eigencolloid" is used to designate pure phases, e.g., Tc(OH)4, Th(OH)4, U(OH)4, Am(OH)3. Colloids have been suspected for the long range transport of plutonium on the Nevada Nuclear Test Site. They have been the subject of detailed studies for many years. However, the mobility of inorganic colloids is very low in compacted bentonites and in deep clay formations
because of the process of ultrafiltration occurring in dense clay membrane.
The question is less clear for small organic colloids often mixed in porewater with truly dissolved organic molecules.
See also
Colloid
Dispersion
DLVO theory (from Derjaguin, Landau, Verwey and Overbeek)
Double layer (electrode)
Double layer (interfacial)
Double layer forces
Gouy-Chapman model
Eigencolloid
Electrical double layer (EDL)
Flocculation
Hydrosol
Interface
Interface and colloid science
Nanoparticle
Peptization (the inverse of flocculation)
Sol (colloid)
Sol-gel
Streaming potential
Suspension
Zeta potential
Document 3:::
The Stefan flow, occasionally called Stefan's flow, is a transport phenomenon concerning the movement of a chemical species by a flowing fluid (typically in the gas phase) that is induced to flow by the production or removal of the species at an interface. Any process that adds the species of interest to or removes it from the flowing fluid may cause the Stefan flow, but the most common processes include evaporation, condensation, chemical reaction, sublimation, ablation, adsorption, absorption, and desorption. It was named after the Slovenian physicist, mathematician, and poet Josef Stefan for his early work on calculating evaporation rates.
The Stefan flow is distinct from diffusion as described by Fick's law, but diffusion almost always also occurs in multi-species systems that are experiencing the Stefan flow. In systems undergoing one of the species addition or removal processes mentioned previously, the addition or removal generates a mean flow in the flowing fluid as the fluid next to the interface is displaced by the production or removal of additional fluid by the processes occurring at the interface. The transport of the species by this mean flow is the Stefan flow. When concentration gradients of the species are also present, diffusion transports the species relative to the mean flow. The total transport rate of the species is then given by a summation of the Stefan flow and diffusive contributions.
An example of the Stefan flow occurs when a droplet of liquid evaporates in air. In this case, the vapor/air mixture surrounding the droplet is the flowing fluid, and liquid/vapor boundary of the droplet is the interface. As heat is absorbed by the droplet from the environment, some of the liquid evaporates into vapor at the surface of the droplet, and flows away from the droplet as it is displaced by additional vapor evaporating from the droplet. This process causes the flowing medium to move away from the droplet at some mean speed that is dependent on
Document 4:::
In chemistry, deposition occurs when molecules settle out of a solution.
Deposition can be viewed as a reverse process to dissolution or particle re-entrainment.
See also
Atomic layer deposition
Chemical vapor deposition
Deposition (physics)
Fouling
Physical vapor deposition
Thin-film deposition
Fused filament fabrication
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The process of moving from areas of high amounts to areas of low amounts is called what?
A. deposition
B. transfer
C. filtration
D. diffusion
Answer:
|
|
sciq-7722
|
multiple_choice
|
What "kind" of water may take longer to become contaminated than surface water, while the natural cleaning process may take longer?
|
[
"dam water",
"spring water",
"lake water",
"groundwater"
] |
D
|
Relevant Documents:
Document 0:::
Groundwater remediation is the process that is used to treat polluted groundwater by removing the pollutants or converting them into harmless products. Groundwater is water present below the ground surface that saturates the pore space in the subsurface. Globally, between 25 per cent and 40 per cent of the world's drinking water is drawn from boreholes and dug wells. Groundwater is also used by farmers to irrigate crops and by industries to produce everyday goods. Most groundwater is clean, but groundwater can become polluted, or contaminated as a result of human activities or as a result of natural conditions.
The many and diverse activities of humans produce innumerable waste materials and by-products. Historically, the disposal of such waste has not been subject to many regulatory controls. Consequently, waste materials have often been disposed of or stored on land surfaces, from which they percolate into the underlying groundwater. As a result, the contaminated groundwater is unsuitable for use.
Current practices can still impact groundwater, such as the over application of fertilizer or pesticides, spills from industrial operations, infiltration from urban runoff, and leaking from landfills. Using contaminated groundwater causes hazards to public health through poisoning or the spread of disease, and the practice of groundwater remediation has been developed to address these issues. Contaminants found in groundwater cover a broad range of physical, inorganic chemical, organic chemical, bacteriological, and radioactive parameters. Pollutants and contaminants can be removed from groundwater by applying various techniques, thereby bringing the water to a standard that is commensurate with various intended uses.
Techniques
Ground water remediation techniques span biological, chemical, and physical treatment technologies. Most ground water treatment techniques utilize a combination of technologies. Some of the biological treatment techniques include bioaugmentation,
Document 1:::
Purified water is water that has been mechanically filtered or processed to remove impurities and make it suitable for use. Distilled water was, formerly, the most common form of purified water, but, in recent years, water is more frequently purified by other processes including capacitive deionization, reverse osmosis, carbon filtering, microfiltration, ultrafiltration, ultraviolet oxidation, or electrodeionization. Combinations of a number of these processes have come into use to produce ultrapure water of such high purity that its trace contaminants are measured in parts per billion (ppb) or parts per trillion (ppt).
Purified water has many uses, largely in the production of medications, in science and engineering laboratories and industries, and is produced in a range of purities. It is also used in the commercial beverage industry as the primary ingredient of any given trademarked bottling formula, in order to maintain product consistency. It can be produced on-site for immediate use or purchased in containers. Purified water in colloquial English can also refer to water that has been treated ("rendered potable") to neutralize, but not necessarily remove contaminants considered harmful to humans or animals.
Parameters of water purity
Purified water is usually produced by the purification of drinking water or ground water. The impurities that may need to be removed are:
inorganic ions (typically monitored as electrical conductivity or resistivity or specific tests)
organic compounds (typically monitored as TOC or by specific tests)
bacteria (monitored by total viable counts or epifluorescence)
endotoxins and nucleases (monitored by LAL or specific enzyme tests)
particulates (typically controlled by filtration)
gases (typically managed by degassing when required)
Purification methods
Distillation
Distilled water is produced by a process of distillation. Distillation involves boiling the water and then condensing the vapor into a clean container, leaving sol
Document 2:::
A boil-water advisory (BWA), boil-water notice, boil-water warning, boil-water order, or boil order is a public-health advisory or directive issued by governmental or other health authorities to consumers when a community's drinking water is or could be contaminated by pathogens.
Under a BWA, the Centers for Disease Control and Prevention recommends that water be brought to a rolling boil for one minute before it is consumed in order to kill protozoa, bacteria, and viruses. At altitudes above , boiling should be extended to 3 minutes, as the lower boiling point at high altitudes requires more time to kill such organisms. A boil water advisory usually lasts up to 24-48 hours, but sometimes more.
BWAs are typically issued when monitoring of water being served to consumers detects E. coli or other microbiological indicators of sewage contamination. Another reason for a BWA is a failure of distribution system integrity evidenced by a loss of system pressure. While loss of pressure does not necessarily mean the water has been contaminated, it does mean that pathogens may be able to enter the piped-water system and thus be carried to consumers. In the United States, this has been defined as a drop below .
History
John Snow's 1849 recommendation that water be "filtered and boiled before it is used" is one of the first practical applications of the germ theory of disease in the area of public health and is the antecedent to the modern boil water advisory. Snow demonstrated a clear understanding of germ theory in his writings. He first published his theory in an 1849 essay On the Mode of Communication of Cholera, in which he correctly suggested that the fecal-oral route was the mode of communication, and that the disease replicated itself in the lower intestines. Snow later went so far as to accurately propose in his 1855 edition of the work that the structure of cholera was that of a cell. Snow's ideas were not fully accepted until years after his death in 1858.
The
Document 3:::
Ultrapure water (UPW), high-purity water or highly purified water (HPW) is water that has been purified to uncommonly stringent specifications. Ultrapure water is a term commonly used in manufacturing to emphasize the fact that the water is treated to the highest levels of purity for all contaminant types, including: organic and inorganic compounds; dissolved and particulate matter; volatile and non-volatile; reactive, and inert; hydrophilic and hydrophobic; and dissolved gases.
UPW and the commonly used term deionized (DI) water are not the same. In addition to the fact that UPW has organic particles and dissolved gases removed, a typical UPW system has three stages: a pretreatment stage to produce purified water, a primary stage to further purify the water, and a polishing stage, the most expensive part of the treatment process.
A number of organizations and groups develop and publish standards associated with the production of UPW. For microelectronics and power, they include Semiconductor Equipment and Materials International (SEMI) (microelectronics and photovoltaic), American Society for Testing and Materials International (ASTM International) (semiconductor, power), Electric Power Research Institute (EPRI) (power), American Society of Mechanical Engineers (ASME) (power), and International Association for the Properties of Water and Steam (IAPWS) (power). Pharmaceutical plants follow water quality standards as developed by pharmacopeias, of which three examples are the United States Pharmacopeia, European Pharmacopeia, and Japanese Pharmacopeia.
The most widely used requirements for UPW quality are documented by ASTM D5127 "Standard Guide for Ultra-Pure Water Used in the Electronics and Semiconductor Industries" and SEMI F63 "Guide for ultrapure water used in semiconductor processing".
Ultra pure water is also used as boiler feedwater in the UK AGR fleet.
Sources and control
Bacteria, particles, organic, and inorganic sources of contamination vary depend
Document 4:::
Wet Processing Engineering is one of the major streams in Textile Engineering or Textile manufacturing which refers to the engineering of textile chemical processes and associated applied science. The other three streams in textile engineering are yarn engineering, fabric engineering, and apparel engineering. The processes of this stream are involved or carried out in an aqueous stage. Hence, it is called a wet process which usually covers pre-treatment, dyeing, printing, and finishing.
The wet process is usually done in the manufactured assembly of interlacing fibers, filaments and yarns, having a substantial surface (planar) area in relation to its thickness, and adequate mechanical strength to give it a cohesive structure. In other words, the wet process is done on manufactured fiber, yarn and fabric.
All of these stages require an aqueous medium which is created by water. A massive amount of water is required in these processes per day. It is estimated that, on an average, almost 50–100 liters of water is used to process only 1 kilogram of textile goods, depending on the process engineering and applications. Water can be of various qualities and attributes. Not all water can be used in the textile processes; it must have some certain properties, quality, color and attributes of being used. This is the reason why water is a prime concern in wet processing engineering.
Water
Water consumption and discharge of wastewater are the two major concerns. The textile industry uses a large amount of water in its varied processes especially in wet operations such as pre-treatment, dyeing, and printing. Water is required as a solvent of various dyes and chemicals and it is used in washing or rinsing baths in different steps. Water consumption depends upon the application methods, processes, dyestuffs, equipment/machines and technology which may vary mill to mill and material composition. Longer processing sequences, processing of extra dark colors and reprocessing lead
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What "kind" of water may take longer to become contaminated than surface water, while the natural cleaning process may take longer?
A. dam water
B. spring water
C. lake water
D. groundwater
Answer:
|
|
sciq-9624
|
multiple_choice
|
The chloroplast integrates the two stages of what process?
|
[
"breathing",
"polarization",
"defecation",
"photosynthesis"
] |
D
|
Relevant Documents:
Document 0:::
In contrast to the Cladophorales where nuclei are organized in regularly spaced cytoplasmic domains, the cytoplasm of Bryopsidales exhibits streaming, enabling transportation of organelles, transcripts and nutrients across the plant.
The Sphaeropleales also contain several common freshwat
Document 1:::
Photosynthesis systems are electronic scientific instruments designed for non-destructive measurement of photosynthetic rates in the field. Photosynthesis systems are commonly used in agronomic and environmental research, as well as studies of the global carbon cycle.
How photosynthesis systems function
Photosynthesis systems function by measuring gas exchange of leaves. Atmospheric carbon dioxide is taken up by leaves in the process of photosynthesis, where CO2 is used to generate sugars in a molecular pathway known as the Calvin cycle. This draw-down of CO2 induces more atmospheric CO2 to diffuse through stomata into the air spaces of the leaf. While stomata are open, water vapor can easily diffuse out of plant tissues, a process known as transpiration. It is this exchange of CO2 and water vapor that is measured as a proxy of photosynthetic rate.
The basic components of a photosynthetic system are the leaf chamber, infrared gas analyzer (IRGA), batteries and a console with keyboard, display and memory. Modern 'open system' photosynthesis systems also incorporate a miniature disposable compressed gas cylinder and gas supply pipes. This is because external air has natural fluctuations in CO2 and water vapor content, which can introduce measurement noise. Modern 'open system' photosynthesis systems remove the CO2 and water vapour by passage over soda lime and Drierite, then add CO2 at a controlled rate to give a stable concentration. Some systems are also equipped with temperature control and a removable light unit, so the effect of these environmental variables can also be measured.
The leaf to be analysed is placed in the leaf chamber. The CO2 concentration is measured by the infrared gas analyzer. The IRGA shines infrared light through a gas sample onto a detector. CO2 in the sample absorbs energy, so the reduction in the level of energy that reaches the detector indicates the CO2 concentration. Modern IRGAs take account of the fact that water vapour absorbs energy at similar wavelengths as CO2. Modern IRG
Document 2:::
C3 carbon fixation is the most common of three metabolic pathways for carbon fixation in photosynthesis, the other two being C4 and CAM. This process converts carbon dioxide and ribulose bisphosphate (RuBP, a 5-carbon sugar) into two molecules of 3-phosphoglycerate through the following reaction:
CO2 + H2O + RuBP → (2) 3-phosphoglycerate
This reaction was first discovered by Melvin Calvin, Andrew Benson and James Bassham in 1950. C3 carbon fixation occurs in all plants as the first step of the Calvin–Benson cycle. (In C4 and CAM plants, carbon dioxide is drawn out of malate and into this reaction rather than directly from the air.)
Plants that survive solely on C3 fixation (C3 plants) tend to thrive in areas where sunlight intensity is moderate, temperatures are moderate, carbon dioxide concentrations are around 200 ppm or higher, and groundwater is plentiful. The C3 plants, originating during the Mesozoic and Paleozoic eras, predate the C4 plants and still represent approximately 95% of Earth's plant biomass, including important food crops such as rice, wheat, soybeans and barley.
C3 plants cannot grow in very hot areas at today's atmospheric CO2 level (significantly depleted during hundreds of millions of years from above 5000 ppm) because RuBisCO incorporates more oxygen into RuBP as temperatures increase. This leads to photorespiration (also known as the oxidative photosynthetic carbon cycle, or C2 photosynthesis), which leads to a net loss of carbon and nitrogen from the plant and can therefore limit growth.
C3 plants lose up to 97% of the water taken up through their roots by transpiration. In dry areas, C3 plants shut their stomata to reduce water loss, but this stops CO2 from entering the leaves and therefore reduces the CO2 concentration in the leaves. This lowers the CO2:O2 ratio and therefore also increases photorespiration. C4 and CAM plants have adaptations that allow them to survive in hot and dry areas, and they can therefore out-compete
Document 3:::
Ecophysiology (from Greek , oikos, "house(hold)"; , physis, "nature, origin"; and , -logia), environmental physiology or physiological ecology is a biological discipline that studies the response of an organism's physiology to environmental conditions. It is closely related to comparative physiology and evolutionary physiology. Ernst Haeckel's coinage bionomy is sometimes employed as a synonym.
Plants
Plant ecophysiology is concerned largely with two topics: mechanisms (how plants sense and respond to environmental change) and scaling or integration (how the responses to highly variable conditions—for example, gradients from full sunlight to 95% shade within tree canopies—are coordinated with one another), and how their collective effect on plant growth and gas exchange can be understood on this basis.
In many cases, animals are able to escape unfavourable and changing environmental factors such as heat, cold, drought or floods, while plants are unable to move away and therefore must endure the adverse conditions or perish (animals go places, plants grow places). Plants are therefore phenotypically plastic and have an impressive array of genes that aid in acclimating to changing conditions. It is hypothesized that this large number of genes can be partly explained by plant species' need to live in a wider range of conditions.
Light
Light is the food of plants, i.e. the form of energy that plants use to build themselves and reproduce. The organs harvesting light in plants are leaves and the process through which light is converted into biomass is photosynthesis. The response of photosynthesis to light is called light response curve of net photosynthesis (PI curve). The shape is typically described by a non-rectangular hyperbola. Three quantities of the light response curve are particularly useful in characterising a plant's response to light intensities. The inclined asymptote has a positive slope representing the efficiency of light use, and is called quantum
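The non-rectangular hyperbola mentioned above as the typical shape of the light response curve can be sketched as follows. This is a hedged illustration of a common parameterization of that model; the function name and all parameter values are assumptions for the example, not taken from the source text.

```python
import math

# Sketch: non-rectangular hyperbola model of the photosynthetic light
# response curve. Parameter values below are illustrative assumptions.

def net_photosynthesis(irradiance, phi, a_max, theta, r_d):
    """Net photosynthetic rate (umol CO2 m^-2 s^-1).

    irradiance -- incident light (umol photons m^-2 s^-1)
    phi        -- apparent quantum yield (initial slope of the curve)
    a_max      -- light-saturated gross photosynthetic rate
    theta      -- curvature parameter (0 < theta <= 1)
    r_d        -- dark respiration rate
    """
    s = phi * irradiance + a_max
    # Smaller root of theta*A^2 - (phi*I + a_max)*A + phi*I*a_max = 0.
    gross = (s - math.sqrt(s * s - 4.0 * theta * phi * irradiance * a_max)) / (2.0 * theta)
    return gross - r_d

# In darkness the model returns -r_d (net loss from dark respiration):
print(net_photosynthesis(0.0, phi=0.05, a_max=20.0, theta=0.8, r_d=1.5))  # -1.5
```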
Document 4:::
Plant Physiology is a monthly peer-reviewed scientific journal that covers research on physiology, biochemistry, cellular and molecular biology, genetics, biophysics, and environmental biology of plants. The journal has been published since 1926 by the American Society of Plant Biologists. The current editor-in-chief is Yunde Zhao (University of California San Diego). According to the Journal Citation Reports, the journal has a 2021 impact factor of 8.005.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The chloroplast integrates the two stages of what process?
A. breathing
B. polarization
C. defecation
D. photosynthesis
Answer:
|
|
sciq-6307
|
multiple_choice
|
The larger surface area of leaves allows them to capture more what?
|
[
"chlorophyll",
"sunlight",
"molecules",
"pollen"
] |
B
|
Relevant Documents:
Document 0:::
A leaf (plural: leaves) is a principal appendage of the stem of a vascular plant, usually borne laterally aboveground and specialized for photosynthesis. Leaves are collectively called foliage, as in "autumn foliage", while the leaves, stem, flower, and fruit collectively form the shoot system. In most leaves, the primary photosynthetic tissue is the palisade mesophyll and is located on the upper side of the blade or lamina of the leaf but in some species, including the mature foliage of Eucalyptus, palisade mesophyll is present on both sides and the leaves are said to be isobilateral. Most leaves are flattened and have distinct upper (adaxial) and lower (abaxial) surfaces that differ in color, hairiness, the number of stomata (pores that intake and output gases), the amount and structure of epicuticular wax and other features. Leaves are mostly green in color due to the presence of a compound called chlorophyll which is essential for photosynthesis as it absorbs light energy from the sun. A leaf with lighter-colored or white patches or edges is called a variegated leaf.
Leaves can have many different shapes, sizes, textures and colors. The broad, flat leaves with complex venation of flowering plants are known as megaphylls and the species that bear them, the majority, as broad-leaved or megaphyllous plants, which also include acrogymnosperms and ferns. In the lycopods, with different evolutionary origins, the leaves are simple (with only a single vein) and are known as microphylls. Some leaves, such as bulb scales, are not above ground. In many aquatic species, the leaves are submerged in water. Succulent plants often have thick juicy leaves, but some leaves are without major photosynthetic function and may be dead at maturity, as in some cataphylls and spines. Furthermore, several kinds of leaf-like structures found in vascular plants are not totally homologous with them. Examples include flattened plant stems called phylloclades and cladodes, and flattened leaf stems
Document 1:::
Photosynthetic capacity (Amax) is a measure of the maximum rate at which leaves are able to fix carbon during photosynthesis. It is typically measured as the amount of carbon dioxide that is fixed per metre squared per second, for example as μmol m−2 sec−1.
Limitations
Photosynthetic capacity is limited by carboxylation capacity and electron transport capacity. For example, in high carbon dioxide concentrations or in low light, the plant is not able to regenerate ribulose-1,5-bisphosphate fast enough (also known RUBP, the acceptor molecule in photosynthetic carbon reduction). So in this case, photosynthetic capacity is limited by electron transport of the light reaction, which generates the NADPH and ATP required for the PCR (Calvin) Cycle, and regeneration of RUBP. On the other hand, in low carbon dioxide concentrations, the capacity of the plant to perform carboxylation (adding carbon dioxide to Rubisco) is limited by the amount of available carbon dioxide, with plenty of Rubisco left over.¹ Light response, or photosynthesis-irradiance, curves display these relationships.
Current Research
Recent studies have shown that photosynthetic capacity in leaves can be increased with an increase in the number of stomata per leaf. This could be important in further crop development engineering to increase the photosynthetic efficiency through increasing diffusion of carbon dioxide into the plant.²
Document 2:::
Specific leaf area (SLA) is the ratio of leaf area to leaf dry mass. The inverse of SLA is Leaf Mass per Area (LMA).
Rationale
Specific leaf area is a ratio indicating how much leaf area a plant builds with a given amount of leaf biomass:
where A is the area of a given leaf or all leaves of a plant, and ML is the dry mass of those leaves. Typical units are m2 kg−1 or mm2 mg−1.
Leaf mass per area (LMA) is its inverse and can mathematically be decomposed in two component variables, leaf thickness (LTh) and leaf density (LD):
Typical units are g.m−2 for LMA, µm for LTh and g.ml−1 for LD.
Both SLA and LMA are frequently used in plant ecology and biology. SLA is one of the components in plant growth analysis, and mathematically scales positively and linearly with the relative growth rate of a plant. LMA mathematically scales positively with the investments plants make per unit leaf area (amount of protein and cell wall; cell number per area) and with leaf longevity. Since linear, positive relationships are more easily analysed than inverse negative relationships, researchers often use either variable, depending on the type of questions asked.
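The two relations above (SLA = A / ML and LMA = leaf thickness × leaf density) can be checked with a short numeric sketch. The function names and the leaf measurements below are illustrative assumptions, not data from the source.

```python
# Sketch of the SLA/LMA relations: SLA = A / M_L, LMA = 1 / SLA,
# and the decomposition LMA = leaf thickness * leaf density.
# All numbers are illustrative, not measurements from the text.

def sla(leaf_area_m2, leaf_dry_mass_kg):
    """Specific leaf area in m^2 kg^-1."""
    return leaf_area_m2 / leaf_dry_mass_kg

def lma_from_components(thickness_m, density_kg_per_m3):
    """Leaf mass per area (kg m^-2) = thickness * density."""
    return thickness_m * density_kg_per_m3

# A 30 cm^2 leaf weighing 0.15 g dry:
s = sla(30e-4, 0.15e-3)   # 20.0 m^2/kg
l = 1.0 / s               # 0.05 kg/m^2, i.e. 50 g/m^2
# The same LMA from a 250 micron thick leaf of density 200 kg/m^3:
assert abs(lma_from_components(250e-6, 200.0) - l) < 1e-12
print(s, l)
```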
Normal Ranges
Normal ranges of SLA and LMA are species-dependent and influenced by growth environment. Table 1 gives normal ranges (~10th and ~90th percentiles) for species growing in the field, for well-illuminated leaves. Aquatic plants generally have very low LMA values, with particularly low numbers reported for species such as Myriophyllum farwelli (2.8 g.m−2) and Potamogeton perfoliatus (3.9 g. m−2). Evergreen shrubs and Gymnosperm trees as well as succulents have particularly high LMA values, with highest values reported for Aloe saponaria (2010 g.m−2) and Agave deserti (2900 g.m−2).
Application
Specific leaf area can be used to estimate the reproductive strategy of a particular plant based upon light and moisture (humidity) levels, among other factors. Specific leaf area is one of the most widely accepted key leaf chara
Document 3:::
Biomass partitioning is the process by which plants divide their energy among their leaves, stems, roots, and reproductive parts. These four main components of the plant have important morphological roles: leaves take in CO2 and energy from the sun to create carbon compounds, stems grow above competitors to reach sunlight, roots absorb water and mineral nutrients from the soil while anchoring the plant, and reproductive parts facilitate the continuation of species. Plants partition biomass in response to limits or excesses in resources like sunlight, carbon dioxide, mineral nutrients, and water and growth is regulated by a constant balance between the partitioning of biomass between plant parts. An equilibrium between root and shoot growth occurs because roots need carbon compounds from photosynthesis in the shoot and shoots need nitrogen absorbed from the soil by roots. Allocation of biomass is put towards the limit to growth; a limit below ground will focus biomass to the roots and a limit above ground will favor more growth in the shoot.
Plants photosynthesize to create carbon compounds for growth and energy storage. Sugars created through photosynthesis are then transported by phloem using the pressure flow system and are used for growth or stored for later use. Biomass partitioning causes this sugar to be divided in a way that maximizes growth, provides the most fitness, and allows for successful reproduction. Plant hormones play a large part in biomass partitioning since they affect differentiation and growth of cells and tissues by changing the expression of genes and altering morphology. By responding to environmental stimuli and partitioning biomass accordingly, plants are better able to take in resources from their environment and maximize growth.
Abiotic Factors of Partitioning
It is important for plants to be able to balance their absorption and utilization of available resources and they adjust their growth in order to acquire more of the scarce, g
Document 4:::
The leaf angle distribution (or LAD) of a plant canopy refers to the mathematical description of the angular orientation of the leaves in the vegetation. Specifically, if each leaf is conceptually represented by a small flat plate, its orientation can be described with the zenith and the azimuth angles of the surface normal to that plate. If the leaf has a complex structure and is not flat, it may be necessary to approximate the actual leaf by a set of small plates, in which case there may be a number of leaf normals and associated angles. The LAD describes the statistical distribution of these angles.
Examples of leaf angle distributions
Different plant canopies exhibit different LADs: For instance, grasses and willows have their leaves largely hanging vertically (such plants are said to have an erectophile LAD), while oaks tend to maintain their leaves more or less horizontally (these species are known as having a planophile LAD). In some tree species, leaves near the top of the canopy follow an erectophile LAD while those at the bottom of the canopy are more planophile. This may be interpreted as a strategy by that plant species to maximize exposure to light, an important constraint to growth and development. Yet other species (notably sunflower) are capable of reorienting their leaves throughout the day to optimize exposure to the Sun: this is known as heliotropism.
Importance of LAD
The LAD of a plant canopy has a significant impact on the reflectance, transmittance and absorption of solar light in the vegetation layer, and thus also on its growth and development. LAD can also serve as a quantitative index to monitor the state of the plants, as wilting usually results in more erectophile LADs. Models of radiation transfer need to take this distribution into account to predict, for instance, the albedo or the productivity of the canopy.
Measuring LAD
Accurately measuring the statistical properties of leaf angle distributions is not a trivial matter, especi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The larger surface area of leaves allows them to capture more what?
A. chlorophyll
B. sunlight
C. molecules
D. pollen
Answer:
|
|
sciq-7498
|
multiple_choice
|
What causes the red color of laterite soils?
|
[
"iron oxides",
"toxins",
"oxygen",
"erosion"
] |
A
|
Relevant Documents:
Document 0:::
Molybdenite is a mineral of molybdenum disulfide, MoS2. Similar in appearance and feel to graphite, molybdenite has a lubricating effect that is a consequence of its layered structure. The atomic structure consists of a sheet of molybdenum atoms sandwiched between sheets of sulfur atoms. The Mo-S bonds are strong, but the interaction between the sulfur atoms at the top and bottom of separate sandwich-like tri-layers is weak, resulting in easy slippage as well as cleavage planes.
Molybdenite crystallizes in the hexagonal crystal system as the common polytype 2H and also in the trigonal system as the 3R polytype.
Description
Occurrence
Molybdenite occurs in high temperature hydrothermal ore deposits.
Its associated minerals include pyrite, chalcopyrite, quartz, anhydrite, fluorite, and scheelite. Important deposits include the disseminated porphyry molybdenum deposits at Questa, New Mexico and the Henderson and Climax mines in Colorado. Molybdenite also occurs in porphyry copper deposits of Arizona, Utah, and Mexico.
The element rhenium is always present in molybdenite as a substitute for molybdenum, usually in the parts per million (ppm) range, but often up to 1–2%. High rhenium content results in a structural variety detectable by X-ray diffraction techniques. Molybdenite ores are essentially the only source for rhenium. The presence of the radioactive isotope rhenium-187 and its daughter isotope osmium-187 provides a useful geochronologic dating technique.
Features
Molybdenite is extremely soft with a metallic luster, and is superficially almost identical to graphite, to the point where it is not possible to positively distinguish between the two minerals without scientific equipment. It marks paper in much the same way as graphite. Its distinguishing feature from graphite is its higher specific gravity, as well as its tendency to occur in a matrix.
Uses
Molybdenite is an important ore of molybdenum, and is the most common source of the metal. While
Document 1:::
Ferrallitisation is the process in which rock is changed into a soil consisting of clay (kaolinite) and sesquioxides, in the form of hydrated oxides of iron and aluminium. In humid tropical areas, with consistently high temperatures and rainfall for all or most of the year, chemical weathering rapidly breaks down the rock. This at first produces clays which later also break down to form silica. The silica is removed by leaching and the sesquioxides of iron and aluminium remain, giving the characteristic red colour of many tropical soils. Ferrallitisation is the reverse of podsolisation, where silica remains and the iron and aluminium are removed. In tropical rain forests with rain throughout the year, ferrallitic soils develop. In savanna areas, with alternating dry and wet climates, ferruginous soils occur.
Further reading
Biogeochemical cycle
Land management
Natural resources
Soil science
Document 2:::
There are seven soil deposits in India: alluvial soil, black soil, red soil, laterite soil, arid soil, forest and mountainous soil, and marsh soil. These soils are formed by the sediments brought down by the rivers. They also have varied chemical properties. Sundarbans mangrove swamps are rich in marsh soil.
Major soil deposits
Document 3:::
Desert varnish or rock varnish is an orange-yellow to black coating found on exposed rock surfaces in arid environments. Desert varnish is approximately one micrometer thick and exhibits nanometer-scale layering. Rock rust and desert patina are other terms which are also used for the condition, but less often.
Formation
Desert varnish forms only on physically stable rock surfaces that are no longer subject to frequent precipitation, fracturing or wind abrasion. The varnish is primarily composed of particles of clay along with oxides of iron and manganese. There is also a host of trace elements and almost always some organic matter. The color of the varnish varies from shades of brown to black.
It has been suggested that desert varnish should be investigated as a potential candidate for a "shadow biosphere". However, a 2008 microscopy study posited that desert varnish has already been reproduced with chemistry not involving life in the lab, and that the main component is actually silica and not clay as previously thought. The study notes that desert varnish is an excellent fossilizer for microbes and indicator of water. Desert varnish appears to have been observed by rovers on Mars, and if examined may contain fossilized life from Mars's wet period.
Composition
Originally scientists thought that the varnish was made from substances drawn out of the rocks it coats. Microscopic and microchemical observations, however, show that a major part of varnish is clay, which could only arrive by wind. Clay, then, acts as a substrate to catch additional substances that chemically react together when the rock reaches high temperatures in the desert sun. Wetting by dew is also important in the process.
An important characteristic of black desert varnish is that it has an unusually high concentration of manganese. Manganese is relatively rare in the Earth's crust, making up only 0.12% of its weight. In black desert varnish, however, manganese is 50 to 60 times more abundan
Document 4:::
Features, Events, and Processes (FEP) are terms used in the fields of radioactive waste management, carbon capture and storage, and hydraulic fracturing to define relevant scenarios for safety assessment studies. For a radioactive waste repository, features would include the characteristics of the site, such as the type of soil or geological formation the repository is to be built on or under. Events would include things that may or will occur in the future, like, e.g., glaciations, droughts, earthquakes, or formation of faults. Processes are things that are ongoing, such as the erosion or subsidence of the landform where the site is located on, or near.
Several catalogues of FEPs are publicly available, among others one elaborated for the NEA Clay Club dealing with the disposal of radioactive waste in deep clay formations,
and those compiled for deep crystalline rocks (granite) by Svensk Kärnbränslehantering AB, SKB, the Swedish Nuclear Fuel and Waste Management Company.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What causes the red color of laterite soils?
A. iron oxides
B. toxins
C. oxygen
D. erosion
Answer:
|
|
sciq-4302
|
multiple_choice
|
Animal-like protists are called what?
|
[
"genus",
"larvae",
"bacteria",
"protozoa"
] |
D
|
Relevant Documents:
Document 0:::
Marine protists are defined by their habitat as protists that live in marine environments, that is, in the saltwater of seas or oceans or the brackish water of coastal estuaries. Life originated as marine single-celled prokaryotes (bacteria and archaea) and later evolved into more complex eukaryotes. Eukaryotes are the more developed life forms known as plants, animals, fungi and protists. Protists are the eukaryotes that cannot be classified as plants, fungi or animals. They are mostly single-celled and microscopic. The term protist came into use historically as a term of convenience for eukaryotes that cannot be strictly classified as plants, animals or fungi. They are not a part of modern cladistics because they are paraphyletic (lacking a common ancestor for all descendants).
Most protists are too small to be seen with the naked eye. They are highly diverse organisms currently organised into 18 phyla, but not easy to classify. Studies have shown high protist diversity exists in oceans, deep sea-vents and river sediments, suggesting large numbers of eukaryotic microbial communities have yet to be discovered. There has been little research on mixotrophic protists, but recent studies in marine environments found mixotrophic protists contribute a significant part of the protist biomass. Since protists are eukaryotes (and not prokaryotes) they possess within their cell at least one nucleus, as well as organelles such as mitochondria and Golgi bodies. Many protist species can switch between asexual reproduction and sexual reproduction involving meiosis and fertilization.
In contrast to the cells of prokaryotes, the cells of eukaryotes are highly organised. Plants, animals and fungi are usually multi-celled and are typically macroscopic. Most protists are single-celled and microscopic. But there are exceptions. Some single-celled marine protists are macroscopic. Some marine slime molds have unique life cycles that involve switching between unicellular, colonial, and
Document 1:::
A protist ( ) or protoctist is any eukaryotic organism that is not an animal, plant, or fungus. Protists do not form a natural group, or clade, but an artificial grouping of several independent clades that evolved from the last eukaryotic common ancestor.
Protists were historically regarded as a separate taxonomic kingdom known as Protista or Protoctista. With the advent of phylogenetic analysis and electron microscopy studies, the use of Protista as a formal taxon was gradually abandoned. In modern classifications, protists are spread across several eukaryotic clades called supergroups, such as Archaeplastida (which includes plants), SAR, Obazoa (which includes fungi and animals), Amoebozoa and Excavata.
Protists represent an extremely large genetic and ecological diversity in all environments, including extreme habitats. Their diversity, larger than for all other eukaryotes, has only been discovered in recent decades through the study of environmental DNA, and is still in the process of being fully described. They are present in all ecosystems as important components of the biogeochemical cycles and trophic webs. They exist abundantly and ubiquitously in a variety of forms that evolved multiple times independently, such as free-living algae, amoebae and slime moulds, or as important parasites. Together, they compose an amount of biomass that doubles that of animals. They exhibit varied types of nutrition (such as phototrophy, phagotrophy or osmotrophy), sometimes combining them (in mixotrophy). They present unique adaptations not present in multicellular animals, fungi or land plants. The study of protists is termed protistology.
Definition
There is not a single accepted definition of what protists are. As a paraphyletic assemblage of diverse biological groups, they have historically been regarded as a catch-all taxon that includes any eukaryotic organism (i.e., living beings whose cells possess a nucleus) that is not an animal, a land plant or a dikaryon fung
Document 2:::
Endoparasites
Protozoan organisms
Helminths (worms)
Helminth organisms (also called helminths or intestinal worms) include:
Tapeworms
Flukes
Roundworms
Other organisms
Ectoparasites
Document 3:::
A protist is any eukaryotic organism (that is, an organism whose cells contain a cell nucleus) that is not an animal, plant, or fungus. While it is likely that protists share a common ancestor, the last eukaryotic common ancestor, the exclusion of other eukaryotes means that protists do not form a natural group, or clade. Therefore, some protists may be more closely related to animals, plants, or fungi than they are to other protists. However, like algae, invertebrates and protozoans, the grouping is used for convenience.
Many protists have neither hard parts nor resistant spores, and their fossils are extremely rare or unknown. Examples of such groups include the apicomplexans, most ciliates, some green algae (the Klebsormidiales), choanoflagellates, oomycetes, brown algae, yellow-green algae, Excavata (e.g., euglenids). Some of these have been found preserved in amber (fossilized tree resin) or under unusual conditions (e.g., Paleoleishmania, a kinetoplastid).
Others are relatively common in the fossil record, as the diatoms, golden algae, haptophytes (coccoliths), silicoflagellates, tintinnids (ciliates), dinoflagellates, green algae, red algae, heliozoans, radiolarians, foraminiferans, ebriids and testate amoebae (euglyphids, arcellaceans). Some are used as paleoecological indicators to reconstruct ancient environments.
More probable eukaryote fossils begin to appear at about 1.8 billion years ago, the acritarchs, spherical fossils of likely algal protists. Another possible representative of early fossil eukaryotes are the Gabonionta.
Modern classifications
Systematists today do not treat Protista as a formal taxon, but the term "protist" is still commonly used for convenience in two ways. The most popular contemporary definition is a phylogenetic one, that identifies a paraphyletic group: a protist is any eukaryote that is not an animal, (land) plant, or (true) fungus; this definition excludes many unicellular groups, like the Microsporidia (fungi), many C
Document 4:::
Anti-protist or antiprotistal refers to an anti-parasitic and anti-infective agent which is active against protists. Unfortunately, due to the long-ingrained usage of the term antiprotozoal, the two terms are often confused, when in fact protists are a broader category: beyond "animal-like" (heterotrophic, including parasitic) protozoans, protists also include the "plant-like" (autotrophic) protophyta and the "fungi-like" saprophytic molds. In current biology, the concept of a "protist" and its three subdivisions has been replaced.
See also
Amebicide
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Animal-like protists are called what?
A. genus
B. larvae
C. bacteria
D. protozoa
Answer:
|
|
scienceQA-5524
|
multiple_choice
|
Select the animal.
|
[
"Octopuses eat animals that live underwater.",
"Apple trees can grow fruit.",
"Cypress trees have green leaves.",
"Hydrangea bushes can grow colorful flowers."
] |
A
|
An octopus is an animal. It eats animals that live underwater.
An octopus has two eyes and eight arms.
An apple tree is a plant. It can grow fruit.
People have been growing apples for thousands of years. There are more than 7,500 types of apples!
A cypress tree is a plant. It has green leaves.
The leaves of cypress trees are called needles.
A hydrangea bush is a plant. It can grow colorful flowers.
Hydrangea bushes can have blue, white, purple, or pink flowers.
|
Relevant Documents:
Document 0:::
Animals are multicellular eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million in total. Animals range in size from 8.5 millionths of a metre to long and have complex interactions with each other and their environments, forming intricate food webs. The study of animals is called zoology.
Animals may be listed or indexed by many criteria, including taxonomy, status as endangered species, their geographical location, and their portrayal and/or naming in human culture.
By common name
List of animal names (male, female, young, and group)
By aspect
List of common household pests
List of animal sounds
List of animals by number of neurons
By domestication
List of domesticated animals
By eating behaviour
List of herbivorous animals
List of omnivores
List of carnivores
By endangered status
IUCN Red List endangered species (Animalia)
United States Fish and Wildlife Service list of endangered species
By extinction
List of extinct animals
List of extinct birds
List of extinct mammals
List of extinct cetaceans
List of extinct butterflies
By region
Lists of amphibians by region
Lists of birds by region
Lists of mammals by region
Lists of reptiles by region
By individual (real or fictional)
Real
Lists of snakes
List of individual cats
List of oldest cats
List of giant squids
List of individual elephants
List of historical horses
List of leading Thoroughbred racehorses
List of individual apes
List of individual bears
List of giant pandas
List of individual birds
List of individual bovines
List of individual cetaceans
List of individual dogs
List of oldest dogs
List of individual monkeys
List of individual pigs
List of w
Document 1:::
History of Animals (, Ton peri ta zoia historion, "Inquiries on Animals"; , "History of Animals") is one of the major texts on biology by the ancient Greek philosopher Aristotle, who had studied at Plato's Academy in Athens. It was written in the fourth century BC; Aristotle died in 322 BC.
Generally seen as a pioneering work of zoology, Aristotle frames his text by explaining that he is investigating the what (the existing facts about animals) prior to establishing the why (the causes of these characteristics). The book is thus an attempt to apply philosophy to part of the natural world. Throughout the work, Aristotle seeks to identify differences, both between individuals and between groups. A group is established when it is seen that all members have the same set of distinguishing features; for example, that all birds have feathers, wings, and beaks. This relationship between the birds and their features is recognized as a universal.
The History of Animals contains many accurate eye-witness observations, in particular of the marine biology around the island of Lesbos, such as that the octopus had colour-changing abilities and a sperm-transferring tentacle, that the young of a dogfish grow inside their mother's body, or that the male of a river catfish guards the eggs after the female has left. Some of these were long considered fanciful before being rediscovered in the nineteenth century. Aristotle has been accused of making errors, but some are due to misinterpretation of his text, and others may have been based on genuine observation. He did however make somewhat uncritical use of evidence from other people, such as travellers and beekeepers.
The History of Animals had a powerful influence on zoology for some two thousand years. It continued to be a primary source of knowledge until zoologists in the sixteenth century, such as Conrad Gessner, all influenced by Aristotle, wrote their own studies of the subject.
Context
Aristotle (384–322 BC) studied at Plat
Document 2:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad
Document 3:::
Roshd Biological Education is a quarterly science educational magazine covering recent developments in biology and biology education for a Persian-speaking audience of biology teachers. Founded in 1985, it is published by The Teaching Aids Publication Bureau, Organization for Educational Planning and Research, Ministry of Education, Iran. Roshd Biological Education has an editorial board composed of Iranian biologists, experts in biology education, science journalists and biology teachers.
It is read by both biology teachers and students, as a way of launching innovations and new trends in biology education, and helping biology teachers to teach biology in better and more effective ways.
Magazine layout
As of Autumn 2012, the magazine is laid out as follows:
Editorial—often offering a point of view from the editor in chief on educational and/or biological topics.
Explore—New research methods and results on biology and/or education.
World—Reports and explorations of biology education worldwide.
In Brief—Summaries of research news and discoveries.
Trends—showing how new technology is altering the way we live our lives.
Point of View—Offering personal commentaries on contemporary topics.
Essay or Interview—often with a pioneer of a biological and/or educational researcher or an influential scientific educational leader.
Muslim Biologists—Short histories of Muslim Biologists.
Environment—An article on Iranian environment and its problems.
News and Reports—Offering short news and reports events on biology education.
In Brief—Short articles explaining interesting facts.
Questions and Answers—Questions about biology concepts and their answers.
Book and periodical Reviews—About new publication on biology and/or education.
Reactions—Letter to the editors.
Editorial staff
Mohammad Karamudini, editor in chief
History
Roshd Biological Education started in 1985 together with many other magazines in other fields of science and art. The first editor was Dr. Nouri-Dalooi, th
Document 4:::
The Encyclopedia of Life (EOL) is a free, online encyclopedia intended to document all of the 1.9 million living species known to science. It is compiled from existing trusted databases curated by experts and with the assistance of non-experts throughout the world. It aims to build one "infinitely expandable" page for each species, including video, sound, images, graphics, as well as text. In addition, the Encyclopedia incorporates content from the Biodiversity Heritage Library, which digitizes millions of pages of printed literature from the world's major natural history libraries. The project was initially backed by a US$50 million funding commitment, led by the MacArthur Foundation and the Sloan Foundation, who provided US$20 million and US$5 million, respectively. The additional US$25 million came from five cornerstone institutions—the Field Museum, Harvard University, the Marine Biological Laboratory, the Missouri Botanical Garden, and the Smithsonian Institution. The project was initially led by Jim Edwards and the development team by David Patterson. Today, participating institutions and individual donors continue to support EOL through financial contributions.
Overview
EOL went live on 26 February 2008 with 30,000 entries. The site immediately proved to be extremely popular, and temporarily had to revert to demonstration pages for two days when over 11 million views of it were requested.
The site relaunched on 5 September 2011 with a redesigned interface and tools. The new version – referred to as EOLv2 – was developed in response to requests from the general public, citizen scientists, educators and professional biologists for a site that was more engaging, accessible and personal. EOLv2 is redesigned to enhance usability and encourage contributions and interactions among users. It is also internationalized with interfaces provided for English, German, Spanish, French, Galician, Serbian, Macedonian, Arabic, Chinese, Korean and Ukrainian language speakers
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Select the animal.
A. Octopuses eat animals that live underwater.
B. Apple trees can grow fruit.
C. Cypress trees have green leaves.
D. Hydrangea bushes can grow colorful flowers.
Answer:
|
sciq-7101
|
multiple_choice
|
Zinc is what kind of metal?
|
[
"passive stagnant metal",
"active move metal",
"active transition metal",
"active flow metal"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
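The model described above (feasible collections of skills constrained by prerequisites, closed under union) can be sketched concretely. The skill names and prerequisite map below are hypothetical examples, not drawn from Doignon and Falmagne's work:

```python
from itertools import combinations

def feasible_states(skills, prereqs):
    """Enumerate the knowledge states allowed by a prerequisite map.

    `prereqs` maps each skill to the set of skills that must already be
    mastered before it; a state is feasible if every skill it contains
    has all its prerequisites inside the state. Illustrative model only.
    """
    skills = sorted(skills)
    states = []
    for r in range(len(skills) + 1):
        for combo in combinations(skills, r):
            state = frozenset(combo)
            if all(prereqs.get(s, set()) <= state for s in state):
                states.append(state)
    return states

# Hypothetical three-skill domain: counting -> addition -> multiplication
prereqs = {"addition": {"counting"}, "multiplication": {"addition"}}
states = feasible_states({"counting", "addition", "multiplication"}, prereqs)

# A defining property of knowledge spaces: closure under union.
assert all(a | b in set(states) for a in states for b in states)
for s in states:
    print(sorted(s))
```

For this chain of prerequisites the feasible states are the empty set, {counting}, {counting, addition}, and {counting, addition, multiplication}, and their pairwise unions are all feasible, as the assertion checks.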
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
Document 2:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithm design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the students' chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 4:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Zinc is what kind of metal?
A. passive stagnant metal
B. active move metal
C. active transition metal
D. active flow metal
Answer:
|
|
sciq-11472
|
multiple_choice
|
What is the radula made mostly of?
|
[
"chlorophyll",
"chitin",
"casein",
"schist"
] |
B
|
Relevant Documents:
Document 0:::
C
Cadaverine
Caffeine
Calciferol (Vitamin D)
Calcitonin
Calmodulin
Calreticulin
Camphor - (C10H16O)
Cannabinol - (C21H26O2)
Capsaicin
Carbohydrase
Carbohydrate
Carnitine
Carrageenan
Carotenoid
Casein
Caspase
Catecholamine
Cellulase
Cellulose - (C6H10O5)x
Cerulenin
Cetrimonium bromide (Cetrimide) - C19H42BrN
Chelerythrine
Chromomycin A3
Chaperonin
Chitin
α-Chloralose
Chlorophyll
Cholecystokinin (CCK)
Cholesterol
Choline
Chondroitin sulfate
Cinnamaldehyde
Citral
Citric acid
Citrinin
Citronellal
Citronellol
Citrulline
Cobalamin (vitamin B12)
Coenzyme
Coenzyme Q
Colchicine
Collagen
Coniine
Corticosteroid
Corti
Document 1:::
A druse is a group of crystals of calcium oxalate, silicates, or carbonates present in plants, and are thought to be a defense against herbivory due to their toxicity. Calcium oxalate (Ca(COO)2, CaOx) crystals are found in algae, angiosperms and gymnosperms in a total of more than 215 families. These plants accumulate oxalate in the range of 3–80% (w/w) of their dry weight through a biomineralization process in a variety of shapes. Araceae have numerous druses, multi-crystal druses and needle-shaped raphide crystals of CaOx present in the tissue. Druses are also found in leaves and bud scales of Prunus, Rosa, Allium, Vitis, Morus and Phaseolus.
Formation
A number of biochemical pathways for calcium oxalate biomineralization in plants have been proposed. Among these are the cleavage of isocitrate, the hydrolysis of oxaloacetate, glycolate/glyoxylate oxidation, and/or oxidative cleavage of L-ascorbic acid. The cleavage of ascorbic acid appears to be the most studied pathway. The specific mechanism controlling this process is unclear but it has been suggested that a number of factors influence crystal shape and growth, such as proteins, polysaccharides, and lipids or macromolecular membrane structures. Druses may also have some purpose in calcium regulation.
See also
Idioblast
Raphide
Phytolith
Plant defense against herbivory
Document 2:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
Document 3:::
The Dundee Society was a society of graduates of CA-400, a National Security Agency course in cryptology devised by Lambros D. Callimahos, which included the Zendian Problem (a practical exercise in traffic analysis and cryptanalysis). The class was held once a year, and new members were inducted into the Society upon completion of the class. The Society was founded in the mid-1950s and continued on after Callimahos' retirement from NSA in 1976. The last CA-400 class was held at NSA in 1979, formally closing the society's membership rolls.
The society took its name from an empty jar of Dundee Marmalade that Callimahos kept on his desk for use as a pencil caddy. Callimahos came up with the society's name while trying to schedule a luncheon for former CA-400 students at the Ft. Meade Officers' Club; being unable to use either the course name or the underlying government agency's name for security reasons, he spotted the ceramic Dundee jar and decided to use "The Dundee Society" as the cover name for the luncheon reservation. CA-400 students were presented with ceramic Dundee Marmalade jars at the close of the course as part of the induction ceremony into the Dundee Society. When Dundee switched from ceramic to glass jars, Callimahos would still present graduates with ceramic Dundee jars, but the jars were then collected back up for use in next year's induction ceremony, and members were "encouraged" to seek out Dundee jars for their own collections if they wished to have a permanent token of induction.
See also
American Cryptogram Association
National Cryptologic School
Document 4:::
The RMIT (Royal Melbourne Institute of Technology) School of Science is an Australian tertiary education school within the College of Science Engineering and Health of RMIT University. It was created in 2016 from the former schools of Applied Sciences, Computer Science and Information Technology, and Mathematical and Geospatial Sciences.
See also
RMIT University
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the radula made mostly of?
A. chlorophyll
B. chitin
C. casein
D. schist
Answer:
|
|
ai2_arc-436
|
multiple_choice
|
Which invention would a culture living above the Arctic Circle most likely develop?
|
[
"ice production",
"air conditioning",
"insulated clothing",
"irrigation canals"
] |
C
|
Relevant Documents:
Document 0:::
The School of Textile and Clothing industries (ESITH) is a Moroccan engineering school, established in 1996, that focuses on textiles and clothing. It was created in collaboration with ENSAIT and ENSISA, as a result of a public private partnership designed to grow a key sector in the Moroccan economy. The partnership was successful and has been used as a model for other schools.
ESITH is the only engineering school in Morocco that provides a comprehensive program in textile engineering with internships for students at the Canadian Group CTT. ESITH offers three programs in industrial engineering: product management, supply chain and logistics, and textile and clothing
Document 1:::
The Institut de technologie agroalimentaire (ITA) is a collegial institute specialized in agricultural technology and food production in Quebec, Canada. The institution is composed of two campuses, one in Saint-Hyacinthe and the other in La Pocatière. The institution is managed by the Ministère de l'Agriculture, des Pêcheries et de l'Alimentation du Québec (MAPAQ).
History
The origins of the ITA date back to the 19th century. The first francophone school of agriculture was founded in 1859 in Sainte-Anne-de-la-Pocatière, while the dairy school in Saint-Hyacinthe was created in 1892, the first such institution in North America.
In 1962, the Ministry of Agriculture, Fisheries and Food of Quebec (known today in French as the Ministère de l'Agriculture, des Pêcheries et de l'Alimentation, and in 1962 as the Ministère de l'Agriculture et de la Colonisation) formed the Instituts de technologie agroalimentaire. While the La Pocatière campus was an extension of the Faculty of Agronomy of Université Laval, the Saint-Hyacinthe campus was originally a dairy school founded in 1892.
Training programs
The ITA offers a total of eight CEGEP-level training programs, which lead to a Quebec Diploma of College Studies. Most programs are offered at both campuses. They include:
Gestion et technologies d'entreprise agricole
Gestion et technologies d'entreprise agricole : Profils en production animale biologique
Technologie des productions animales
Paysage et commercialisation en horticulture ornementale
Technologie de la production horticole agroenvironnementale
Technologie du génie agromécanique
Technologie des procédés et de la qualité des aliments
Techniques équines
The ITA's programs listed above allow graduates to pursue university-level studies in related fields such as agronomy, agricultural economics, agricultural engineering, food engineering, biology, food science, and landscape architecture, amongst others.
The ITA also offers one training program in equine massage therapy,
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the students' chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 3:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 4:::
The Oklahoma Energy Resources Board (abbreviated OERB) is an agency of the state of Oklahoma. Funded voluntarily by Oklahoma's oil and natural gas producers and royalty owners, the OERB conducts environmental restoration of orphaned and abandoned well sites, encourages the wise and efficient use of energy, and promotes energy education.
Unique is the OERB's funding process – though it is funded by a 0.1% assessment on oil and gas sales (not uncommon among similar agencies), the assessment is voluntary. Any producer or royalty owner may opt out of the program by requesting from the OERB (between January 1 and March 31 of each year) a refund of previously paid assessments. The OERB states that over 95% of participants remain in the program.
The Board is composed of 21 members. 7 members are appointed by the Governor of Oklahoma, 7 are appointed by the President pro tempore of the Oklahoma Senate, and 7 appointed by the Speaker of the Oklahoma House of Representatives. All members are either independent oil or natural gas producers or representatives of major oil companies that do business in Oklahoma. The Board, in turn, appoints an Executive Director to serve as the chief administrative officer of the Board.
The current board chairman is David Le Norman, Managing Partner & Founder of Reign Capital Holdings LLC.
OERB was created by the Oklahoma Legislature and energy industry leaders in 1993 during the term of Governor of Oklahoma David Walters.
Mission
The stated missions of the Oklahoma Energy Resources Board are:
to educate Oklahomans about the importance of petroleum (oil and natural gas) in their lives through traditional and non-traditional school curricula, advertising, and public relations
to environmentally restore abandoned well sites to productive land use
to promote environmentally sound production methods and technologies
to research and provide educational activities concerning the petroleum exploration and production industry
Leadership
OERB is unde
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which invention would a culture living above the Arctic Circle most likely develop?
A. ice production
B. air conditioning
C. insulated clothing
D. irrigation canals
Answer:
|
|
sciq-11483
|
multiple_choice
|
Lactic acid fermentation is common in muscle cells that have run out of what?
|
[
"nitrogen",
"oxygen",
"helium",
"carbon"
] |
B
|
Relevant Documents:
Document 0:::
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States.
Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is currently suspended with the message:
"Please check back with us in 2017".
External links
MicrobeLibrary
Microbiology
Document 1:::
In biochemistry, fermentation theory refers to the historical study of models of natural fermentation processes, especially alcoholic and lactic acid fermentation. Notable contributors to the theory include Justus Von Liebig and Louis Pasteur, the latter of whom developed a purely microbial basis for the fermentation process based on his experiments. Pasteur's work on fermentation later led to his development of the germ theory of disease, which put the concept of spontaneous generation to rest. Although the fermentation process had been used extensively throughout history prior to the origin of Pasteur's prevailing theories, the underlying biological and chemical processes were not fully understood. In the contemporary, fermentation is used in the production of various alcoholic beverages, foodstuffs, and medications.
Overview of fermentation
Fermentation is the anaerobic metabolic process that converts sugar into acids, gases, or alcohols in oxygen starved environments. Yeast and many other microbes commonly use fermentation to carry out anaerobic respiration necessary for survival. Even the human body carries out fermentation processes from time to time, such as during long-distance running; lactic acid will build up in muscles over the course of long-term exertion. Within the human body, lactic acid is the by-product of ATP-producing fermentation, which produces energy so the body can continue to exercise in situations where oxygen intake cannot be processed fast enough. Although fermentation yields less ATP than aerobic respiration, it can occur at a much higher rate. Fermentation has been used by humans consciously since around 5000 BCE, evidenced by jars recovered in the Iran Zagros Mountains area containing remnants of microbes similar those present in the wine-making process.
History
Prior to Pasteur's research on fermentation, there existed some preliminary competing notions of it. One scientist who had a substantial degree of influence on the theory o
Document 2:::
The metabolome refers to the complete set of small-molecule chemicals found within a biological sample. The biological sample can be a cell, a cellular organelle, an organ, a tissue, a tissue extract, a biofluid or an entire organism. The small molecule chemicals found in a given metabolome may include both endogenous metabolites that are naturally produced by an organism (such as amino acids, organic acids, nucleic acids, fatty acids, amines, sugars, vitamins, co-factors, pigments, antibiotics, etc.) as well as exogenous chemicals (such as drugs, environmental contaminants, food additives, toxins and other xenobiotics) that are not naturally produced by an organism.
In other words, there is both an endogenous metabolome and an exogenous metabolome. The endogenous metabolome can be further subdivided to include a "primary" and a "secondary" metabolome (particularly when referring to plant or microbial metabolomes). A primary metabolite is directly involved in the normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has important ecological function. Secondary metabolites may include pigments, antibiotics or waste products derived from partially metabolized xenobiotics. The study of the metabolome is called metabolomics.
Origins
The word metabolome appears to be a blending of the words "metabolite" and "chromosome". It was constructed to imply that metabolites are indirectly encoded by genes or act on genes and gene products. The term "metabolome" was first used in 1998 and was likely coined to match with existing biological terms referring to the complete set of genes (the genome), the complete set of proteins (the proteome) and the complete set of transcripts (the transcriptome). The first book on metabolomics was published in 2003. The first journal dedicated to metabolomics (titled simply "Metabolomics") was launched in 2005 and is currently edited by Prof. Roy Goodacre. Some of the m
Document 3:::
Lactate inflection point (LIP) is the exercise intensity at which the blood concentration of lactate and/or lactic acid begins to increase rapidly. It is often expressed as 85% of maximum heart rate or 75% of maximum oxygen intake. When exercising at or below the lactate threshold, any lactate produced by the muscles is removed by the body without it building up.
The onset of blood lactate accumulation (OBLA) is often confused with the lactate threshold. At exercise intensities above the threshold, lactate production exceeds the rate at which it can be broken down; the blood lactate concentration rises to about 4.0 mM, and lactate then accumulates in the muscle and moves into the bloodstream.
Regular endurance exercise leads to adaptations in skeletal muscle which raises the threshold at which lactate levels will rise. This is mediated via activation of the protein receptor PGC-1α, which alters the isoenzyme composition of the lactate dehydrogenase (LDH) complex and decreases the activity of lactate dehydrogenase A (LDHA), while increasing the activity of lactate dehydrogenase B (LDHB).
Training types
The lactate threshold is a useful measure for deciding exercise intensity for training and racing in endurance sports (e.g., long distance running, cycling, rowing, long distance swimming and cross country skiing), but varies between individuals and can be increased with training.
Interval training
Interval training alternates work and rest periods allowing the body to temporarily exceed the lactate threshold at a high intensity, and then recover (reduce blood-lactate). This type of training uses the ATP-PC and the lactic acid system while exercising, which provides the most energy when there are short bursts of high intensity exercise followed by a recovery period. Interval training can take the form of many different types of exercise and should closely replicate the movements found in the sport being trained for. Interval training can be
Document 4:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Lactic acid fermentation is common in muscle cells that have run out of what?
A. nitrogen
B. oxygen
C. helium
D. carbon
Answer:
|
|
sciq-6427
|
multiple_choice
|
Liquids that mix with water in all proportions are usually polar substances or substances that form these?
|
[
"hydrogen bonds",
"atmospheric bonds",
"silicon bonds",
"compressed bonds"
] |
A
|
Relevant Documents:
Document 0:::
Relative viscosity (η_rel) (a synonym of "viscosity ratio") is the ratio of the viscosity of a solution (η_solution) to the viscosity of the solvent used (η_solvent):
η_rel = η_solution / η_solvent.
The significance of relative viscosity is that it quantifies the effect a polymer has on a solution's viscosity, such as increasing it.
Liquids possess an amount of internal friction that presents itself as resistance when they are stirred. This resistance arises from the different layers of the liquid interacting with one another as they are stirred. It can be seen in liquids like syrup, which has a higher viscosity than water and exhibits more internal friction when stirred. The ratio of these viscosities is known as the relative viscosity.
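The viscosity ratio defined above is a one-line calculation; the following sketch (with hypothetical measured values, since the passage gives none) shows it in Python:

```python
def relative_viscosity(eta_solution: float, eta_solvent: float) -> float:
    """Return the relative viscosity (viscosity ratio) of a solution.

    eta_solution: dynamic viscosity of the solution (e.g., mPa·s)
    eta_solvent:  dynamic viscosity of the pure solvent, in the same units
    """
    if eta_solvent <= 0:
        raise ValueError("solvent viscosity must be positive")
    return eta_solution / eta_solvent

# Illustrative (hypothetical) values: a dilute polymer solution measured
# at 1.78 mPa·s in water (~0.89 mPa·s near room temperature).
print(relative_viscosity(1.78, 0.89))  # 2.0
```

Because both viscosities share the same units, the ratio is dimensionless; a value above 1 indicates the dissolved polymer has raised the solution's viscosity relative to the pure solvent.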
Document 1:::
A hydrophile is a molecule or other molecular entity that is attracted to water molecules and tends to be dissolved by water.
In contrast, hydrophobes are not attracted to water and may seem to be repelled by it. Hygroscopics are attracted to water, but are not dissolved by water.
Molecules
A hydrophilic molecule or portion of a molecule is one whose interactions with water and other polar substances are more thermodynamically favorable than their interactions with oil or other hydrophobic solvents. They are typically charge-polarized and capable of hydrogen bonding. This makes these molecules soluble not only in water but also in other polar solvents.
Hydrophilic molecules (and portions of molecules) can be contrasted with hydrophobic molecules (and portions of molecules). In some cases, both hydrophilic and hydrophobic properties occur in a single molecule. An example of these amphiphilic molecules is the lipids that comprise the cell membrane. Another example is soap, which has a hydrophilic head and a hydrophobic tail, allowing it to dissolve in both water and oil.
Hydrophilic and hydrophobic molecules are also known as polar molecules and nonpolar molecules, respectively. Some hydrophilic substances do not dissolve. This type of mixture is called a colloid.
An approximate rule of thumb for hydrophilicity of organic compounds is that solubility of a molecule in water is more than 1 mass % if there is at least one neutral hydrophile group per 5 carbons, or at least one electrically charged hydrophile group per 7 carbons.
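The carbon-counting rule of thumb above can be expressed as a simple heuristic; the function name and interface below are illustrative, not a standard API:

```python
def likely_water_soluble(n_carbons: int,
                         n_neutral_hydrophiles: int = 0,
                         n_charged_hydrophiles: int = 0) -> bool:
    """Heuristic from the rule of thumb: solubility above ~1 mass % is
    expected with at least one neutral hydrophilic group per 5 carbons,
    or at least one electrically charged hydrophilic group per 7 carbons.
    """
    if n_neutral_hydrophiles and n_carbons <= 5 * n_neutral_hydrophiles:
        return True
    if n_charged_hydrophiles and n_carbons <= 7 * n_charged_hydrophiles:
        return True
    return False

# Ethanol: 2 carbons, one neutral -OH group -> expected soluble
print(likely_water_soluble(2, n_neutral_hydrophiles=1))  # True
# 1-Octanol: 8 carbons, one neutral -OH group -> expected poorly soluble
print(likely_water_soluble(8, n_neutral_hydrophiles=1))  # False
```

As the passage notes, this is only an approximation: real solubility also depends on branching, hydrogen-bonding geometry, temperature, and pH.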
Hydrophilic substances (e.g. salts) can seem to attract water out of the air. Sugar is also hydrophilic, and like salt is sometimes used to draw water out of foods. Sugar sprinkled on cut fruit will "draw out the water" through hydrophilia, making the fruit mushy and wet, as in a common strawberry compote recipe.
Chemicals
Liquid hydrophilic chemicals complexed with solid chemicals can be used to optimize solubility of hydrophobic chemical
Document 2:::
Physics and Chemistry of Liquids is a peer-reviewed scientific journal that publishes experimental and theoretical research articles focused on the science of the liquid state.
The editors-in-chief are N. H. March and G. G. N. Angilella. According to the Journal Citation Reports, the journal has a 2011 impact factor of 0.603.
Scope
The journal's scope includes all types of liquids, from monatomic liquids and their mixtures, through charged liquids to molecular liquids.
Abstracting and indexing
Physics and Chemistry of Liquids is abstracted and indexed in the following databases:
GEOBASE
Chemical Abstracts Service - CASSI
PubMed - MEDLINE
Science Citation Index - Web of Science
Document 3:::
In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution.
The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible").
The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first.
The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy.
Under certain conditions, the concentration of the solute can exceed its usual solubility limit. The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears.
The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property de
Document 4:::
Interface and colloid science is an interdisciplinary intersection of branches of chemistry, physics, nanoscience and other fields dealing with colloids, heterogeneous systems consisting of a mechanical mixture of particles between 1 nm and 1000 nm dispersed in a continuous medium. A colloidal solution is a heterogeneous mixture in which the particle size of the substance is intermediate between a true solution and a suspension, i.e. between 1–1000 nm. Smoke from a fire is an example of a colloidal system in which tiny particles of solid float in air. Just like true solutions, colloidal particles are small and cannot be seen by the naked eye. They easily pass through filter paper. But colloidal particles are big enough to be blocked by parchment paper or animal membrane.
Interface and colloid science has applications and ramifications in the chemical industry, pharmaceuticals, biotechnology, ceramics, minerals, nanotechnology, and microfluidics, among others.
There are many books dedicated to this scientific discipline, and there is a glossary of terms, Nomenclature in Dispersion Science and Technology, published by the US National Institute of Standards and Technology.
See also
Interface (matter)
Electrokinetic phenomena
Surface science
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Liquids that mix with water in all proportions are usually polar substances or substances that form these?
A. hydrogen bonds
B. atmospheric bonds
C. silicon bonds
D. compressed bonds
Answer:
|
|
scienceQA-4027
|
multiple_choice
|
What do these two changes have in common?
rust forming on a metal gate
plants making food from sunlight, air, and water
|
[
"Both are caused by cooling.",
"Both are caused by heating.",
"Both are chemical changes.",
"Both are only physical changes."
] |
C
|
Step 1: Think about each change.
Rust forming on a metal gate is a chemical change. As the gate rusts, the metal turns into a different type of matter called rust. Rust is reddish-brown and falls apart easily.
Plants making food is a chemical change. Plants use energy from sunlight to change air and water into food. The food is sugar. Sugar is a different type of matter than air or water.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are chemical changes. They are not physical changes.
Both are chemical changes.
Both changes are chemical changes. The type of matter before and after each change is different.
Both are caused by heating.
Neither change is caused by heating.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relavent Documents:
Document 0:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
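The expected answer ("decreases") can be checked numerically from the reversible adiabatic relation T·V^(γ−1) = const for an ideal gas. This is a minimal sketch; the heat-capacity ratio γ = 1.4 (a diatomic gas) is chosen purely for illustration:

```python
# Sketch: check the adiabatic-expansion answer numerically.
# For a reversible adiabatic process in an ideal gas, T * V**(gamma - 1)
# is constant, so T2 = T1 * (V1 / V2)**(gamma - 1).
# gamma = 1.4 (diatomic gas) is an illustrative assumption.

def adiabatic_final_temperature(t1_kelvin, v1, v2, gamma=1.4):
    """Final temperature after a reversible adiabatic volume change."""
    return t1_kelvin * (v1 / v2) ** (gamma - 1)

t1 = 300.0                                       # initial temperature, K
t2 = adiabatic_final_temperature(t1, 1.0, 2.0)   # gas doubles in volume
# Expansion (V2 > V1) makes (V1/V2)**(gamma-1) < 1, so T2 < T1.
```

Because V2 > V1 and γ > 1, the factor (V1/V2)^(γ−1) is below one, so the temperature always drops on expansion.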
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm.
For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of sorption process the isotherms cannot be determined explicitly by calculation, but must be recorded experimentally for each product.
The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system.
BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption and finally leading to capillary condensation, explaining these materials' high moisture capacity at high relative humidity.
Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the
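The BET calculation referenced in this passage can be sketched with the standard two-parameter BET equation, v = v_m·c·x / ((1 − x)(1 − x + c·x)) with x = p/p0. The monolayer capacity v_m and energy constant c below are illustrative values, not taken from the passage:

```python
# Sketch of the two-parameter BET isotherm: amount adsorbed v as a
# function of relative pressure x = p / p0.  v_m (monolayer capacity)
# and c (BET energy constant) are illustrative values only.

def bet_loading(x, v_m=1.0, c=50.0):
    """BET equation: v = v_m * c * x / ((1 - x) * (1 - x + c * x))."""
    return v_m * c * x / ((1.0 - x) * (1.0 - x + c * x))

# Loading grows monotonically with relative pressure and diverges as
# x -> 1, reflecting multilayer build-up and capillary condensation.
low, mid, high = bet_loading(0.1), bet_loading(0.5), bet_loading(0.9)
```

The steep rise near x = 1 is what gives highly porous materials their large moisture capacity at high relative humidity, as the passage notes.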
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 4:::
In physics, a dynamical system is said to be mixing if the phase space of the system becomes strongly intertwined, according to at least one of several mathematical definitions. For example, a measure-preserving transformation T is said to be strong mixing if

lim (n → ∞) μ(T⁻ⁿA ∩ B) = μ(A) μ(B)

whenever A and B are any measurable sets and μ is the associated measure. Other definitions are possible, including weak mixing and topological mixing.
The mathematical definition of mixing is meant to capture the notion of physical mixing. A canonical example is the Cuba libre: suppose one is adding rum (the set A) to a glass of cola. After stirring the glass, the bottom half of the glass (the set B) will contain rum, and it will be in equal proportion as it is elsewhere in the glass. The mixing is uniform: no matter which region B one looks at, some of A will be in that region. A far more detailed, but still informal description of mixing can be found in the article on mixing (mathematics).
Every mixing transformation is ergodic, but there are ergodic transformations which are not mixing.
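The strong-mixing condition μ(T⁻ⁿA ∩ B) → μ(A)μ(B) can be illustrated numerically with the doubling map T(x) = 2x mod 1 on [0, 1), a standard strongly mixing transformation for Lebesgue measure. The interval choices and sample count below are arbitrary illustrative values:

```python
import random

# Sketch: numerically illustrate strong mixing for the doubling map
# T(x) = 2x mod 1 on [0, 1) with Lebesgue measure.  For measurable
# sets A and B we expect mu(T^-n A  intersect  B) -> mu(A) * mu(B).

def doubling_map(x, n):
    """Apply T(x) = 2x mod 1 a total of n times."""
    for _ in range(n):
        x = (2.0 * x) % 1.0
    return x

def mixing_estimate(n, samples=100_000, seed=0):
    """Monte Carlo estimate of mu({x in B : T^n(x) in A})."""
    rng = random.Random(seed)
    a_lo, a_hi = 0.0, 0.25   # A = [0, 0.25), mu(A) = 0.25
    b_lo, b_hi = 0.0, 0.50   # B = [0, 0.50), mu(B) = 0.50
    hits = 0
    for _ in range(samples):
        x = rng.random()
        if b_lo <= x < b_hi and a_lo <= doubling_map(x, n) < a_hi:
            hits += 1
    return hits / samples

estimate = mixing_estimate(n=20)   # approaches 0.25 * 0.50 = 0.125
```

After 20 iterations the membership of T^n(x) in A depends on binary digits of x far beyond the one that decides membership in B, so the estimate settles near the product of the measures.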
Physical mixing
The mixing of gases or liquids is a complex physical process, governed by a convective diffusion equation that may involve non-Fickian diffusion as in spinodal decomposition. The convective portion of the governing equation contains fluid motion terms that are governed by the Navier–Stokes equations. When fluid properties such as viscosity depend on composition, the governing equations may be coupled. There may also be temperature effects. It is not clear that fluid mixing processes are mixing in the mathematical sense.
Small rigid objects (such as rocks) are sometimes mixed in a rotating drum or tumbler. The 1969 Selective Service draft lottery was carried out by mixing plastic capsules which contained a slip of paper (marked with a day of the year).
See also
Miscibility
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
rust forming on a metal gate
plants making food from sunlight, air, and water
A. Both are caused by cooling.
B. Both are caused by heating.
C. Both are chemical changes.
D. Both are only physical changes.
Answer:
|
sciq-523
|
multiple_choice
|
What is the greatest contribution of arthropods to human food supply?
|
[
"reproduction",
"pollination",
"hibernation",
"vegetation"
] |
B
|
Relavent Documents:
Document 0:::
Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food.
Classification of consumer types
The standard categorization
Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores; omnivores eat both meat and plants. An extensive classification of consumer categories based on a list of feeding behaviors exists.
The Getz categorization
Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken down plant and animal) matter. It also distinguishes between consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage.
In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology. Thus a bestivore, such as a cat, preys on live animal
Document 1:::
The soil food web is the community of organisms living all or part of their lives in the soil. It describes a complex living system in the soil and how it interacts with the environment, plants, and animals.
Food webs describe the transfer of energy between species in an ecosystem. While a food chain examines one, linear, energy pathway through an ecosystem, a food web is more complex and illustrates all of the potential pathways. Much of this transferred energy comes from the sun. Plants use the sun’s energy to convert inorganic compounds into energy-rich, organic compounds, turning carbon dioxide and minerals into plant material by photosynthesis. Plant flowers exude energy-rich nectar above ground and plant roots exude acids, sugars, and ectoenzymes into the rhizosphere, adjusting the pH and feeding the food web underground.
Plants are called autotrophs because they make their own energy; they are also called producers because they produce energy available for other organisms to eat. Heterotrophs are consumers that cannot make their own food. In order to obtain energy they eat plants or other heterotrophs.
Above ground food webs
In above ground food webs, energy moves from producers (plants) to primary consumers (herbivores) and then to secondary consumers (predators). The phrase, trophic level, refers to the different levels or steps in the energy pathway. In other words, the producers, consumers, and decomposers are the main trophic levels. This chain of energy transferring from one species to another can continue several more times, but eventually ends. At the end of the food chain, decomposers such as bacteria and fungi break down dead plant and animal material into simple nutrients.
Methodology
The nature of soil makes direct observation of food webs difficult. Since soil organisms range in size from less than 0.1 mm (nematodes) to greater than 2 mm (earthworms) there are many different ways to extract them. Soil samples are often taken using a metal
Document 2:::
The trophic level of an organism is the position it occupies in a food web. A food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finish with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a food "web". Ecological communities with higher biodiversity form more complex trophic paths.
The word trophic derives from the Greek τροφή (trophē) referring to food or nourishment.
History
The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman).
Overview
The three basic ways in which organisms get food are as producers, consumers, and decomposers.
Producers (autotrophs) are typically plants or algae. Plants and algae do not usually eat other organisms, but pull nutrients from the soil or the ocean and manufacture their own food using photosynthesis. For this reason, they are called primary producers. In this way, it is energy from the sun that usually powers the base of the food chain. An exception occurs in deep-sea hydrothermal ecosystems, where there is no sunlight. Here primary producers manufacture food through a process called chemosynthesis.
Consumers (heterotrophs) are species that cannot manufacture their own food and need to consume other organisms. Animals that eat primary producers (like plants) are called herbivores. Animals that eat other animals are called carnivores, and animals that eat both plants and other animals are called omnivores.
Decomposers (detritivores) break down dead plant and animal material and wastes and release it again as energy and nutrients into
Document 3:::
Roshd Biological Education is a quarterly science educational magazine covering recent developments in biology and biology education, aimed at a Persian-speaking audience of biology teachers. Founded in 1985, it is published by The Teaching Aids Publication Bureau, Organization for Educational Planning and Research, Ministry of Education, Iran. Roshd Biological Education has an editorial board composed of Iranian biologists, experts in biology education, science journalists and biology teachers.
It is read by both biology teachers and students, as a way of launching innovations and new trends in biology education, and helping biology teachers to teach biology in better and more effective ways.
Magazine layout
As of Autumn 2012, the magazine is laid out as follows:
Editorial—often offering a point of view from the editor in chief on educational and/or biological topics.
Explore— New research methods and results on biology and/or education.
World— Reports and explores on biological education worldwide.
In Brief—Summaries of research news and discoveries.
Trends—showing how new technology is altering the way we live our lives.
Point of View—Offering personal commentaries on contemporary topics.
Essay or Interview—often with a pioneer of a biological and/or educational researcher or an influential scientific educational leader.
Muslim Biologists—Short histories of Muslim Biologists.
Environment—An article on Iranian environment and its problems.
News and Reports—Offering short news and reports events on biology education.
In Brief—Short articles explaining interesting facts.
Questions and Answers—Questions about biology concepts and their answers.
Book and periodical Reviews—About new publication on biology and/or education.
Reactions—Letter to the editors.
Editorial staff
Mohammad Karamudini, editor in chief
History
Roshd Biological Education started in 1985 together with many other magazines in other science and art. The first editor was Dr. Nouri-Dalooi, th
Document 4:::
Douglas Joel Futuyma (born 24 April 1942) is an American evolutionary biologist. He is a Distinguished Professor in the Department of Ecology and Evolution at Stony Brook University in Stony Brook, New York and a Research Associate on staff at the American Museum of Natural History in New York City. His research focuses on speciation and population biology. Futuyma is the author of a widely used undergraduate textbook on evolution and is also known for his work in public outreach, particularly in advocating against creationism.
Education
Futuyma graduated with a B.S. from Cornell University. He received his M.S. in 1966 and his Ph.D. in zoology in 1969, both from the University of Michigan, Ann Arbor.
Academic career
Futuyma began his career in the Department of Ecology and Evolution at Stony Brook University in 1969 and was appointed Distinguished Professor in 2001. He served as the chair of the Department of Ecology and Evolutionary Biology at University of Michigan, Ann Arbor from 2002-2003 and as the Lawrence B. Slobodkin Collegiate Professor in that department from 2003-2004 before returning to Stony Brook in 2004.
Futuyma served as the president of the Society for the Study of Evolution in 1987, of the American Society of Naturalists in 1994, and of the American Institute of Biological Sciences in 2008. He has served as the editor of the scientific journals Evolution and Annual Review of Ecology, Evolution, and Systematics.
Research
Futuyma's research examines speciation and population biology, particularly the evolutionary interactions between herbivorous insects and their plant hosts and the implications for evolution of host specificity.
Teaching and outreach
Futuyma is well known for his success in teaching and public outreach. He is the author of several textbooks, most notably the very widely used authoritative text Evolutionary Biology (in its third edition, published 1998) and a simplified version targeted explicitly to undergraduates, Evolution (
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the greatest contribution of arthropods to human food supply?
A. reproduction
B. pollination
C. hibernation
D. vegetation
Answer:
|
|
sciq-7188
|
multiple_choice
|
What measurement of the wave is used to determine the magnitude of an earthquake?
|
[
"diameter",
"width",
"height",
"length"
] |
C
|
Relavent Documents:
Document 0:::
The moment magnitude scale (MMS; denoted explicitly with Mw, and generally implied with use of a single M for magnitude) is a measure of an earthquake's magnitude ("size" or strength) based on its seismic moment. It was defined in a 1979 paper by Thomas C. Hanks and Hiroo Kanamori. Similar to the local magnitude/Richter scale (ML) defined by Charles Francis Richter in 1935, it uses a logarithmic scale; small earthquakes have approximately the same magnitudes on both scales. Despite the difference, news media often say "Richter scale" when referring to the moment magnitude scale.
Moment magnitude (Mw) is considered the authoritative magnitude scale for ranking earthquakes by size. It is more directly related to the energy of an earthquake than other scales, and does not saturate; that is, it does not underestimate magnitudes as other scales do in certain conditions. It has become the standard scale used by seismological authorities like the U.S. Geological Survey for reporting large earthquakes (typically M > 4), replacing the local magnitude (ML) and surface wave magnitude (Ms) scales. Subtypes of the moment magnitude scale reflect different ways of estimating the seismic moment.
History
Richter scale: the original measure of earthquake magnitude
At the beginning of the twentieth century, very little was known about how earthquakes happen, how seismic waves are generated and propagate through the Earth's crust, and what information they carry about the earthquake rupture process; the first magnitude scales were therefore empirical. The initial step in determining earthquake magnitudes empirically came in 1931 when the Japanese seismologist Kiyoo Wadati showed that the maximum amplitude of an earthquake's seismic waves diminished with distance at a certain rate. Charles F. Richter then worked out how to adjust for epicentral distance (and some other factors) so that the logarithm of the amplitude of the seismograph trace could be used as a measure of "magnit
Document 1:::
Seismic moment is a quantity used by seismologists to measure the size of an earthquake. The scalar seismic moment M₀ is defined by the equation
M₀ = μAD, where
μ is the shear modulus of the rocks involved in the earthquake (in pascals (Pa), i.e. newtons per square meter),
A is the area of the rupture along the geologic fault where the earthquake occurred (in square meters), and
D is the average slip (displacement offset between the two sides of the fault) on A (in meters).
M₀ thus has dimensions of torque, measured in newton meters. The connection between seismic moment and a torque is natural in the body-force equivalent representation of seismic sources as a double-couple (a pair of force couples with opposite torques): the seismic moment is the torque of each of the two couples. Despite having the same dimensions as energy, seismic moment is not a measure of energy. The relations between seismic moment, potential energy drop and radiated energy are indirect and approximative.
The seismic moment of an earthquake is typically estimated using whatever information is available to constrain its factors. For modern earthquakes, moment is usually estimated from ground motion recordings of earthquakes known as seismograms. For earthquakes that occurred in times before modern instruments were available, moment may be estimated from geologic estimates of the size of the fault rupture and the slip.
Seismic moment is the basis of the moment magnitude scale introduced by Hiroo Kanamori, which is often used to compare the size of different earthquakes and is especially useful for comparing the sizes of large (great) earthquakes.
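A minimal sketch of the scalar seismic moment, together with the standard Hanks-Kanamori conversion to moment magnitude, Mw = (2/3)(log10 M0 − 9.1) for M0 in newton meters. The fault parameters below are illustrative, not taken from any real event:

```python
import math

# Sketch: scalar seismic moment M0 = mu * A * D and its conversion to
# moment magnitude, Mw = (2/3) * (log10(M0) - 9.1) for M0 in newton
# meters (the standard Hanks-Kanamori relation).  The fault parameters
# below are illustrative values only.

def seismic_moment(shear_modulus_pa, rupture_area_m2, avg_slip_m):
    """M0 = mu * A * D, in newton meters."""
    return shear_modulus_pa * rupture_area_m2 * avg_slip_m

def moment_magnitude(m0_newton_meters):
    """Hanks-Kanamori moment magnitude from seismic moment in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# Example: mu = 30 GPa, 20 km x 10 km rupture, 1 m average slip.
m0 = seismic_moment(30e9, 20e3 * 10e3, 1.0)   # 6e18 N*m
mw = moment_magnitude(m0)                      # roughly Mw 6.5
```

Because the conversion is logarithmic, an order-of-magnitude error in any single fault parameter shifts Mw by only about 0.67 units.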
The seismic moment is not restricted to earthquakes. For a more general seismic source described by a seismic moment tensor (a symmetric tensor, but not necessarily a double couple tensor), the scalar seismic moment is obtained from the Euclidean norm of the tensor.
See also
Richter magnitude scale
Moment magnitude scale
Document 2:::
The Richter scale, also called the Richter magnitude scale, Richter's magnitude scale, and the Gutenberg–Richter scale, is a measure of the strength of earthquakes, developed by Charles Francis Richter and presented in his landmark 1935 paper, where he called it the "magnitude scale". This was later revised and renamed the local magnitude scale, denoted as ML.
Because of various shortcomings of the original scale, most seismological authorities now use other similar scales such as the moment magnitude scale (Mw) to report earthquake magnitudes, but much of the news media still erroneously refers to these as "Richter" magnitudes. All magnitude scales retain the logarithmic character of the original and are scaled to have roughly comparable numeric values (typically in the middle of the scale). Because the scale is logarithmic, each whole-number step corresponds to a tenfold change in measured wave amplitude; a magnitude 5 quake, for example, produces seismic waves with 100 times the amplitude of a magnitude 3 quake.
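The logarithmic spacing can be made concrete: each whole unit of magnitude multiplies the measured amplitude by 10 and the radiated energy by roughly 10^1.5 ≈ 31.6 (the Gutenberg-Richter energy scaling). A minimal sketch:

```python
# Sketch of the logarithmic scaling of earthquake magnitude scales:
# a difference of delta_m magnitude units corresponds to a factor of
# 10**delta_m in measured amplitude and roughly 10**(1.5 * delta_m)
# in radiated energy (Gutenberg-Richter scaling).

def amplitude_ratio(delta_m):
    """Amplitude factor between two quakes differing by delta_m units."""
    return 10.0 ** delta_m

def energy_ratio(delta_m):
    """Approximate radiated-energy factor for a delta_m difference."""
    return 10.0 ** (1.5 * delta_m)

# One whole unit: 10x the amplitude, ~31.6x the energy.
amp_per_unit = amplitude_ratio(1.0)
energy_per_unit = energy_ratio(1.0)
# Magnitude 5 vs magnitude 3: 100x the amplitude.
amp_5_vs_3 = amplitude_ratio(2.0)
```

The same arithmetic explains the passage's later claim that a 0.2-unit increase roughly doubles the energy, since 10^(1.5 × 0.2) ≈ 2.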
Richter magnitudes
The Richter magnitude of an earthquake is determined from the logarithm of the amplitude of waves recorded by seismographs (adjustments are included to compensate for the variation in the distance between the various seismographs and the epicenter of the earthquake). The original formula is:

ML = log₁₀(A) − log₁₀(A₀(δ))

where A is the maximum excursion of the Wood-Anderson seismograph and the empirical function A₀ depends only on the epicentral distance δ of the station. In practice, readings from all observing stations are averaged after adjustment with station-specific corrections to obtain the ML value.
Because of the logarithmic basis of the scale, each whole number increase in magnitude represents a tenfold increase in measured amplitude; in terms of energy, each whole number increase corresponds to an increase of about 31.6 times the amount of energy released, and each increase of 0.2 corresponds to approximately a doubling of the energy rel
Document 3:::
The Human-Induced Earthquake Database (HiQuake) is an online database that documents all reported cases of induced seismicity proposed on scientific grounds. It is the most complete compilation of its kind and is freely available to download via the associated website. The database is periodically updated to correct errors, revise existing entries, and add new entries reported in new scientific papers and reports. Suggestions for revisions and new entries can be made via the associated website.
History
In 2016, Nederlandse Aardolie Maatschappij funded a team of researchers from Durham University and Newcastle University to conduct a full review of induced seismicity. This review formed part of a scientific workshop aimed at estimating the maximum possible magnitude earthquake that might be induced by conventional gas production in the Groningen gas field.
The resulting database from the review was publicly released online on the 26 January 2017. The database was accompanied by the publication of two scientific papers, the more detailed of which is freely available online.
Document 4:::
Energy class – also called energy class K or K-class, and denoted by K (from the Russian класс) – is a measure of the force or magnitude of local and regional earthquakes used in countries of the former Soviet Union, and in Cuba and Mongolia. K is nominally the logarithm of seismic energy (in joules) radiated by an earthquake, as expressed in the formula K = log₁₀ E_S. Values of K in the range of 12 to 15 correspond approximately to the range of 4.5 to 6 in other magnitude scales; a magnitude 6.0 quake will register between 13 and 14.5 on various K-class scales.
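Since K is the base-10 logarithm of radiated seismic energy, the quoted K ranges can be cross-checked against magnitude using the Gutenberg-Richter energy relation log10(E_S) ≈ 1.5·M + 4.8 (E_S in joules). Note that this empirical relation is an assumption brought in from outside the passage:

```python
# Sketch: cross-check energy class K against magnitude M via the
# assumed Gutenberg-Richter energy relation log10(E_S) = 1.5*M + 4.8
# (E_S in joules).  Since K = log10(E_S), K(M) = 1.5*M + 4.8.

def k_class_from_magnitude(m):
    """Approximate energy class K for magnitude M (assumed relation)."""
    return 1.5 * m + 4.8

k_at_m6 = k_class_from_magnitude(6.0)    # 13.8
k_at_m45 = k_class_from_magnitude(4.5)   # 11.55
```

The M = 6.0 value lands inside the 13 to 14.5 band quoted above, so the stated correspondence is roughly consistent with this empirical relation.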
The energy class system was developed by seismologists of the Soviet Tadzhikskaya Complex [Interdisciplinary] Seismological Expedition established in the remote Garm (Tajikistan) region of Central Asia in 1954 after several devastating earthquakes in that area.
The Garm region is one of the most seismically active regions of the former Soviet Union, with up to 5,000 earthquakes per year. The volume of processing needed, and the rudimentary state of seismological equipment and methods at that time, led the expedition workers to develop new equipment and methods. V. I. Bune is credited with developing a scale based on an earthquake's seismic energy, although S. L. Solov'ev seems to have made major contributions. (In contrast to the "Richter" and other magnitude scales developed by Western seismologists, which estimate the magnitude from the amplitude of some portion of the seismic waves generated, an indirect measure of seismic energy.)
However, proper estimation of E_S requires more sophisticated tools than were available at the time, and Bune's method was unworkable. A more practical revision was presented by T. G. Rautian in 1958 and 1960; by 1961 K-class was being used across the USSR. A key change was to estimate E_S on the basis of peak amplitude of the seismic waves – particularly, the sum of maximum P-wave and maximum S-wave – within the first three seconds. As a result, K-class became a kind of local magn
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What measurement of the wave is used to determine the magnitude of an earthquake?
A. diameter
B. width
C. height
D. length
Answer:
|
|
sciq-3876
|
multiple_choice
|
Survivorship curves show the distribution of individuals in a population according to what metric?
|
[
"height",
"birth rate",
"weight",
"age"
] |
D
|
Relevant Documents:
Document 0:::
A survivorship curve is a graph showing the number or proportion of individuals surviving to each age for a given species or group (e.g. males or females). Survivorship curves can be constructed for a given cohort (a group of individuals of roughly the same age) based on a life table.
There are three generalized types of survivorship curves:
Type I or convex curves are characterized by high age-specific survival probability in early and middle life, followed by a rapid decline in survival in later life. They are typical of species that produce few offspring but care for them well, including humans and many other large mammals.
Type II or diagonal curves are an intermediate between Types I and III, where roughly constant mortality rate/survival probability is experienced regardless of age. Some birds and some lizards follow this pattern.
Type III or concave curves have the greatest mortality (lowest age-specific survival) early in life, with relatively low rates of death (high probability of survival) for those surviving this bottleneck. This type of curve is characteristic of species that produce a large number of offspring (see r/K selection theory). This includes most marine invertebrates. For example, oysters produce millions of eggs, but most larvae die from predation or other causes; those that survive long enough to produce a hard shell live relatively long.
The number or proportion of organisms surviving to any age is plotted on the y-axis (generally with a logarithmic scale starting with 1000 individuals), while their age (often as a proportion of maximum life span) is plotted on the x-axis.
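The three generalized curve shapes described above can be illustrated with toy survival functions. The parameter choices here are purely illustrative assumptions, not fitted to any real species:

```python
import math

def survivors(curve_type: str, age_fraction: float, n0: int = 1000) -> float:
    # Toy models of the three survivorship curve types, starting from
    # n0 individuals (the text's conventional 1000) at age_fraction = 0.
    if curve_type == "I":    # low early mortality, steep decline late in life
        return n0 * (1.0 - age_fraction ** 5)
    if curve_type == "II":   # constant mortality rate -> exponential decay
        return n0 * math.exp(-3.0 * age_fraction)
    if curve_type == "III":  # heavy mortality early in life
        return n0 * math.exp(-30.0 * age_fraction)
    raise ValueError(f"unknown curve type: {curve_type}")
```

At mid-life (age_fraction = 0.5) the Type I population remains largest and Type III smallest, reproducing the qualitative ordering of the three curves.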
In mathematical statistics, the survival function is one specific form of survivorship curve and plays a basic part in survival analysis.
There are various reasons that a species exhibits their particular survivorship curve, but one contributor can be environmental factors that decrease survival. For example, an outside element that is nondiscriminatory in the ag
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
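The conceptual question above can be checked quantitatively with the reversible adiabatic relation T·V^(γ−1) = const for an ideal gas. The sketch below assumes a monatomic gas (γ = 5/3); expansion lowers the temperature, so "decreases" is the expected answer:

```python
def adiabatic_final_temp(t_initial: float, v_initial: float,
                         v_final: float, gamma: float = 5.0 / 3.0) -> float:
    # Reversible adiabatic (isentropic) ideal-gas relation:
    # T * V**(gamma - 1) = constant, so expansion (v_final > v_initial)
    # lowers the temperature.
    return t_initial * (v_initial / v_final) ** (gamma - 1.0)
```

Doubling the volume of a monatomic gas starting at 300 K brings it to roughly 189 K under this relation.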
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Advanced Placement (AP) Calculus (also known as AP Calc, Calc AB / Calc BC or simply AB / BC) is a set of two distinct Advanced Placement calculus courses and exams offered by the American nonprofit organization College Board. AP Calculus AB covers basic introductions to limits, derivatives, and integrals. AP Calculus BC covers all AP Calculus AB topics plus additional topics (including integration by parts, Taylor series, parametric equations, vector calculus, and polar coordinate functions).
AP Calculus AB
AP Calculus AB is an Advanced Placement calculus course. It is traditionally taken after precalculus and is the first calculus course offered at most schools except for possibly a regular calculus class. The Pre-Advanced Placement pathway for math helps prepare students for further Advanced Placement classes and exams.
Purpose
According to the College Board:
Topic outline
The material includes the study and application of differentiation and integration, and graphical analysis including limits, asymptotes, and continuity. An AP Calculus AB course is typically equivalent to one semester of college calculus.
Analysis of graphs (predicting and explaining behavior)
Limits of functions (one and two sided)
Asymptotic and unbounded behavior
Continuity
Derivatives
Concept
At a point
As a function
Applications
Higher order derivatives
Techniques
Integrals
Interpretations
Properties
Applications
Techniques
Numerical approximations
Fundamental theorem of calculus
Antidifferentiation
L'Hôpital's rule
Separable differential equations
AP Calculus BC
AP Calculus BC is equivalent to a full year regular college course, covering both Calculus I and II. After passing the exam, students may move on to Calculus III (Multivariable Calculus).
Purpose
According to the College Board,
Topic outline
AP Calculus BC includes all of the topics covered in AP Calculus AB, as well as the following:
Convergence tests for series
Taylor series
Parametric equations
Polar functions (inclu
Document 3:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover )
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, )
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson )
See also
Glossary of biology
A.P Bio (TV Show)
Document 4:::
The Guidelines for Assessment and Instruction in Statistics Education (GAISE) are a framework for statistics education in grades Pre-K–12 published by the American Statistical Association (ASA) in 2007. The foundations for this framework are the Principles and Standards for School Mathematics published by the National Council of Teachers of Mathematics (NCTM) in 2000. A second report focused on statistics education at the collegiate level, the GAISE College Report, was published in 2005. Both reports were endorsed by the ASA. Several grants awarded by the National Science Foundation explicitly reference the GAISE documents as influencing or guiding the projects, and several popular introductory statistics textbooks have cited the GAISE documents as informing their approach.
The GAISE Report (pre-K–12)
The GAISE document provides a two-dimensional framework, specifying four components used in statistical problem solving (formulating questions, collecting data, analyzing data, and interpreting results) and three levels of conceptual understanding through which a student should progress (Levels A, B, and C). A direct parallel between these conceptual levels and grade levels is not made because most students would begin at Level A when they are first exposed to statistics regardless of whether they are in primary, middle, or secondary school. A student's level of statistical maturity is based on experience rather than age.
The GAISE College Report
The GAISE College Report begins by synthesizing the history and current understanding of introductory statistics courses and then lists goals for students based on statistical literacy. Six recommendations for introductory statistics courses are given, namely:
Emphasize statistical thinking and literacy over other outcomes
Use real data where possible
Emphasize conceptual rather than procedural understanding
Take an active learning approach
Analyze data using technology rather than by hand
Focus on supporting student
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Survivorship curves show the distribution of individuals in a population according to what metric?
A. height
B. birth rate
C. weight
D. age
Answer:
|
|
sciq-404
|
multiple_choice
|
What is composed of two strands of nucleotides in a double-helical structure?
|
[
"molecule",
"bacteria",
"RNA",
"dna"
] |
D
|
Relevant Documents:
Document 0:::
Biomolecular structure is the intricate folded, three-dimensional shape that is formed by a molecule of protein, DNA, or RNA, and that is important to its function. The structure of these molecules may be considered at any of several length scales ranging from the level of individual atoms to the relationships among entire protein subunits. This useful distinction among scales is often expressed as a decomposition of molecular structure into four levels: primary, secondary, tertiary, and quaternary. The scaffold for this multiscale organization of the molecule arises at the secondary level, where the fundamental structural elements are the molecule's various hydrogen bonds. This leads to several recognizable domains of protein structure and nucleic acid structure, including such secondary-structure features as alpha helixes and beta sheets for proteins, and hairpin loops, bulges, and internal loops for nucleic acids.
The terms primary, secondary, tertiary, and quaternary structure were introduced by Kaj Ulrik Linderstrøm-Lang in his 1951 Lane Medical Lectures at Stanford University.
Primary structure
The primary structure of a biopolymer is the exact specification of its atomic composition and the chemical bonds connecting those atoms (including stereochemistry). For a typical unbranched, un-crosslinked biopolymer (such as a molecule of a typical intracellular protein, or of DNA or RNA), the primary structure is equivalent to specifying the sequence of its monomeric subunits, such as amino acids or nucleotides.
The primary structure of a protein is reported starting from the amino N-terminus to the carboxyl C-terminus, while the primary structure of a DNA or RNA molecule is known as the nucleic acid sequence, reported from the 5' end to the 3' end.
The nucleic acid sequence refers to the exact sequence of nucleotides that comprise the whole molecule. Often, the primary structure encodes sequence motifs that are of functional importance. Some examples of such motif
Document 1:::
What Is Life? The Physical Aspect of the Living Cell is a 1944 science book written for the lay reader by physicist Erwin Schrödinger. The book was based on a course of public lectures delivered by Schrödinger in February 1943, under the auspices of the Dublin Institute for Advanced Studies, where he was Director of Theoretical Physics, at Trinity College, Dublin. The lectures attracted an audience of about 400, who were warned "that the subject-matter was a difficult one and that the lectures could not be termed popular, even though the physicist’s most dreaded weapon, mathematical deduction, would hardly be utilized." Schrödinger's lecture focused on one important question: "how can the events in space and time which take place within the spatial boundary of a living organism be accounted for by physics and chemistry?"
In the book, Schrödinger introduced the idea of an "aperiodic crystal" that contained genetic information in its configuration of covalent chemical bonds. In the 1950s, this idea stimulated enthusiasm for discovering the chemical basis of genetic inheritance. Although the existence of some form of hereditary information had been hypothesized since 1869, its role in reproduction and its helical shape were still unknown at the time of Schrödinger's lecture. In retrospect, Schrödinger's aperiodic crystal can be viewed as a well-reasoned theoretical prediction of what biologists should have been looking for during their search for genetic material. In 1953, James D. Watson and Francis Crick jointly proposed the double helix structure of deoxyribonucleic acid (DNA) on the basis of, amongst other theoretical insights, X-ray diffraction experiments conducted by Rosalind Franklin. They both credited Schrödinger's book with presenting an early theoretical description of how the storage of genetic information would work, and each independently acknowledged the book as a source of inspiration for their initial researches.
Background
The book, published i
Document 2:::
A nucleic acid sequence is a succession of bases within the nucleotides forming alleles within a DNA (using GACT) or RNA (GACU) molecule. This succession is denoted by a series of a set of five different letters that indicate the order of the nucleotides. By convention, sequences are usually presented from the 5' end to the 3' end. For DNA, with its double helix, there are two possible directions for the notated sequence; of these two, the sense strand is used. Because nucleic acids are normally linear (unbranched) polymers, specifying the sequence is equivalent to defining the covalent structure of the entire molecule. For this reason, the nucleic acid sequence is also termed the primary structure.
The sequence represents biological information. Biological deoxyribonucleic acid represents the information which directs the functions of an organism.
Nucleic acids also have a secondary structure and tertiary structure. Primary structure is sometimes mistakenly referred to as "primary sequence". However there is no parallel concept of secondary or tertiary sequence.
Nucleotides
Nucleic acids consist of a chain of linked units called nucleotides. Each nucleotide consists of three subunits: a phosphate group and a sugar (ribose in the case of RNA, deoxyribose in DNA) make up the backbone of the nucleic acid strand, and attached to the sugar is one of a set of nucleobases. The nucleobases are important in base pairing of strands to form higher-level secondary and tertiary structures such as the famed double helix.
The possible letters are A, C, G, and T, representing the four nucleotide bases of a DNA strand – adenine, cytosine, guanine, thymine – covalently linked to a phosphodiester backbone. In the typical case, the sequences are printed abutting one another without gaps, as in the sequence AAAGTCTGAC, read left to right in the 5' to 3' direction. With regards to transcription, a sequence is on the coding strand if it has the same order as the transcribed RNA.
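The base-pairing rules and the 5'-to-3' convention described above can be sketched in code. This is a minimal illustration (the `reverse_complement` helper is hypothetical, not from the source): because the two strands of the double helix are antiparallel, the partner strand read 5' to 3' is the complement of the original sequence, reversed.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    # Complement each base (A<->T, G<->C), then reverse, so the result
    # is the partner strand read in its own 5' -> 3' direction.
    return "".join(COMPLEMENT[base] for base in reversed(seq))
```

For the example sequence AAAGTCTGAC used in the text, the partner strand reads GTCAGACTTT, and applying the operation twice returns the original sequence.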
Document 3:::
In molecular biology, the term double helix refers to the structure formed by double-stranded molecules of nucleic acids such as DNA. The double helical structure of a nucleic acid complex arises as a consequence of its secondary structure, and is a fundamental component in determining its tertiary structure. The term entered popular culture with the publication in 1968 of The Double Helix: A Personal Account of the Discovery of the Structure of DNA by James Watson.
The DNA double helix biopolymer of nucleic acid is held together by nucleotides which base pair together. In B-DNA, the most common double helical structure found in nature, the double helix is right-handed with about 10–10.5 base pairs per turn. The double helix structure of DNA contains a major groove and minor groove. In B-DNA the major groove is wider than the minor groove. Given the difference in widths of the major groove and minor groove, many proteins which bind to B-DNA do so through the wider major groove.
History
The double-helix model of DNA structure was first published in the journal Nature by James Watson and Francis Crick in 1953, (X,Y,Z coordinates in 1954) based on the work of Rosalind Franklin and her student Raymond Gosling, who took the crucial X-ray diffraction image of DNA labeled as "Photo 51", and Maurice Wilkins, Alexander Stokes, and Herbert Wilson, and base-pairing chemical and biochemical information by Erwin Chargaff. Before this, Linus Pauling—who had already accurately characterised the conformation of protein secondary structure motifs—and his collaborator Robert Corey had posited, erroneously, that DNA would adopt a triple-stranded conformation.
The realization that the structure of DNA is that of a double-helix elucidated the mechanism of base pairing by which genetic information is stored and copied in living organisms and is widely considered one of the most important scientific discoveries of the 20th century. Crick, Wilkins, and Watson each received one-third
Document 4:::
A DNA machine is a molecular machine constructed from DNA. Research into DNA machines was pioneered in the late 1980s by Nadrian Seeman and co-workers from New York University. DNA is used because of the numerous biological tools already found in nature that can affect DNA, and the immense knowledge of how DNA works previously researched by biochemists.
DNA machines can be logically designed since DNA assembly of the double helix is based on strict rules of base pairing that allow portions of the strand to be predictably connected based on their sequence. This "selective stickiness" is a key advantage in the construction of DNA machines.
An example of a DNA machine was reported by Bernard Yurke and co-workers at Lucent Technologies in the year 2000, who constructed molecular tweezers out of DNA.
The DNA tweezers contain three strands: A, B and C. Strand A latches onto half of strand B and half of strand C, and so it joins them all together. Strand A acts as a hinge so that the two "arms" — AB and AC — can move. The structure floats with its arms open wide. They can be pulled shut by adding a fourth strand of DNA (D) "programmed" to stick to both of the dangling, unpaired sections of strands B and C. The closing of the tweezers was proven by tagging strand A at either end with light-emitting molecules that do not emit light when they are close together. To re-open the tweezers add a further strand (E) with the right sequence to pair up with strand D. Once paired up, they have no connection to the machine BAC, so float away. The DNA machine can be opened and closed repeatedly by cycling between strands D and E. These tweezers can be used for removing drugs from inside fullerenes as well as from a self assembled DNA tetrahedron. The state of the device can be determined by measuring the separation between donor and acceptor fluorophores using FRET.
DNA walkers are another type of DNA machine.
See also
DNA nanotechnology
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is composed of two strands of nucleotides in a double-helical structure?
A. molecule
B. bacteria
C. RNA
D. dna
Answer:
|
|
sciq-5042
|
multiple_choice
|
Stones, infections, and diabetes threaten the health and functioning of what paired organs?
|
[
"arteries",
"tissues",
"lungs",
"kidneys"
] |
D
|
Relevant Documents:
Document 0:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 1:::
The following outline is provided as an overview of and topical guide to medicine:
Medicine – science of healing. It encompasses a variety of health care practices evolved to maintain health by the prevention and treatment of illness.
Aims
Cure
Health
Homeostasis
Medical ethics
Prevention of illness
Palliation
Branches of medicine
Anesthesiology – practice of medicine dedicated to the relief of pain and total care of the surgical patient before, during and after surgery.
Cardiology – branch of medicine that deals with disorders of the heart and the blood vessels.
Critical care medicine – focuses on life support and the intensive care of the seriously ill.
Dentistry – branch of medicine that deals with treatment of diseases in the oral cavity
Dermatology – branch of medicine that deals with the skin, hair, and nails.
Emergency medicine – focuses on care provided in the emergency department
Endocrinology – branch of medicine that deals with disorders of the endocrine system.
Epidemiology – study of cause and prevalence of diseases and programs to contain them
First aid – assistance given to any person experiencing a sudden illness or injury, with care provided to preserve life, prevent the condition from worsening, and/or promote recovery. It includes initial intervention in a serious condition prior to professional medical help being available, such as performing CPR while awaiting an ambulance, as well as the complete treatment of minor conditions, such as applying a plaster to a cut.
Gastroenterology – branch of medicine that deals with the study and care of the digestive system.
General practice (often called family medicine) is a branch of medicine that specializes in primary care.
Geriatrics – branch of medicine that deals with the general health and well-being of the elderly.
Gynaecology – diagnosis and treatment of the female reproductive system
Hematology – branch of medicine that deals with the blood and the circulatory system.
Hepatology – branch o
Document 2:::
The Intersociety Council for Pathology Information (ICPI) is a nonprofit educational organization that provides information about academic paths and career options in medical and research pathology.
Directory of Pathology Training Programs in the United States and Canada
ICPI publishes the annual Directory of Pathology Training Programs in the United States and Canada and a companion online searchable directory.
Career Development Resources
The Pathology: A Career in Medicine brochure describes the role of a pathologist in medical, research, and academic settings.
Pathology: A Career in Medicine
Sponsors
ICPI is sponsored by five charter pathology societies and twelve Associate member societies in North America.
Awards and Grants
Travel Awards support participation of medical students, graduate students, residents, and fellows in the scientific meetings of its sponsoring societies.
Career Outreach Grants promote awareness of pathology to the public, media, students, and professional and educational organizations.
The Medical Student Interest Group Matching Grants (MSIGs) encourages medical students to consider pathology as a career by providing funds to pathology departments to support MSIGs.
Document 3:::
TIME-ITEM is an ontology of Topics that describes the content of undergraduate medical education. TIME is an acronym for "Topics for Indexing Medical Education"; ITEM is an acronym for "Index de thèmes pour l’éducation médicale." Version 1.0 of the taxonomy has been released and the web application that allows users to work with it is still under development. Its developers are seeking more collaborators to expand and validate the taxonomy and to guide future development of the web application.
History
The development of TIME-ITEM began at the University of Ottawa in 2006. It was initially developed to act as a content index for a curriculum map being constructed there. After its initial presentation at the 2006 conference of the Canadian Association for Medical Education, early collaborators included the University of British Columbia, McMaster University and Queen's University.
Features
The TIME-ITEM ontology is unique in that it is designed specifically for undergraduate medical education. As such, it includes fewer strictly biomedical entries than other common medical vocabularies (such as MeSH or SNOMED CT) but more entries relating to the medico-social concepts of communication, collaboration, professionalism, etc.
Topics within TIME-ITEM are arranged poly-hierarchically, meaning any Topic can have more than one parent. Relationships are established based on the logic that learning about a Topic contributes to the learning of all its parent Topics.
In addition to housing the ontology of Topics, the TIME-ITEM web application can house multiple Outcome frameworks. All Outcomes, whether private Outcomes entered by single institutions or publicly available medical education Outcomes (such as CanMeds 2005) are hierarchically linked to one or more Topics in the ontology. In this way, the contribution of each Topic to multiple Outcomes is made explicit.
The structure of the XML documents exported from TIME-ITEM (which contain the hierarchy of Outco
Document 4:::
LabTV is an online hub where people, labs, and organizations engaged in medical research come together to tell their stories. LabTV has filmed hundreds of medical researchers at dozens of institutions across the United States, including dozens at the National Institutes of Health.
Brief History
LabTV is a private company that was founded in 2013 by entrepreneur Jay Walker as a way to help get more students to consider a career in medical research. In 2014, Mr. Walker and LabTV’s executive producer David Hoffman received Disruptor Innovation Awards at the 2014 Tribeca Film Festival for LabTV’s work in getting university students around the country to create short personal interviews of National Institutes of Health-funded medical researchers.
Winners of the LabTV contest included student filmmakers from Columbia University, the University of Delaware, Cornell University, University of Hawaii, University of Pennsylvania, Tufts University, George Washington University, the University of Virginia, The University of Chicago, and the University of Georgia among others. LabTV continues to film medical researchers at dozens of universities and organizations, including the National Institutes of Health and Georgetown University
See also
National Institutes of Health
Medical research
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Stones, infections, and diabetes threaten the health and functioning of what paired organs?
A. arteries
B. tissues
C. lungs
D. kidneys
Answer:
|
|
sciq-8292
|
multiple_choice
|
In passive transport, small molecules or ions move across the cell membrane without requiring input of what?
|
[
"energy",
"pressure",
"water",
"heating"
] |
A
|
Relevant Documents:
Document 0:::
Passive transport is a type of membrane transport that does not require energy to move substances across cell membranes. Instead of using cellular energy, like active transport, passive transport relies on the second law of thermodynamics to drive the movement of substances across cell membranes. Fundamentally, substances follow Fick's first law, and move from an area of high concentration to an area of low concentration because this movement increases the entropy of the overall system. The rate of passive transport depends on the permeability of the cell membrane, which, in turn, depends on the organization and characteristics of the membrane lipids and proteins. The four main kinds of passive transport are simple diffusion, facilitated diffusion, filtration, and/or osmosis.
Passive transport follows Fick's first law.
Diffusion
Diffusion is the net movement of material from an area of high concentration to an area with lower concentration. The difference of concentration between the two areas is often termed as the concentration gradient, and diffusion will continue until this gradient has been eliminated. Since diffusion moves materials from an area of higher concentration to an area of lower concentration, it is described as moving solutes "down the concentration gradient" (compared with active transport, which often moves material from area of low concentration to area of higher concentration, and therefore referred to as moving the material "against the concentration gradient").
However, in many cases (e.g. passive drug transport) the driving force of passive transport can not be simplified to the concentration gradient. If there are different solutions at the two sides of the membrane with different equilibrium solubility of the drug, the difference in the degree of saturation is the driving force of passive membrane transport. It is also true for supersaturated solutions which are more and more important owing to the spreading of the application of amorph
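Fick's first law mentioned above, J = -D dC/dx, can be made concrete with a short sketch. The diffusion coefficient, concentrations, and membrane thickness below are placeholder values chosen purely for illustration, not figures from the text:

```python
# Fick's first law, J = -D * dC/dx, as a steady-state sketch.
# All numbers below are placeholder values chosen for illustration.

def diffusive_flux(D, c_high, c_low, thickness):
    """Diffusive flux (mol m^-2 s^-1) across a membrane.

    D         -- diffusion coefficient of the solute (m^2/s)
    c_high    -- concentration on the high side (mol/m^3)
    c_low     -- concentration on the low side (mol/m^3)
    thickness -- membrane thickness (m)
    """
    gradient = (c_low - c_high) / thickness  # dC/dx along the transport axis
    return -D * gradient                     # positive flux runs down the gradient

# Illustrative example: a small solute crossing a 10 nm bilayer.
J = diffusive_flux(D=2e-9, c_high=0.2, c_low=0.05, thickness=10e-9)
# J is positive (high side -> low side); doubling the gradient doubles the flux.
```

The sign convention is the point: flux is positive in the direction of decreasing concentration, and diffusion stops (J = 0) once the gradient is eliminated, as the paragraph above describes.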
Document 1:::
Transcellular transport involves the transportation of solutes by a cell through a cell. Transcellular transport can occur in three different ways: active transport, passive transport, and transcytosis.
Active Transport
Main article: Active transport
Active transport is the process of moving molecules from an area of low concentrations to an area of high concentration. There are two types of active transport, primary active transport and secondary active transport. Primary active transport uses adenosine triphosphate (ATP) to move specific molecules and solutes against its concentration gradient. Examples of molecules that follow this process are potassium K+, sodium Na+, and calcium Ca2+. A place in the human body where this occurs is in the intestines with the uptake of glucose. Secondary active transport is when one solute moves down the electrochemical gradient to produce enough energy to force the transport of another solute from low concentration to high concentration. An example of where this occurs is in the movement of glucose within the proximal convoluted tubule (PCT).
Passive Transport
Main article: Passive transport
Passive transport is the process of moving molecules from an area of high concentration to an area of low concentration without expending any energy. There are two types of passive transport, passive diffusion and facilitated diffusion. Passive diffusion is the unassisted movement of molecules from high concentration to low concentration across a permeable membrane. One example of passive diffusion is the gas exchange that occurs between the oxygen in the blood and the carbon dioxide present in the lungs. Facilitated diffusion is the movement of polar molecules down the concentration gradient with the assistance of membrane proteins. Since the molecules associated with facilitated diffusion are polar, they are repelled by the hydrophobic sections of the permeable membrane; therefore, they need to be assisted by membrane proteins. Both t
Document 2:::
In cellular biology, membrane transport refers to the collection of mechanisms that regulate the passage of solutes such as ions and small molecules through biological membranes, which are lipid bilayers that contain proteins embedded in them. The regulation of passage through the membrane is due to selective membrane permeability – a characteristic of biological membranes which allows them to separate substances of distinct chemical nature. In other words, they can be permeable to certain substances but not to others.
The movements of most solutes through the membrane are mediated by membrane transport proteins which are specialized to varying degrees in the transport of specific molecules. As the diversity and physiology of the distinct cells is highly related to their capacities to attract different external elements, it is postulated that there is a group of specific transport proteins for each cell type and for every specific physiological stage. This differential expression is regulated through the differential transcription of the genes coding for these proteins and its translation, for instance, through genetic-molecular mechanisms, but also at the cell biology level: the production of these proteins can be activated by cellular signaling pathways, at the biochemical level, or even by being situated in cytoplasmic vesicles. The cell membrane regulates the transport of materials entering and exiting the cell.
Background
Thermodynamically the flow of substances from one compartment to another can occur in the direction of a concentration or electrochemical gradient or against it. If the exchange of substances occurs in the direction of the gradient, that is, in the direction of decreasing potential, there is no requirement for an input of energy from outside the system; if, however, the transport is against the gradient, it will require the input of energy, metabolic energy in this case.
For example, a classic chemical mechanism for separation that does
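The thermodynamic point above can be quantified for a charged solute: the "downhill" direction is set by the electrochemical gradient, and the membrane potential at which an ion is at equilibrium is given by the Nernst equation. A minimal sketch follows; the K+ concentrations are illustrative textbook-style values and an assumption of this example, not data from the text:

```python
import math

# Nernst equilibrium potential for an ion across a membrane:
#   E = (R*T / (z*F)) * ln(C_out / C_in)
# Transport "down" the electrochemical gradient needs no metabolic input;
# moving one mole the other way costs at least |z*F*E| joules.
# Concentrations are illustrative textbook-style values, not measurements.

R = 8.314     # gas constant, J/(mol*K)
F = 96485.0   # Faraday constant, C/mol

def nernst_potential(z, c_out, c_in, temperature=310.0):
    """Equilibrium potential in volts, defaulting to body temperature (310 K)."""
    return (R * temperature) / (z * F) * math.log(c_out / c_in)

# K+ (z = +1): roughly 5 mM outside and 140 mM inside an animal cell.
E_K = nernst_potential(z=1, c_out=5.0, c_in=140.0)  # about -0.089 V (-89 mV)
```

When the concentrations on both sides are equal the potential is zero and no energy input is required, which is exactly the "in the direction of decreasing potential" case in the paragraph above.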
Document 3:::
In cellular biology, active transport is the movement of molecules or ions across a cell membrane from a region of lower concentration to a region of higher concentration—against the concentration gradient. Active transport requires cellular energy to achieve this movement. There are two types of active transport: primary active transport that uses adenosine triphosphate (ATP), and secondary active transport that uses an electrochemical gradient. This process is in contrast to passive transport, which allows molecules or ions to move down their concentration gradient, from an area of high concentration to an area of low concentration, without energy.
Active transport is essential for various physiological processes, such as nutrient uptake, hormone secretion, and nerve impulse transmission. For example, the sodium-potassium pump uses ATP to pump sodium ions out of the cell and potassium ions into the cell, maintaining a concentration gradient essential for cellular function. Active transport is highly selective and regulated, with different transporters specific to different molecules or ions. Dysregulation of active transport can lead to various disorders, including cystic fibrosis, caused by a malfunctioning chloride channel, and diabetes, resulting from defects in glucose transport into cells.
Active cellular transportation (ACT)
Unlike passive transport, which uses the kinetic energy and natural entropy of molecules moving down a gradient, active transport uses cellular energy to move them against a gradient, polar repulsion, or other resistance. Active transport is usually associated with accumulating high concentrations of molecules that the cell needs, such as ions, glucose and amino acids. Examples of active transport include the uptake of glucose in the intestines in humans and the uptake of mineral ions into root hair cells of plants.
History
In 1848, the German physiologist Emil du Bois-Reymond suggested the possibility of active transport of subst
Document 4:::
Paracellular transport refers to the transfer of substances across an epithelium by passing through the intercellular space between the cells. It is in contrast to transcellular transport, where the substances travel through the cell, passing through both the apical membrane and basolateral membrane.
The distinction has particular significance in renal physiology and intestinal physiology. Transcellular transport often involves energy expenditure whereas paracellular transport is unmediated and passive down a concentration gradient, or by osmosis (for water) and solvent drag for solutes. Paracellular transport also has the benefit that absorption rate is matched to load because it has no transporters that can be saturated.
In most mammals, intestinal absorption of nutrients is thought to be dominated by transcellular transport, e.g., glucose is primarily absorbed via the SGLT1 transporter and other glucose transporters. Paracellular absorption therefore plays only a minor role in glucose absorption, although there is evidence that paracellular pathways become more available when nutrients are present in the intestinal lumen. In contrast, small flying vertebrates (small birds and bats) rely on the paracellular pathway for the majority of glucose absorption in the intestine. This has been hypothesized to compensate for an evolutionary pressure to reduce mass in flying animals, which resulted in a reduction in intestine size and faster transit time of food through the gut.
Capillaries of the blood–brain barrier have only transcellular transport, in contrast with normal capillaries which have both transcellular and paracellular transport.
The paracellular pathway of transport is also important for the absorption of drugs in the gastrointestinal tract. The paracellular pathway allows the permeation of hydrophilic molecules that are not able to permeate through the lipid membrane by the transcellular pathway of absorption. This is particularly important for hydrophi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In passive transport, small molecules or ions move across the cell membrane without requiring input of what?
A. energy
B. pressure
C. water
D. heating
Answer:
|
|
scienceQA-2469
|
multiple_choice
|
What do these two changes have in common?
peeling a banana
tying a shoelace
|
[
"Both are only physical changes.",
"Both are caused by cooling.",
"Both are caused by heating.",
"Both are chemical changes."
] |
A
|
Step 1: Think about each change.
Peeling a banana is a physical change. The peel is not covering the rest of the fruit anymore. But both the peel and the banana are still made of the same type of matter as before.
Tying a shoelace is a physical change. The shoelace gets a different shape. But it is still made of the same type of matter.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are physical changes. No new matter is created.
Both are chemical changes.
Both changes are physical changes. They are not chemical changes.
Both are caused by heating.
Neither change is caused by heating.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
A. increases
B. decreases
C. stays the same
D. impossible to tell / need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 2:::
Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory.
In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results.
Purpose
Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible.
Equating in item response theory
In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri
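The Dick-and-Jane scenario can be sketched with the simplest classical approach, mean-sigma linear equating, which maps form-A scores onto the form-B scale by matching the two score distributions' means and standard deviations. This is a classical-test-theory sketch, not the IRT linking described above, and all score data here is made up:

```python
import statistics

# Mean-sigma linear equating (a classical-test-theory sketch, not the full
# IRT linking described in the text): map a score x on form X onto the
# form-Y scale so the two score distributions share mean and spread:
#   y(x) = (sigma_Y / sigma_X) * (x - mu_X) + mu_Y

def linear_equate(form_x_scores, form_y_scores):
    mu_x, sd_x = statistics.mean(form_x_scores), statistics.stdev(form_x_scores)
    mu_y, sd_y = statistics.mean(form_y_scores), statistics.stdev(form_y_scores)
    return lambda x: (sd_y / sd_x) * (x - mu_x) + mu_y

# Made-up score data: form A is harder, so raw scores run about 10 points lower.
form_a_scores = [52, 58, 60, 63, 67]   # Dick sat form A and scored 60
form_b_scores = [62, 68, 70, 73, 77]   # Jane sat form B and scored 70
to_b_scale = linear_equate(form_a_scores, form_b_scores)
dicks_equated_score = to_b_scale(60)   # 70.0 on the form-B scale: a tie with Jane
```

With these invented distributions, Dick's 60 on the harder form equates to Jane's 70, illustrating why raw scores from non-parallel forms cannot be compared directly.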
Document 3:::
Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria.
Introduction
Traditional exam script marking began in Cambridge in 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.)
Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental.
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel
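The "scaled distribution of student work without reference to criteria" is typically obtained by fitting a Bradley–Terry model to the pairwise judgements. A minimal sketch follows; the adaptive pair-selection step of ACJ is omitted, and the judgement data is invented:

```python
from collections import Counter

# Minimal Bradley-Terry fit for comparative judgement (illustrative sketch;
# real ACJ systems also pick the next pair adaptively, which is omitted here).
# Each script i gets a strength p_i, and P(i beats j) = p_i / (p_i + p_j).
# The loop is the classic minorization-maximization (MM) update.

def bradley_terry(wins, items, iterations=200):
    """wins: list of (winner, loser) pairs from paired judgements."""
    p = {i: 1.0 for i in items}
    win_count = Counter(winner for winner, _ in wins)
    for _ in range(iterations):
        new_p = {}
        for i in items:
            # Sum 1/(p_i + p_j) over every comparison that involved item i.
            denom = sum(1.0 / (p[i] + p[a if b == i else b])
                        for a, b in wins if i in (a, b))
            new_p[i] = win_count[i] / denom if denom else p[i]
        total = sum(new_p.values())
        p = {i: v / total for i, v in new_p.items()}  # fix the overall scale
    return p

# Invented judgements on three scripts: A beat B 3-1, B beat C 3-1, A beat C 3-1.
judgements = ([("A", "B")] * 3 + [("B", "A")]
              + [("B", "C")] * 3 + [("C", "B")]
              + [("A", "C")] * 3 + [("C", "A")])
strengths = bradley_terry(judgements, items=["A", "B", "C"])
# The recovered ranking is A > B > C.
```

No marking criteria appear anywhere: the scale emerges purely from which script won each comparison, which is the core claim of the technique described above.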
Document 4:::
Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary educ
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
peeling a banana
tying a shoelace
A. Both are only physical changes.
B. Both are caused by cooling.
C. Both are caused by heating.
D. Both are chemical changes.
Answer:
|
sciq-5720
|
multiple_choice
|
What is the name for spherical groups of old stars held tightly together by gravity?
|
[
"dark matter",
"nebula",
"globular clusters",
"elliptical clusters"
] |
C
|
Relevant Documents:
Document 0:::
This is a list of astronomical objects named after people. While topological features on Solar System bodies — such as craters, mountains, and valleys — are often named after famous or historical individuals, many stars and deep-sky objects are named after the individual(s) who discovered or otherwise studied it.
This list does not include astronomical objects named after mythological or fictional characters.
Clusters and groups
Stars
Al Sufi's cluster, also called Brocchi's Cluster, is a coathanger-shaped asterism located in Vulpecula named after Abd al-Rahman al-Sufi and Dalmero Francis Brocchi.
Blanco 1 is an open cluster in Sculptor, named after Victor Manuel Blanco.
Caroline's Cluster (NGC 2360) is an open cluster in Canis Major, named after Caroline Herschel.
Caroline's Rose (NGC 7789) is an open cluster in Cassiopeia, named after Caroline Herschel.
Gould Belt is a ring of stars in the Orion Arm of the Milky Way, named after Benjamin Apthorp Gould
Grindlay 1 is a globular star cluster in Scorpius, named after Jonathan E. Grindlay.
Kemble's Cascade and Kemble's Kite are two asterisms in Camelopardalis, named after Lucian Kemble.
Liller 1 is a globular star cluster in Scorpius, named after William Liller.
Picot 1, also called Napoleon's Hat, is an asterism in Boötes, named after Fulbert Picot.
Ptolemy's Cluster (Messier 7) is an open star cluster in Scorpius, named after Ptolemy.
Webb's wreath is a telescopic asterism in Hercules, named after Thomas William Webb.
Galaxies
Burbidge Chain is a group of galaxies located in Cetus, named after Margaret Burbidge.
Clowes–Campusano LQG is a large quasar group in Leo, named after Roger Clowes and Luis Campusano.
Copeland Septet is group of seven galaxies in Leo, named after Ralph Copeland.
Keenan's System (Arp 104) is a pair of connected galaxies in Ursa Major, named after Philip Childs Keenan.
Markarian's Chain is a chain of galaxies in the Virgo Cluster, named after Benjamin Markarian
Robert's Quartet is a group
Document 1:::
Astrophysics is a science that employs the methods and principles of physics and chemistry in the study of astronomical objects and phenomena. As one of the founders of the discipline, James Keeler, said, Astrophysics "seeks to ascertain the nature of the heavenly bodies, rather than their positions or motions in space–what they are, rather than where they are." Among the subjects studied are the Sun (solar physics), other stars, galaxies, extrasolar planets, the interstellar medium and the cosmic microwave background. Emissions from these objects are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists apply concepts and methods from many disciplines of physics, including classical mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.
In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include their attempts to determine the properties of dark matter, dark energy, black holes, and other celestial bodies; and the origin and ultimate fate of the universe. Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity, special relativity, quantum and physical cosmology, including string cosmology and astroparticle physics.
History
Astronomy is an ancient science, long separated from the study of terrestrial physics. In the Aristotelian worldview, bodies in the sky appeared to be unchanging spheres whose only motion was uniform motion in a circle, while the earthl
Document 2:::
Galactic clusters are gravitationally bound large-scale structures of multiple galaxies. The evolution of these aggregates is determined by the time and manner of their formation and by how their structures and constituents have changed with time. Gamow (1952) and Weizsäcker (1951) showed that the observed rotations of galaxies are important for cosmology. They postulated that the rotation of galaxies might be a clue to the physical conditions under which these systems formed. Thus, understanding the distribution of spatial orientations of the spin vectors of galaxies is critical to understanding the origin of the angular momenta of galaxies.
There are mainly three scenarios for the origin of galaxy clusters and superclusters. These models are based on different assumptions of the primordial conditions, so they predict different spin vector alignments of the galaxies. The three hypotheses are the pancake model, the hierarchy model, and the primordial vorticity theory. The three are mutually exclusive as they produce contradictory predictions. However, the predictions made by all three theories are based on the precepts of cosmology. Thus, these models can be tested using a database with appropriate methods of analysis.
Galaxies
A galaxy is a large gravitational aggregation of stars, dust, gas, and an unknown component termed dark matter. The Milky Way Galaxy is only one of the billions of galaxies in the known universe. Galaxies are classified into spirals, ellipticals, irregular, and peculiar. Sizes can range from only a few thousand stars (dwarf irregulars) to 10^13 stars in giant ellipticals. Elliptical galaxies are spherical or elliptical in appearance. Spiral galaxies range from S0, the lenticular galaxies, to Sb, which have a bar across the nucleus, to Sc galaxies which have strong spiral arms. In total count, ellipticals amount to 13%, S0 to 22%, Sa, b, c galaxies to 61%, irregulars to 3.5%, and peculiars to 0.9%.
At the center of most galaxies is a
Document 3:::
Types
Quasar
Supermassive black hole
Hypercompact stellar system (hypothetical object organized around a supermassive black hole)
Intermediate-mass black holes and candidates
Cigar Galaxy (Messier 82, NGC 3034)
GCIRS 13E
HLX-1
M82 X-1
Messier 15 (NGC 7078)
Messier 110 (NGC 205)
Sculptor Galaxy (NGC 253)
Triangulum Galaxy (Messier 33, NGC 598)
Document 4:::
The Morphs collaboration was a coordinated study to determine the morphologies of galaxies in distant clusters and to investigate the evolution of galaxies as a function of environment and epoch. Eleven clusters were examined and a detailed ground-based and space-based study was carried out.
The project was begun in 1997 based upon the earlier observations by two groups using data from images derived from the pre-refurbished Hubble Space Telescope. It was a collaboration of Alan Dressler and Augustus Oemler, Jr., at the Observatories of the Carnegie Institution of Washington, Warrick J. Couch at the University of New South Wales, Richard Ellis at Caltech, Bianca Poggianti at the University of Padua, Amy Barger at the University of Hawaii's Institute for Astronomy, Harvey Butcher at ASTRON, and Ray M. Sharples and Ian Smail at Durham University. Results were published through 2000.
The collaboration sought answers to the differences in the origins of the various galaxy types — elliptical, lenticular, and spiral. The studies found that elliptical galaxies were the oldest and formed from the violent merger of other galaxies about two to three billion years after the Big Bang. Star formation in elliptical galaxies ceased about that time. On the other hand, new stars are still forming in the spiral arms of spiral galaxies. Lenticular galaxies (SO) are intermediate between the first two. They contain structures similar to spiral arms, but devoid of the gas and new stars of the spiral galaxies. Lenticular galaxies are the prevalent form in rich galaxy clusters, which suggests that spirals may be transformed into lenticular galaxies as time progresses. The exact process may be related to high galactic density, or to the total mass in a rich cluster's central core. The Morphs collaboration found that one of the principal mechanisms of this transformation involves the interaction among spiral galaxies, as they fall toward the core of the cluster.
The Inamori Magellan Areal Camer
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the name for spherical groups of old stars held tightly together by gravity?
A. dark matter
B. nebula
C. globular clusters
D. elliptical clusters
Answer:
|
|
sciq-9284
|
multiple_choice
|
What law states that matter cannot be created or destroyed?
|
[
"law of inertia",
"conservation of energy",
"construct of energy",
"Murphy's Law"
] |
B
|
Relevant Documents:
Document 0:::
Scientific laws or laws of science are statements, based on repeated experiments or observations, that describe or predict a range of natural phenomena. The term law has diverse usage in many cases (approximate, accurate, broad, or narrow) across all fields of natural science (physics, chemistry, astronomy, geoscience, biology). Laws are developed from data and can be further developed through mathematics; in all cases they are directly or indirectly based on empirical evidence. It is generally understood that they implicitly reflect, though they do not explicitly assert, causal relationships fundamental to reality, and are discovered rather than invented.
Scientific laws summarize the results of experiments or observations, usually within a certain range of application. In general, the accuracy of a law does not change when a new theory of the relevant phenomenon is worked out, but rather the scope of the law's application, since the mathematics or statement representing the law does not change. As with other kinds of scientific knowledge, scientific laws do not express absolute certainty, as mathematical theorems or identities do. A scientific law may be contradicted, restricted, or extended by future observations.
A law can often be formulated as one or several statements or equations, so that it can predict the outcome of an experiment. Laws differ from hypotheses and postulates, which are proposed during the scientific process before and during validation by experiment and observation. Hypotheses and postulates are not laws, since they have not been verified to the same degree, although they may lead to the formulation of laws. Laws are narrower in scope than scientific theories, which may entail one or several laws. Science distinguishes a law or theory from facts. Calling a law a fact is ambiguous, an overstatement, or an equivocation. The nature of scientific laws has been much discussed in philosophy, but in essence scientific laws are simply empirical
Document 1:::
The principle of mutability is the notion that any physical property which appears to follow a conservation law may undergo some physical process that violates its conservation. John Archibald Wheeler offered this speculative principle after Stephen Hawking predicted the evaporation of black holes which violates baryon number conservation.
See also
Philosophy of physics
Document 2:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
In physics and chemistry, the law of conservation of energy states that the total energy of an isolated system remains constant; it is said to be conserved over time. Energy can neither be created nor destroyed; rather, it can only be transformed or transferred from one form to another. For instance, chemical energy is converted to kinetic energy when a stick of dynamite explodes. If one adds up all forms of energy that were released in the explosion, such as the kinetic energy and potential energy of the pieces, as well as heat and sound, one will get the exact decrease of chemical energy in the combustion of the dynamite.
Classically, conservation of energy was distinct from conservation of mass. However, special relativity shows that mass is related to energy and vice versa by , the equation representing mass–energy equivalence, and science now takes the view that mass-energy as a whole is conserved. Theoretically, this implies that any object with mass can itself be converted to pure energy, and vice versa. However, this is believed to be possible only under the most extreme of physical conditions, such as likely existed in the universe very shortly after the Big Bang or when black holes emit Hawking radiation.
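The mass–energy relation mentioned above can be made concrete with a small numerical sketch (not part of the source passage; the function name is illustrative):

```python
# Illustrative sketch: mass-energy equivalence E = m * c^2.
c = 299_792_458  # speed of light in m/s (exact by SI definition)

def rest_energy(mass_kg: float) -> float:
    """Energy equivalent of a rest mass, in joules."""
    return mass_kg * c ** 2

# One kilogram of mass corresponds to roughly 9e16 joules.
print(f"{rest_energy(1.0):.3e} J")
```

This illustrates why conversion of mass to energy is associated only with extreme conditions: even a tiny mass corresponds to an enormous amount of energy.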
Given the stationary-action principle, conservation of energy can be rigorously proven by Noether's theorem as a consequence of continuous time translation symmetry; that is, from the fact that the laws of physics do not change over time.
A consequence of the law of conservation of energy is that a perpetual motion machine of the first kind cannot exist; that is to say, no system without an external energy supply can deliver an unlimited amount of energy to its surroundings. Depending on the definition of energy, conservation of energy can arguably be violated by general relativity on the cosmological scale.
History
Ancient philosophers as far back as Thales of Miletus 550 BCE had inklings of the conservation of some underlying substance of which ev
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What law states that matter cannot be created or destroyed?
A. law of inertia
B. conservation of energy
C. construct of energy
D. Murphy's Law
Answer:
|
|
sciq-10569
|
multiple_choice
|
What celestial structure does not give off its own light?
|
[
"the moon",
"the stars",
"meteors",
"asteroids"
] |
A
|
Relevant Documents:
Document 0:::
Astrophysics is a science that employs the methods and principles of physics and chemistry in the study of astronomical objects and phenomena. As one of the founders of the discipline, James Keeler, said, Astrophysics "seeks to ascertain the nature of the heavenly bodies, rather than their positions or motions in space–what they are, rather than where they are." Among the subjects studied are the Sun (solar physics), other stars, galaxies, extrasolar planets, the interstellar medium and the cosmic microwave background. Emissions from these objects are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists apply concepts and methods from many disciplines of physics, including classical mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.
In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include their attempts to determine the properties of dark matter, dark energy, black holes, and other celestial bodies; and the origin and ultimate fate of the universe. Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity, special relativity, quantum and physical cosmology, including string cosmology and astroparticle physics.
History
Astronomy is an ancient science, long separated from the study of terrestrial physics. In the Aristotelian worldview, bodies in the sky appeared to be unchanging spheres whose only motion was uniform motion in a circle, while the earthl
Document 1:::
Astronomy education or astronomy education research (AER) refers both to the methods currently used to teach the science of astronomy and to an area of pedagogical research that seeks to improve those methods. Specifically, AER includes systematic techniques honed in science and physics education to understand what and how students learn about astronomy and determine how teachers can create more effective learning environments.
Education is important to astronomy as it impacts both the recruitment of future astronomers and the appreciation of astronomy by citizens and politicians who support astronomical research. Astronomy has been taught throughout much of recorded human history, and has practical application in timekeeping and navigation. Teaching astronomy contributes to an understanding of physics and the origin of the world around us, a shared cultural background, and a sense of wonder and exploration. It includes education of the general public through planetariums, books, and instructive presentations, plus programs and tools for amateur astronomy, and University-level degree programs for professional astronomers. Astronomy organizations provide educational functions and societies in about 100 nation states around the world.
In schools, particularly at the collegiate level, astronomy is aligned with physics and the two are often combined to form a Department of Physics and Astronomy. Some parts of astronomy education overlap with physics education, however, astronomy education has its own arenas, practitioners, journals, and research. This can be demonstrated in the identified 20-year lag between the emergence of AER and physics education research. The body of research in this field are available through electronic sources such as the Searchable Annotated Bibliography of Education Research (SABER) and the American Astronomical Society's database of the contents of their journal "Astronomy Education Review" (see link below).
The National Aeronautics and
Document 2:::
BL Herculis variables are a subclass of type II Cepheids with low luminosity and mass, that have a period of less than eight days. They are pulsating stars with light curves that frequently show a bump on the descending side for stars of the shortest periods and on the ascending side for longer period stars. Like other type II Cepheids, they are very old population II stars found in the galaxy’s halo and globular clusters. Also, compared to other type II Cepheids, BL Herculis variables have shorter periods and are fainter than W Virginis variables. Pulsating stars vary in spectral class as they vary in brightness and BL Herculis variables are normally class A at their brightest and class F when most dim. When plotted on the Hertzsprung–Russell diagram they fall in-between W Virginis and RR Lyrae variables.
The prototype star, BL Herculis, varies between magnitude 9.7 and 10.6 in a period of 1.3 days. The brightest BL Herculis variables, with their maximum magnitudes, are:
VY Pyxidis, 7.7
V553 Centauri, 8.2
SW Tauri, 9.3
RT Trianguli Australis, 9.4
V351 Cephei, 9.5
BL Herculis, 9.7
BD Cassiopeiae, 10.8
UY Eridani, 10.9
The BL Herculis stars show a wide variety of light curves, temperatures, and luminosity, and three subdivisions of the class have been defined, with the acronym AHB referring to above horizontal branch:
XX Virginis stars (AHB1), with very fast rises to maximum and low metallicity
CW stars (AHB2), W Virginis variables, longer periods, the bump on the ascending leg
BL Herculis stars (AHB3), shorter periods, the bump on the descending leg
Document 3:::
This article is a list of notable unsolved problems in astronomy. Some of these problems are theoretical, meaning that existing theories may be incapable of explaining certain observed phenomena or experimental results. Others are experimental, meaning that experiments necessary to test proposed theory or investigate a phenomenon in greater detail have not yet been performed. Some pertain to unique events or occurrences that have not repeated themselves and whose causes remain unclear.
Planetary astronomy
Our solar system
Orbiting bodies and rotation:
Are there any non-dwarf planets beyond Neptune?
Why do extreme trans-Neptunian objects have elongated orbits?
Rotation rate of Saturn:
Why does the magnetosphere of Saturn rotate at a rate close to that at which the planet's clouds rotate?
What is the rotation rate of Saturn's deep interior?
Satellite geomorphology:
What is the origin of the chain of high mountains that closely follows the equator of Saturn's moon, Iapetus?
Are the mountains the remnant of hot and fast-rotating young Iapetus?
Are the mountains the result of material (either from the rings of Saturn or its own ring) that over time collected upon the surface?
Extra-solar
How common are Solar System-like planetary systems? Some observed planetary systems contain Super-Earths and Hot Jupiters that orbit very close to their stars. Systems with Jupiter-like planets in Jupiter-like orbits appear to be rare. There are several possibilities why Jupiter-like orbits are rare, including that data is lacking or the grand tack hypothesis.
Stellar astronomy and astrophysics
Solar cycle:
How does the Sun generate its periodically reversing large-scale magnetic field?
How do other Sol-like stars generate their magnetic fields, and what are the similarities and differences between stellar activity cycles and that of the Sun?
What caused the Maunder Minimum and other grand minima, and how does the solar cycle recover from a minimum state?
Coronal heat
Document 4:::
Starspots are stellar phenomena, so-named by analogy with sunspots.
Spots as small as sunspots have not been detected on other stars, as they would cause undetectably small fluctuations in brightness. The commonly observed starspots are in general much larger than those on the Sun: up to about 30% of the stellar surface may be covered, corresponding to starspots 100 times larger than those on the Sun.
Detection and measurements
To detect and measure the extent of starspots one uses several types of methods.
For rapidly rotating stars – Doppler imaging and Zeeman-Doppler imaging. With the Zeeman-Doppler imaging technique the direction of the magnetic field on stars can be determined since spectral lines are split according to the Zeeman effect, revealing the direction and magnitude of the field.
For slowly rotating stars – Line Depth Ratio (LDR). Here one measures two different spectral lines, one sensitive to temperature and one which is not. Since starspots have a lower temperature than their surroundings the temperature-sensitive line changes its depth. From the difference between these two lines the temperature and size of the spot can be calculated, with a temperature accuracy of 10K.
For eclipsing binary stars – Eclipse mapping produces images and maps of spots on both stars.
For giant binary stars - Very-long-baseline interferometry
For stars with transiting extrasolar planets – Light curve variations.
Temperature
Observed starspots have a temperature which is in general 500–2000 kelvins cooler than the stellar photosphere. This temperature difference could give rise to a brightness variation up to 0.6 magnitudes between the spot and the surrounding surface. There also seems to be a relation between the spot temperature and the temperature for the stellar photosphere, indicating that starspots behave similarly for different types of stars (observed in G–K dwarfs).
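The quoted brightness variation of up to 0.6 magnitudes can be converted to a flux ratio with Pogson's relation; the following sketch is an illustration under that standard formula, not something from the source passage:

```python
# Illustrative sketch: convert a magnitude difference to a brightness (flux) ratio
# using Pogson's relation, ratio = 10**(0.4 * delta_m).

def flux_ratio(delta_mag: float) -> float:
    """Brightness ratio corresponding to a magnitude difference."""
    return 10 ** (0.4 * delta_mag)

# A 0.6 mag spot-induced variation corresponds to a ~74% change in flux.
print(round(flux_ratio(0.6), 3))
```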
Lifetimes
The lifetime for a starspot depends on its size.
For small spots the lifetim
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What celestial structure does not give off its own light?
A. the moon
B. the stars
C. meteors
D. asteroids
Answer:
|
|
sciq-9554
|
multiple_choice
|
Tuna have been shown to contain high levels of what metal?
|
[
"iron",
"cadmium",
"mercury",
"titanium"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 2:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS administered this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
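The quoted percentiles are roughly consistent with the stated mean and standard deviation under a normal approximation. The following sketch (an assumption for illustration; real GRE scaling is not exactly normal) checks this with the standard library:

```python
from statistics import NormalDist

# Normal approximation to the score distribution quoted in the text:
# mean 526, standard deviation 95.
scores = NormalDist(mu=526, sigma=95)

# A 760 lands near the 99th percentile and a 320 near the 1st-2nd percentile,
# close to the reported maximum and minimum percentiles.
for score in (760, 320):
    print(score, round(scores.cdf(score) * 100, 1))
```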
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 3:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 4:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Tuna have been shown to contain high levels of what metal?
A. iron
B. cadmium
C. mercury
D. titanium
Answer:
|
|
sciq-2098
|
multiple_choice
|
How are the major families of organic compounds characterized?
|
[
"their visual groups",
"their functional groups",
"their thermal groups",
"Their optic groups"
] |
B
|
Relevant Documents:
Document 0:::
In chemical nomenclature, the IUPAC nomenclature of organic chemistry is a method of naming organic chemical compounds as recommended by the International Union of Pure and Applied Chemistry (IUPAC). It is published in the Nomenclature of Organic Chemistry (informally called the Blue Book). Ideally, every possible organic compound should have a name from which an unambiguous structural formula can be created. There is also an IUPAC nomenclature of inorganic chemistry.
To avoid long and tedious names in normal communication, the official IUPAC naming recommendations are not always followed in practice, except when it is necessary to give an unambiguous and absolute definition to a compound. IUPAC names can sometimes be simpler than older names, as with ethanol, instead of ethyl alcohol. For relatively simple molecules they can be more easily understood than non-systematic names, which must be learnt or looked over. However, the common or trivial name is often substantially shorter and clearer, and so preferred. These non-systematic names are often derived from an original source of the compound. Also, very long names may be less clear than structural formulas.
Basic principles
In chemistry, a number of prefixes, suffixes and infixes are used to describe the type and position of the functional groups in the compound.
The steps for naming an organic compound are:
Identification of the parent hydride parent hydrocarbon chain. This chain must obey the following rules, in order of precedence:
It should have the maximum number of substituents of the suffix functional group. By suffix, it is meant that the parent functional group should have a suffix, unlike halogen substituents. If more than one functional group is present, the one with highest group precedence should be used.
It should have the maximum number of multiple bonds.
It should have the maximum length.
It should have the maximum number of substituents or branches cited as prefixes
It should have the ma
Document 1:::
This is an index of lists of molecules (i.e. by year, number of atoms, etc.). Millions of molecules have existed in the universe since before the formation of Earth. Three of them, carbon dioxide, water and oxygen were necessary for the growth of life. Although humanity had always been surrounded by these substances, it has not always known what they were composed of.
By century
The following is an index of list of molecules organized by time of discovery of their molecular formula or their specific molecule in case of isomers:
List of compounds
By number of carbon atoms in the molecule
List of compounds with carbon number 1
List of compounds with carbon number 2
List of compounds with carbon number 3
List of compounds with carbon number 4
List of compounds with carbon number 5
List of compounds with carbon number 6
List of compounds with carbon number 7
List of compounds with carbon number 8
List of compounds with carbon number 9
List of compounds with carbon number 10
List of compounds with carbon number 11
List of compounds with carbon number 12
List of compounds with carbon number 13
List of compounds with carbon number 14
List of compounds with carbon number 15
List of compounds with carbon number 16
List of compounds with carbon number 17
List of compounds with carbon number 18
List of compounds with carbon number 19
List of compounds with carbon number 20
List of compounds with carbon number 21
List of compounds with carbon number 22
List of compounds with carbon number 23
List of compounds with carbon number 24
List of compounds with carbon numbers 25-29
List of compounds with carbon numbers 30-39
List of compounds with carbon numbers 40-49
List of compounds with carbon numbers 50+
Other lists
List of interstellar and circumstellar molecules
List of gases
List of molecules with unusual names
See also
Molecule
Empirical formula
Chemical formula
Chemical structure
Chemical compound
Chemical bond
Coordination complex
L
Document 2:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS administered this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 3:::
The SYBYL line notation or SLN is a specification for unambiguously describing the structure of chemical molecules using short ASCII strings. SLN differs from SMILES in several significant ways. SLN can specify molecules, molecular queries, and reactions in a single line notation whereas SMILES handles these through language extensions. SLN has support for relative stereochemistry, it can distinguish mixtures of enantiomers from pure molecules with pure but unresolved stereochemistry. In SMILES aromaticity is considered to be a property of both atoms and bonds whereas in SLN it is a property of bonds.
Description
Like SMILES, SLN is a linear language that describes molecules. This provides a lot of similarity with SMILES despite SLN's many differences from SMILES, and as a result this description will heavily compare SLN to SMILES and its extensions.
Attributes
Attributes — bracketed strings with additional data such as [key1=value1, key2...] — are a core feature of SLN. Attributes can be applied to atoms and bonds. Attributes not defined officially are available to users for private extensions.
When searching for molecules, comparison operators such as fcharge>-0.125 can be used in place of the usual equal sign. A ! preceding a key/value group inverts the result of the comparison.
Entire molecules or reactions can also have attributes; in that case the square brackets are replaced by a pair of <> signs.
Atoms
Anything that starts with an uppercase letter identifies an atom in SLN. Hydrogens are not automatically added, but the single bonds with hydrogen can be abbreviated for organic compounds, resulting in CH4 instead of C(H)(H)(H)H for methane. The author argues that explicit hydrogens allow for more robust parsing.
Attributes defined for atoms include I= for isotope mass number, charge= for formal charge, fcharge for partial charge, s= for stereochemistry, and spin= for radicals (s, d, t respectively for singlet, doublet, triplet). A formal charge of charge=2 can be abbrevi
Document 4:::
Phytochemistry is the study of phytochemicals, which are chemicals derived from plants. Phytochemists strive to describe the structures of the large number of secondary metabolites found in plants, the functions of these compounds in human and plant biology, and the biosynthesis of these compounds. Plants synthesize phytochemicals for many reasons, including to protect themselves against insect attacks and plant diseases. The compounds found in plants are of many kinds, but most can be grouped into four major biosynthetic classes: alkaloids, phenylpropanoids, polyketides, and terpenoids.
Phytochemistry can be considered a subfield of botany or chemistry. Activities can be led in botanical gardens or in the wild with the aid of ethnobotany. Phytochemical studies directed toward human (i.e. drug discovery) use may fall under the discipline of pharmacognosy, whereas phytochemical studies focused on the ecological functions and evolution of phytochemicals likely fall under the discipline of chemical ecology. Phytochemistry also has relevance to the field of plant physiology.
Techniques
Techniques commonly used in the field of phytochemistry are extraction, isolation, and structural elucidation (MS,1D and 2D NMR) of natural products, as well as various chromatography techniques (MPLC, HPLC, and LC-MS).
Phytochemicals
Many plants produce chemical compounds for defence against herbivores. The major classes of pharmacologically active phytochemicals are described below, with examples of medicinal plants that contain them. Human settlements are often surrounded by weeds containing phytochemicals, such as nettle, dandelion and chickweed.
Many phytochemicals, including curcumin, epigallocatechin gallate, genistein, and resveratrol are pan-assay interference compounds and are not useful in drug discovery.
Alkaloids
Alkaloids are bitter-tasting chemicals, widespread in nature, and often toxic. There are several classes with different modes of action as drugs, both recre
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How are the major families of organic compounds characterized?
A. their visual groups
B. their functional groups
C. their thermal groups
D. their optic groups
Answer:
|
|
sciq-5576
|
multiple_choice
|
Power in electricity is the voltage multiplied by what?
|
[
"the current",
"amperes",
"wattage",
"power"
] |
A
|
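The relation tested by this record is the basic power law P = V · I. A minimal sketch (hypothetical helper name, not from any source document):

```python
# Electrical power P (watts) from voltage V (volts) and current I (amperes).
def electrical_power(voltage_volts, current_amperes):
    return voltage_volts * current_amperes

print(electrical_power(230.0, 2.0))  # 460.0 (watts)
```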
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 2:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 3:::
Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary educ
Document 4:::
There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework.
AP Physics 1 and 2
AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge.
AP Physics 1
AP Physics 1 covers Newtonian mechanics, including:
Unit 1: Kinematics
Unit 2: Dynamics
Unit 3: Circular Motion and Gravitation
Unit 4: Energy
Unit 5: Momentum
Unit 6: Simple Harmonic Motion
Unit 7: Torque and Rotational Motion
Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2.
AP Physics 2
AP Physics 2 covers the following topics:
Unit 1: Fluids
Unit 2: Thermodynamics
Unit 3: Electric Force, Field, and Potential
Unit 4: Electric Circuits
Unit 5: Magnetism and Electromagnetic Induction
Unit 6: Geometric and Physical Optics
Unit 7: Quantum, Atomic, and Nuclear Physics
AP Physics C
From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Power in electricity is the voltage multiplied by what?
A. the current
B. amperes
C. wattage
D. power
Answer:
|
|
sciq-10670
|
multiple_choice
|
Earthworms and ants possess what type of bodies, which means division into multiple parts?
|
[
"elliptical",
"elongated",
"truncated",
"segmented"
] |
D
|
Relevant Documents:
Document 0:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certificate (CSEC), which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on the structure and functioning (anatomy, physiology, biochemistry) of the human body and the relevance to human health, with Caribbean-specific experience. The syllabus is organized under five main sections: living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, and the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 1:::
In biology, metamerism is the phenomenon of having a linear series of body segments fundamentally similar in structure, though not all such structures are entirely alike in any single life form because some of them perform special functions.
In animals, metameric segments are referred to as somites or metameres. In plants, they are referred to as metamers or, more concretely, phytomers.
In animals
In animals, zoologists define metamery as a mesodermal event resulting in serial repetition of unit subdivisions of ectoderm and mesoderm products. Endoderm is not involved in metamery. Segmentation is not the same concept as metamerism: segmentation can be confined only to ectodermally derived tissue, e.g., in the Cestoda tapeworms. Metamerism is far more important biologically since it results in metameres - also called somites - that play a critical role in advanced locomotion.
One can divide metamerism into two main categories:
homonomous metamery is a strict serial succession of metameres. It can be grouped into two more classifications known as pseudometamerism and true metamerism. An example of pseudometamerism is in the class Cestoda. The tapeworm is composed of many repeating segments - primarily for reproduction and basic nutrient exchange. Each segment acts independently from the others, which is why it is not considered true metamerism. Another worm, the earthworm in phylum Annelida, can exemplify true metamerism. In each segment of the worm, a repetition of organs and muscle tissue can be found. What differentiates the Annelids from Cestoda is that the segments in the earthworm all work together for the whole organism. It is believed that segmentation evolved for many reasons, including a higher degree of motion. Taking the earthworm, for example: the segmentation of the muscular tissue allows the worm to move in an inching pattern. The circular muscles work to allow the segments to elongate one by one, and the longitudinal muscles then work to shorten th
Document 2:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591.
On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 3:::
The Mathematics Subject Classification (MSC) is an alphanumerical classification scheme that has collaboratively been produced by staff of, and based on the coverage of, the two major mathematical reviewing databases, Mathematical Reviews and Zentralblatt MATH. The MSC is used by many mathematics journals, which ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The current version is MSC2020.
Structure
The MSC is a hierarchical scheme, with three levels of structure. A classification can be two, three or five digits long, depending on how many levels of the classification scheme are used.
The first level is represented by a two-digit number, the second by a letter, and the third by another two-digit number. For example:
53 is the classification for differential geometry
53A is the classification for classical differential geometry
53A45 is the classification for vector and tensor analysis
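The three-level structure described above can be sketched as a tiny parser. This is a hypothetical helper for illustration only, not an official MSC tool:

```python
# Split a Mathematics Subject Classification code such as "53A45" into its
# three levels: two-digit discipline, one-letter area, two-digit topic.
def parse_msc(code):
    """Return (first, second, third) levels; missing levels are None."""
    first = code[:2]                                # e.g. "53" (differential geometry)
    second = code[2] if len(code) >= 3 else None    # e.g. "A" (classical diff. geometry)
    third = code[3:5] if len(code) == 5 else None   # e.g. "45" (vector/tensor analysis)
    return first, second, third

print(parse_msc("53"))     # ('53', None, None)
print(parse_msc("53A"))    # ('53', 'A', None)
print(parse_msc("53A45"))  # ('53', 'A', '45')
```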
First level
At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. In addition to the typical areas of mathematical research, there are top-level categories for "History and Biography", "Mathematics Education", and for the overlap with different sciences. Physics (i.e. mathematical physics) is particularly well represented in the classification scheme with a number of different categories including:
Fluid mechanics
Quantum mechanics
Geophysics
Optics and electromagnetic theory
All valid MSC classification codes must have at least the first-level identifier.
Second level
The second-level codes are a single letter from the Latin alphabet. These represent specific areas covered by the first-level discipline. The second-level codes vary from discipline to discipline.
For example, for differential geometry, the top-level code is 53, and the second-level codes are:
A for classical differential geometry
B for local differential geometry
C for glo
Document 4:::
Advanced Level (A-Level) Mathematics is a qualification of further education taken in the United Kingdom (and occasionally other countries as well). In the UK, A-Level exams are traditionally taken by 17-18 year-olds after a two-year course at a sixth form or college. Advanced Level Further Mathematics is often taken by students who wish to study a mathematics-based degree at university, or related degree courses such as physics or computer science.
Like other A-level subjects, mathematics has been assessed in a modular system since the introduction of Curriculum 2000, whereby each candidate must take six modules, with the best achieved score in each of these modules (after any retake) contributing to the final grade. Most students will complete three modules in one year, which will create an AS-level qualification in their own right and will complete the A-level course the following year—with three more modules.
The system in which mathematics is assessed is changing for students starting courses in 2017 (as part of the A-level reforms first introduced in 2015), where the reformed specifications have reverted to a linear structure with exams taken only at the end of the course in a single sitting.
In addition, while schools could choose freely between Statistics, Mechanics, or Discrete Mathematics (also known as Decision Mathematics) modules, with the ability to specialise in one branch of applied mathematics under the older modular specification, the new specifications make both Mechanics and Statistics compulsory, with Discrete Mathematics available only as an option for students pursuing a Further Mathematics course. The first assessment opportunities for the new specification are 2018 and 2019 for A-levels in Mathematics and Further Mathematics, respectively.
2000s specification
Prior to the 2017 reform, the basic A-Level course consisted of six modules, four pure modules (C1, C2, C3, and C4) and two applied modules in Statistics, Mechanics
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Earthworms and ants possess what type of bodies, which means division into multiple parts?
A. elliptical
B. elongated
C. truncated
D. segmented
Answer:
|
|
sciq-1674
|
multiple_choice
|
What is the number of protons in the nucleus?
|
[
"atomic number",
"metallic number",
"atomic mass",
"element"
] |
A
|
Relevant Documents:
Document 0:::
Nuclear density is the density of the nucleus of an atom. For heavy nuclei, it is close to the nuclear saturation density n0 ≈ 0.15 nucleons/fm3, which minimizes the energy density of an infinite nuclear matter. The nuclear saturation mass density is thus ρ0 = n0 mu ≈ 2.5×10^17 kg/m3, where mu is the atomic mass constant. The descriptive term nuclear density is also applied to situations where similarly high densities occur, such as within neutron stars.
Evaluation
The nuclear density of a typical nucleus can be approximately calculated from the size of the nucleus, which itself can be approximated based on the number of protons and neutrons in it. The radius of a typical nucleus, in terms of number of nucleons, is
R = r0 A^(1/3)
where A is the mass number and r0 is 1.25 fm, with typical deviations of up to 0.2 fm from this value. The number density of the nucleus is thus:
n = A / ((4/3) π R^3)
The density for any typical nucleus, in terms of mass number, is thus constant, not dependent on A or R, theoretically:
n = 3 / (4 π r0^3) ≈ 0.122 nucleons/fm^3
The experimentally determined value for the nuclear saturation density is
n0 ≈ 0.15 nucleons/fm^3
The mass density ρ is the product of the number density n by the particle's mass. The calculated mass density, using a nucleon mass of mn = 1.67×10^−27 kg, is thus:
ρ ≈ 2.0×10^17 kg/m^3 (using the theoretical estimate)
or
ρ ≈ 2.5×10^17 kg/m^3 (using the experimental value).
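The rule-of-thumb estimate above (R = r0 · A^(1/3) with r0 = 1.25 fm, giving an A-independent number density n = 3 / (4π r0³)) can be reproduced numerically. A minimal sketch under those stated assumptions:

```python
import math

R0_FM = 1.25                 # nuclear radius constant r0, in fm
NUCLEON_MASS_KG = 1.67e-27   # approximate nucleon mass, in kg

def nuclear_radius_fm(mass_number):
    """Rule-of-thumb nuclear radius R = r0 * A**(1/3), in fm."""
    return R0_FM * mass_number ** (1.0 / 3.0)

def number_density_per_fm3():
    """Nucleon number density n = 3 / (4 * pi * r0**3); independent of A."""
    return 3.0 / (4.0 * math.pi * R0_FM ** 3)

n = number_density_per_fm3()       # ~0.122 nucleons/fm^3
rho = n * 1e45 * NUCLEON_MASS_KG   # fm^-3 -> m^-3, then multiply by mass
print(round(n, 3))                 # ~0.122
print(f"{rho:.2e}")                # ~2e17 kg/m^3
```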
Applications and extensions
The components of an atom and of a nucleus have varying densities. The proton is not a fundamental particle, being composed of quark–gluon matter. Its size is approximately 10−15 meters and its density 1018 kg/m3. The descriptive term nuclear density is also applied to situations where similarly high densities occur, such as within neutron stars.
Using deep inelastic scattering, it has been estimated that the "size" of an electron, if it is not a point particle, must be less than 10−17 meters. This would correspond to a density of roughly 1021 kg/m3.
There are possibilities for still-higher densities when it comes to quark matter. In the near future, the highest experimentally measur
Document 1:::
Isotopes are distinct nuclear species (or nuclides, as technical term) of the same chemical element. They have the same atomic number (number of protons in their nuclei) and position in the periodic table (and hence belong to the same chemical element), but differ in nucleon numbers (mass numbers) due to different numbers of neutrons in their nuclei. While all isotopes of a given element have almost the same chemical properties, they have different atomic masses and physical properties.
The term isotope is formed from the Greek roots isos (ἴσος "equal") and topos (τόπος "place"), meaning "the same place"; thus, the meaning behind the name is that different isotopes of a single element occupy the same position on the periodic table. It was coined by Scottish doctor and writer Margaret Todd in 1913 in a suggestion to the British chemist Frederick Soddy.
The number of protons within the atom's nucleus is called its atomic number and is equal to the number of electrons in the neutral (non-ionized) atom. Each atomic number identifies a specific element, but not the isotope; an atom of a given element may have a wide range in its number of neutrons. The number of nucleons (both protons and neutrons) in the nucleus is the atom's mass number, and each isotope of a given element has a different mass number.
For example, carbon-12, carbon-13, and carbon-14 are three isotopes of the element carbon with mass numbers 12, 13, and 14, respectively. The atomic number of carbon is 6, which means that every carbon atom has 6 protons so that the neutron numbers of these isotopes are 6, 7, and 8 respectively.
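The carbon-isotope arithmetic above (neutron number N = mass number A − atomic number Z) can be sketched in a few lines; the helper name is illustrative only:

```python
# Neutron number N = mass number A - atomic number Z.
def neutron_count(mass_number, atomic_number):
    return mass_number - atomic_number

CARBON_Z = 6  # every carbon atom has 6 protons
for a in (12, 13, 14):
    print(f"carbon-{a}: {neutron_count(a, CARBON_Z)} neutrons")
```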
Isotope vs. nuclide
A nuclide is a species of an atom with a specific number of protons and neutrons in the nucleus, for example, carbon-13 with 6 protons and 7 neutrons. The nuclide concept (referring to individual nuclear species) emphasizes nuclear properties over chemical properties, whereas the isotope concept (grouping all atoms of each element) emphasizes chemical over
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
The subatomic scale is the domain of physical size that encompasses objects smaller than an atom. It is the scale at which the atomic constituents, such as the nucleus containing protons and neutrons, and the electrons in their orbitals, become apparent.
The subatomic scale includes the many thousands of times smaller subnuclear scale, which is the scale of physical size at which constituents of the protons and neutrons - particularly quarks - become apparent.
See also
Astronomical scale the opposite end of the spectrum
Subatomic particles
Document 4:::
The proton radius puzzle is an unanswered problem in physics relating to the size of the proton. Historically, the proton charge radius was measured by two independent methods, which converged to a value of about 0.877 femtometres (1 fm = 10⁻¹⁵ m). This value was challenged by a 2010 experiment using a third method, which produced a radius about 4% smaller than this, at 0.842 femtometres. New experimental results reported in the autumn of 2019 agree with the smaller measurement, as does a re-analysis of older data published in 2022. While some believe that this difference has been resolved, this opinion is not yet universally held.
Problem
Prior to 2010, the proton charge radius was measured using one of two methods: one relying on spectroscopy, and one relying on nuclear scattering.
Spectroscopy method
The spectroscopy method uses the energy levels of electrons orbiting the nucleus. The exact values of the energy levels are sensitive to the distribution of charge in the nucleus. For hydrogen, whose nucleus consists only of one proton, this indirectly measures the proton charge radius. Measurements of hydrogen's energy levels are now so precise that the accuracy of the proton radius is the limiting factor when comparing experimental results to theoretical calculations. This method produces a proton radius of about , with approximately 1% relative uncertainty.
Nuclear scattering
The nuclear method is similar to Rutherford's scattering experiments that established the existence of the nucleus. Small particles such as electrons can be fired at a proton, and by measuring how the electrons are scattered, the size of the proton can be inferred. Consistent with the spectroscopy method, this produces a proton radius of about .
2010 experiment
In 2010, Pohl et al. published the results of an experiment relying on muonic hydrogen as opposed to normal hydrogen. Conceptually, this is similar to the spectroscopy method. However, the much higher mass of a muon causes it to orb
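The passage is cut off, but the key scaling can be sketched (a standard back-of-the-envelope result, not taken from the passage): the muon's larger mass shrinks the orbit and greatly enhances sensitivity to nuclear size.

```latex
% Bohr radius of a hydrogen-like system with reduced mass m_r:
a = \frac{\hbar}{m_r c\,\alpha}, \qquad m_\mu \approx 207\,m_e,
% so the muonic-hydrogen orbit is roughly 200 times smaller than
% in ordinary hydrogen. The finite-nuclear-size energy shift scales
% with the wavefunction density at the origin,
% |\psi(0)|^2 \propto m_r^3,
% making muonic hydrogen about 10^7 times more sensitive to the
% proton charge radius.
```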
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the number of protons in the nucleus?
A. atomic number
B. metallic number
C. atomic mass
D. element
Answer:
|
|
sciq-5456
|
multiple_choice
|
All chordates are deuterostomes possessing a what?
|
[
"notochord",
"endoderm",
"zygote",
"hydrochord"
] |
A
|
Relevant Documents:
Document 0:::
N. europaea cells are short rods with pointed ends, (0.8–1.1 x 1.0–1.7) µm in size; motility has not been observed.
N. eutropha cells are rod- to pear-shaped with one or both ends pointed, (1.0–1.3 x 1.6–2.3) µm in size. They show motility.
N. halophila cells have a coccoid shap
Document 1:::
In the field of developmental biology, regional differentiation is the process by which different areas are identified in the development of the early embryo. The process by which the cells become specified differs between organisms.
Cell fate determination
In terms of developmental commitment, a cell can either be specified or it can be determined. Specification is the first stage in differentiation. A cell that is specified can have its commitment reversed while the determined state is irreversible. There are two main types of specification: autonomous and conditional. A cell specified autonomously will develop into a specific fate based upon cytoplasmic determinants with no regard to the environment the cell is in. A cell specified conditionally will develop into a specific fate based upon other surrounding cells or morphogen gradients. Another type of specification is syncytial specification, characteristic of most insect classes.
Specification in sea urchins uses both autonomous and conditional mechanisms to determine the anterior/posterior axis. The anterior/posterior axis lies along the animal/vegetal axis set up during cleavage. The micromeres induce the nearby tissue to become endoderm while the animal cells are specified to become ectoderm. The animal cells are not determined because the micromeres can induce the animal cells to also take on mesodermal and endodermal fates. It was observed that β-catenin was present in the nuclei at the vegetal pole of the blastula. Through a series of experiments, one study confirmed the role of β-catenin in the cell-autonomous specification of vegetal cell fates and the micromeres inducing ability. Treatments of lithium chloride sufficient to vegetalize the embryo resulted in increases in nuclearly localized b-catenin. Reduction of expression of β-catenin in the nucleus correlated with loss of vegetal cell fates. Transplants of micromeres lacking nuclear accumulation of β-catenin were unable to induce a second axis.
Document 2:::
An equivalence group is a set of unspecified cells that have the same developmental potential or ability to adopt various fates. Our current understanding suggests that equivalence groups are limited to cells of the same ancestry, also known as sibling cells. Often, cells of an equivalence group adopt different fates from one another.
Equivalence groups assume various potential fates in two general, non-mutually exclusive ways. One mechanism, induction, occurs when a signal originating from outside of the equivalence group specifies a subset of the naïve cells. Another mode, known as lateral inhibition, arises when a signal within an equivalence group causes one cell to adopt a dominant fate while others in the group are inhibited from doing so. In many examples of equivalence groups, both induction and lateral inhibition are used to define patterns of distinct cell types.
Cells of an equivalence group that do not receive a signal adopt a default fate. Alternatively, cells that receive a signal take on different fates. At a certain point, the fates of cells within an equivalence group become irreversibly determined, thus they lose their multipotent potential. The following provides examples of equivalence groups studied in nematodes and ascidians.
Vulva Precursor Cell Equivalence Group
Introduction
A classic example of an equivalence group is the vulva precursor cells (VPCs) of nematodes. In Caenorhabditis elegans, self-fertilized eggs exit the body through the vulva. This organ develops from a subset of cells of an equivalence group consisting of six VPCs, P3.p-P8.p, which lie ventrally along the anterior-posterior axis. In this example a single overlying somatic cell, the anchor cell, induces nearby VPCs to take on vulva fates 1° (P6.p) and 2° (P5.p and P7.p). VPCs that are not induced form the 3° lineage (P3.p, P4.p and P8.p), which make epidermal cells that fuse to a large syncytial epidermis.
The six VPCs form an equivalence group beca
Document 3:::
Segmentation in biology is the division of some animal and plant body plans into a linear series of repetitive segments that may or may not be interconnected to each other. This article focuses on the segmentation of animal body plans, specifically using the examples of the taxa Arthropoda, Chordata, and Annelida. These three groups form segments by using a "growth zone" to direct and define the segments. While all three have a generally segmented body plan and use a growth zone, they use different mechanisms for generating this patterning. Even within these groups, different organisms have different mechanisms for segmenting the body. Segmentation of the body plan is important for allowing free movement and development of certain body parts. It also allows for regeneration in specific individuals.
Definition
Segmentation is a difficult process to satisfactorily define. Many taxa (for example the molluscs) have some form of serial repetition in their units but are not conventionally thought of as segmented. Segmented animals are those considered to have organs that were repeated, or to have a body composed of self-similar units, but usually it is the parts of an organism that are referred to as being segmented.
Embryology
Segmentation in animals typically falls into three types, characteristic of different arthropods, vertebrates, and annelids. Arthropods such as the fruit fly form segments from a field of equivalent cells based on transcription factor gradients. Vertebrates like the zebrafish use oscillating gene expression to define segments known as somites. Annelids such as the leech use smaller blast cells budded off from large teloblast cells to define segments.
Arthropods
Although Drosophila segmentation is not representative of the arthropod phylum in general, it is the most highly studied. Early screens to identify genes involved in cuticle development led to the discovery of a class of genes that was necessary for proper segmentation of the Drosophila
Document 4:::
Dexiothetism refers to a reorganisation of a clade's bauplan, with right becoming ventral and left becoming dorsal. The organism would then recruit a new left hand side.
Details
If a bilaterally symmetrical ancestor were to become affixed by its right hand side, it would occlude all features on that side. When that organism wanted to become secondarily bilaterally symmetrical again, it would be forced to resculpt its new left and right hand sides from the old left hand side. The end result is a bilaterally symmetrical animal, but with its dorsoventral axis rotated a quarter of a turn.
Implications
Dexiothetism has been implicated in the origin of the unusual embryology of the cephalochordate amphioxus, whereby its gill slits originate on the left hand side and then migrate to the right hand side.
In Jefferies' Calcichordate Theory, he supposes that all chordates and their mitrate ancestors are dexiothetic.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
All chordates are deuterostomes possessing a what?
A. notochord
B. endoderm
C. zygote
D. hydrochord
Answer:
|
|
sciq-6864
|
multiple_choice
|
What is the smallest volcanic landform that is formed from accumulation of many small fragments of ejected material?
|
[
"concave cones",
"log cones",
"edifice cones",
"cinder cones"
] |
D
|
Relevant Documents:
Document 0:::
Maui Nui is a modern geologists' name given to a prehistoric Hawaiian island and the corresponding modern biogeographic region. Maui Nui is composed of four modern islands: Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe. Administratively, the four modern islands comprise Maui County (and a tiny part of Molokaʻi called Kalawao County). Long after the breakup of Maui Nui, the four modern islands retained plant and animal life similar to each other. Thus, Maui Nui is not only a prehistoric island but also a modern biogeographic region.
Geology
Maui Nui formed and broke up during the Pleistocene Epoch, which lasted from about 2.58 million to 11,700 years ago.
Maui Nui is built from seven shield volcanoes. The three oldest are Penguin Bank, West Molokaʻi, and East Molokaʻi, which probably range from slightly over to slightly less than 2 million years old. The four younger volcanoes are Lāna‘i, West Maui, Kaho‘olawe, and Haleakalā, which probably formed between 1.5 and 2 million years ago.
At its prime 1.2 million years ago, Maui Nui was , 50% larger than today's Hawaiʻi Island. The island of Maui Nui included four modern islands (Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe) and landmass west of Molokaʻi called Penguin Bank, which is now completely submerged.
Maui Nui broke up as rising sea levels flooded the connections between the volcanoes. The breakup was complex because global sea levels rose and fell intermittently during the Quaternary glaciation. About 600,000 years ago, the connection between Molokaʻi and the island of Lāna‘i/Maui/Kahoʻolawe became intermittent. About 400,000 years ago, the connection between Lāna‘i and Maui/Kahoʻolawe also became intermittent. The connection between Maui and Kahoʻolawe was permanently broken between 200,000 and 150,000 years ago. Maui, Lāna‘i, and Molokaʻi were connected intermittently thereafter, most recently about 18,000 years ago during the Last Glacial Maximum.
Today, the sea floor between these four islands is relatively shallow
Document 1:::
In marine geology, a guyot (), also called a tablemount, is an isolated underwater volcanic mountain (seamount) with a flat top more than below the surface of the sea. The diameters of these flat summits can exceed . Guyots are most commonly found in the Pacific Ocean, but they have been identified in all the oceans except the Arctic Ocean. They are analogous to tables (such as mesas) on land.
History
Guyots were first recognized in 1945 by Harry Hammond Hess, who collected data using echo-sounding equipment on a ship he commanded during World War II. His data showed that some undersea mountains had flat tops. Hess called these undersea mountains "guyots", after the 19th-century geographer Arnold Henry Guyot. Hess postulated they were once volcanic islands that were beheaded by wave action, yet they are now deep under sea level. This idea was used to help bolster the theory of plate tectonics.
Formation
Guyots show evidence of having once been above the surface, with gradual subsidence through stages from fringed reefed mountain, coral atoll, and finally a flat-topped submerged mountain. Seamounts are made by extrusion of lavas piped upward in stages from sources within the Earth's mantle, usually hotspots, to vents on the seafloor. The volcanism invariably ceases after a time, and other processes dominate. When an undersea volcano grows high enough to be near or breach the ocean surface, wave action and/or coral reef growth tend to create a flat-topped edifice. However, all ocean crust and guyots form from hot magma and/or rock, which cools over time. As the lithosphere that the future guyot rides on slowly cools, it becomes denser and sinks lower into Earth's mantle, through the process of isostasy. In addition, the erosive effects of waves and currents are found mostly near the surface: the tops of guyots generally lie below this higher-erosion zone.
This is the same process that gives rise to higher seafloor topography at oceanic ridges, such as the Mid
Document 2:::
The Malpelo Ridge () is an elevated part of Nazca Plate off the Pacific coast of Colombia. It is a faulted chain of volcanic rock of tholeiitic composition. The Malpelo Ridge may have originated simultaneously as Carnegie Ridge, and thus represent an old continuation of Cocos Ridge. It is thought to have acquired it present position due to tectonic movements along the Panama Fracture Zone.
Document 3:::
Darwin Mounds is a large field of undersea sand mounds situated off the north west coast of Scotland that were first discovered in May 1998. They provide a unique habitat for ancient deep water coral reefs and were found using remote sensing techniques during surveys funded by the oil industry and steered by the joint industry and United Kingdom government group the Atlantic Frontier Environment Network (AFEN) (Masson and Jacobs 1998). The mounds were named after the research vessel, itself named for the eminent naturalist and evolutionary theorist Charles Darwin.
The mounds are about below the surface of the North Atlantic ocean, approximately north-west of Cape Wrath, the north-west tip of mainland Scotland. There are hundreds of mounds in the field, which in total cover approximately . Individual mounds are typically circular, up to high and wide. Most of the mounds are also distinguished by the presence of an additional feature referred to as a 'tail'. The tails are of a variable extent and may merge with others, but are generally a teardrop shape and are orientated south-west of the mound. The mound-tail feature of the Darwin Mounds is apparently unique globally.
Composition
The mounds are mostly sand, currently interpreted as "sand volcanoes". These features are caused when fluidised sand "de-waters" and the fluid bubbles up through the sand, pushing the sediment up into a cone shape. Sand volcanoes are common in the Devonian fossil record in UK, and in seismically active areas of the planet. In this case, tectonic activity is unlikely; some form of slumping on the south-west side of the undersea (Wyville-Thomson) Ridge being a more likely cause. The tops of the mounds have living stands of Lophelia and blocky rubble (interpreted as coral debris). The mounds provide one of the largest known northerly cold-water habitats for coral species. The mounds are also unusual in that Lophelia pertusa, a cold water coral, appears to be growing on sand rather than a
Document 4:::
The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata. That is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events.
Correlating the rock record
At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location, one that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods for the particular geographic region or regions. The geologic record is in no one place entirely complete, for where geologic forces of one age provide a low-lying region accumulating deposits much like a layer cake, those of the next may have uplifted the region, so that the same area is instead one that is weathering and being torn down by chemistry, wind, temperature, and water. This is to say that in a given location the geologic record can be, and quite often is, interrupted as the ancient local environment is converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go deep, thoroughly support the law of superposition.
However using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the smallest volcanic landform that is formed from accumulation of many small fragments of ejected material?
A. concave cones
B. log cones
C. edifice cones
D. cinder cones
Answer:
|
|
sciq-2675
|
multiple_choice
|
What is the process of getting oxygen into the body & releasing carbon dioxide called?
|
[
"perspiration",
"precipitation",
"respiration",
"photosynthesis"
] |
C
|
Relevant Documents:
Document 0:::
Exhalation (or expiration) is the flow of the breath out of an organism. In animals, it is the movement of air from the lungs out of the airways, to the external environment during breathing.
This happens due to elastic properties of the lungs, as well as the internal intercostal muscles which lower the rib cage and decrease thoracic volume. As the thoracic diaphragm relaxes during exhalation it causes the tissue it has depressed to rise superiorly and put pressure on the lungs to expel the air. During forced exhalation, as when blowing out a candle, expiratory muscles including the abdominal muscles and internal intercostal muscles generate abdominal and thoracic pressure, which forces air out of the lungs.
Exhaled air is 4% carbon dioxide, a waste product of cellular respiration during the production of energy, which is stored as ATP. Exhalation has a complementary relationship to inhalation which together make up the respiratory cycle of a breath.
Exhalation and gas exchange
The main reason for exhalation is to rid the body of carbon dioxide, which is the waste product of gas exchange in humans. Air is brought into the body through inhalation. During this process air is taken in by the lungs. Diffusion in the alveoli allows for the exchange of O2 into the pulmonary capillaries and the removal of CO2 and other gases from the pulmonary capillaries to be exhaled. In order for the lungs to expel air the diaphragm relaxes, which pushes up on the lungs. The air then flows through the trachea then through the larynx and pharynx to the nasal cavity and oral cavity where it is expelled out of the body. Exhalation takes longer than inhalation and it is believed to facilitate better exchange of gases. Parts of the nervous system help to regulate respiration in humans. The exhaled air is not just carbon dioxide; it contains a mixture of other gases. Human breath contains volatile organic compounds (VOCs). These compounds consist of methanol, isoprene, acetone,
Document 1:::
In respiratory physiology, the oxygen cascade describes the flow of oxygen from air to mitochondria, where it is consumed in aerobic respiration to release energy. Oxygen flows from areas with high partial pressure of oxygen (PO2, also known as oxygen tension) to areas of lower PO2.
Air is typically around 21% oxygen, and at sea level, the PO2 of air is typically around 159 mmHg. Humidity dilutes the concentration of oxygen in air. As air is inhaled into the lungs, it mixes with water and exhaust gasses including CO2, further diluting the oxygen concentration and lowering the PO2. As oxygen continues to flow down the concentration gradient from areas of higher concentration to areas of lower concentration, it must pass through barriers such as the alveoli walls, capillary walls, capillary blood plasma, red blood cell membrane, interstitial space, other cell membranes, and cell cytoplasm. The partial pressure of oxygen drops across each barrier.
Table
Table 1 gives the example of a typical oxygen cascade for skeletal muscle of a healthy, adult male at rest who is breathing air at atmospheric pressure at sea level. Actual values in a person may vary widely due to ambient conditions, health status, tissue type, and metabolic demands.
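The stepwise pressure drops the passage describes can be made concrete with the standard inspired-air correction and the alveolar gas equation. The values below are textbook sea-level defaults, not the passage's Table 1; a minimal sketch:

```python
# Sketch of the first steps of the oxygen cascade (assumed textbook
# values: sea level, 21% O2, body-temperature water vapor 47 mmHg).

def inspired_po2(fio2=0.21, barometric=760.0, water_vapor=47.0):
    """PO2 (mmHg) of inspired air after humidification in the airways."""
    return fio2 * (barometric - water_vapor)

def alveolar_po2(fio2=0.21, barometric=760.0, water_vapor=47.0,
                 paco2=40.0, rq=0.8):
    """Alveolar gas equation: PAO2 = PiO2 - PaCO2 / RQ."""
    return inspired_po2(fio2, barometric, water_vapor) - paco2 / rq

print(f"dry air PO2:   {0.21 * 760.0:.0f} mmHg")    # 160
print(f"inspired PO2:  {inspired_po2():.0f} mmHg")  # 150
print(f"alveolar PO2:  {alveolar_po2():.0f} mmHg")  # 100
```

Each step lowers PO2, matching the passage's point that oxygen flows down successive partial-pressure drops from air to mitochondria.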
See also
Alveolar–arterial gradient
Alveolar gas equation
Blood gas tension
Document 2:::
Breathing (spiration or ventilation) is the process of moving air into and from the lungs to facilitate gas exchange with the internal environment, mostly to flush out carbon dioxide and bring in oxygen.
All aerobic creatures need oxygen for cellular respiration, which extracts energy from the reaction of oxygen with molecules derived from food and produces carbon dioxide as a waste product. Breathing, or external respiration, brings air into the lungs where gas exchange takes place in the alveoli through diffusion. The body's circulatory system transports these gases to and from the cells, where cellular respiration takes place.
The breathing of all vertebrates with lungs consists of repetitive cycles of inhalation and exhalation through a highly branched system of tubes or airways which lead from the nose to the alveoli. The number of respiratory cycles per minute is the breathing or respiratory rate, and is one of the four primary vital signs of life. Under normal conditions the breathing depth and rate is automatically, and unconsciously, controlled by several homeostatic mechanisms which keep the partial pressures of carbon dioxide and oxygen in the arterial blood constant. Keeping the partial pressure of carbon dioxide in the arterial blood unchanged under a wide variety of physiological circumstances, contributes significantly to tight control of the pH of the extracellular fluids (ECF). Over-breathing (hyperventilation) and under-breathing (hypoventilation), which decrease and increase the arterial partial pressure of carbon dioxide respectively, cause a rise in the pH of ECF in the first case, and a lowering of the pH in the second. Both cause distressing symptoms.
Breathing has other important functions. It provides a mechanism for speech, laughter and similar expressions of the emotions. It is also used for reflexes such as yawning, coughing and sneezing. Animals that cannot thermoregulate by perspiration, because they lack sufficient sweat glands, may
Document 3:::
Excretion is a process in which metabolic waste is eliminated from an organism. In vertebrates this is primarily carried out by the lungs, kidneys, and skin. This is in contrast with secretion, where the substance may have specific tasks after leaving the cell. Excretion is an essential process in all forms of life. For example, in mammals, urine is expelled through the urethra, which is part of the excretory system. In unicellular organisms, waste products are discharged directly through the surface of the cell.
During life activities such as cellular respiration, several chemical reactions take place in the body. These are known as metabolism. These chemical reactions produce waste products such as carbon dioxide, water, salts, urea and uric acid. Accumulation of these wastes beyond a level inside the body is harmful to the body. The excretory organs remove these wastes. This process of removal of metabolic waste from the body is known as excretion.
Green plants excrete carbon dioxide and water as respiratory products. In green plants, the carbon dioxide released during respiration gets used during photosynthesis. Oxygen is a by product generated during photosynthesis, and exits through stomata, root cell walls, and other routes. Plants can get rid of excess water by transpiration and guttation. It has been shown that the leaf acts as an 'excretophore' and, in addition to being a primary organ of photosynthesis, is also used as a method of excreting toxic wastes via diffusion. Other waste materials that are exuded by some plants — resin, saps, latex, etc. are forced from the interior of the plant by hydrostatic pressures inside the plant and by absorptive forces of plant cells. These latter processes do not need added energy, they act passively. However, during the pre-abscission phase, the metabolic levels of a leaf are high. Plants also excrete some waste substances into the soil around them.
In animals, the main excretory products are carbon dioxide, ammoni
Document 4:::
In physiology, respiration is the movement of oxygen from the outside environment to the cells within tissues, and the removal of carbon dioxide in the opposite direction, that is, to the environment.
The physiological definition of respiration differs from the biochemical definition, which refers to a metabolic process by which an organism obtains energy (in the form of ATP and NADPH) by oxidizing nutrients and releasing waste products. Although physiologic respiration is necessary to sustain cellular respiration and thus life in animals, the processes are distinct: cellular respiration takes place in individual cells of the organism, while physiologic respiration concerns the diffusion and transport of metabolites between the organism and the external environment.
Gas exchange in the lungs occurs by ventilation and perfusion. Ventilation refers to the movement of air into and out of the lungs, and perfusion is the circulation of blood in the pulmonary capillaries. In mammals, physiological respiration involves respiratory cycles of inhaled and exhaled breaths. Inhalation (breathing in) is usually an active movement that brings air into the lungs, where gas exchange takes place between the air in the alveoli and the blood in the pulmonary capillaries. Contraction of the diaphragm muscle causes a pressure variation, which is equal to the pressures caused by the elastic, resistive and inertial components of the respiratory system. In contrast, exhalation (breathing out) is usually a passive process, though there are many exceptions: when generating functional overpressure (speaking, singing, humming, laughing, blowing, snorting, sneezing, coughing, powerlifting); when exhaling underwater (swimming, diving); at high levels of physiological exertion (running, climbing, throwing) where more rapid gas exchange is necessitated; or in some forms of breath-controlled meditation. Speaking and singing in humans requires sustained breath control that many mammals are not
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the process of getting oxygen into the body & releasing carbon dioxide called?
A. perspiration
B. precipitation
C. respiration
D. photosynthesis
Answer:
|
|
sciq-5944
|
multiple_choice
|
How is sediment transported?
|
[
"currents",
"landslides",
"storms",
"winds"
] |
A
|
Relevant Documents:
Document 0:::
Sediment transport is the movement of solid particles (sediment), typically due to a combination of gravity acting on the sediment, and the movement of the fluid in which the sediment is entrained. Sediment transport occurs in natural systems where the particles are clastic rocks (sand, gravel, boulders, etc.), mud, or clay; the fluid is air, water, or ice; and the force of gravity acts to move the particles along the sloping surface on which they are resting. Sediment transport due to fluid motion occurs in rivers, oceans, lakes, seas, and other bodies of water due to currents and tides. Transport is also caused by glaciers as they flow, and on terrestrial surfaces under the influence of wind. Sediment transport due only to gravity can occur on sloping surfaces in general, including hillslopes, scarps, cliffs, and the continental shelf—continental slope boundary.
Sediment transport is important in the fields of sedimentary geology, geomorphology, civil engineering, hydraulic engineering and environmental engineering (see applications, below). Knowledge of sediment transport is most often used to determine whether erosion or deposition will occur, the magnitude of this erosion or deposition, and the time and distance over which it will occur.
Mechanisms
Aeolian
Aeolian or eolian (depending on the parsing of æ) is the term for sediment transport by wind. This process results in the formation of ripples and sand dunes. Typically, the size of the transported sediment is fine sand (<1 mm) and smaller, because air is a fluid with low density and viscosity, and can therefore not exert very much shear on its bed.
Bedforms are generated by aeolian sediment transport in the terrestrial near-surface environment. Ripples and dunes form as a natural self-organizing response to sediment transport.
Aeolian sediment transport is common on beaches and in the arid regions of the world, because it is in these environments that vegetation does not prevent the presence and motion
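The point that air, with its low density and viscosity, carries only fine grains can be complemented by a settling-speed comparison. A rough sketch using Stokes' law (my illustrative values, not from the passage; Stokes' law is strictly valid only at low particle Reynolds number):

```python
# Stokes settling velocity of a small sphere in a viscous fluid:
#   w_s = (rho_p - rho_f) * g * d**2 / (18 * mu)
# Illustrative comparison of a fine-sand grain in water vs. air.

G = 9.81  # gravitational acceleration, m/s^2

def stokes_settling_velocity(d, rho_p, rho_f, mu):
    """Terminal fall speed (m/s) of a sphere of diameter d (m)."""
    return (rho_p - rho_f) * G * d**2 / (18.0 * mu)

d = 100e-6        # 100 µm quartz grain (fine sand)
quartz = 2650.0   # kg/m^3
in_water = stokes_settling_velocity(d, quartz, 1000.0, 1.0e-3)
in_air = stokes_settling_velocity(d, quartz, 1.2, 1.8e-5)
print(f"in water: {in_water * 100:.2f} cm/s")  # ~0.90 cm/s
print(f"in air:   {in_air * 100:.1f} cm/s")    # ~80 cm/s
```

The same grain falls roughly ninety times faster through air than through water, which is one way to see why wind transports only fine sand and smaller particles.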
Document 1:::
In oceanography, terrigenous sediments are those derived from the erosion of rocks on land; that is, they are derived from terrestrial (as opposed to marine) environments. Consisting of sand, mud, and silt carried to sea by rivers, their composition is usually related to their source rocks; deposition of these sediments is largely limited to the continental shelf.
Sources of terrigenous sediments include volcanoes, weathering of rocks, wind-blown dust, grinding by glaciers, and sediment carried by rivers or icebergs.
Terrigenous sediments are responsible for a significant amount of the salt in today's oceans. Over time rivers continue to carry minerals to the ocean but when water evaporates, it leaves the minerals behind. Since chlorine and sodium are not consumed by biological processes, these two elements constitute the greatest portion of dissolved minerals.
Quantity
Some 1.35 billion tons, or 8% of global river-borne sediment (16.5–17 billion tons globally), is transported by the Ganges–Brahmaputra river system annually, according to decades-old studies. The year-to-year variance is unquantified, as is the impact of modern humans on this amount through holding back sediment in dams, counteracted by increased erosion from land development. Wind-borne sediment also amounts to billions of tons annually, most prominently as Saharan dust, but is thought to be substantially less than that carried by rivers; again, year-to-year variance and the human impacts of land use remain unquantified in these data. It is well known that terrain influences climate conditions and that erosive processes, along with tectonic causes, slowly but surely modify terrain, but all-encompassing global-scale studies of how these land- and sea-shape factors fit in with both human-induced climate change and natural astronomical climate variability have been lacking.
See also
Pelagic sediments
Biogenous Ooze
Document 2:::
The Géotechnique lecture is a biennial lecture on the topic of soil mechanics, organised by the British Geotechnical Association and named after the major scientific journal Géotechnique.
This should not be confused with the annual BGA Rankine Lecture.
List of Géotechnique Lecturers
See also
Named lectures
Rankine Lecture
Terzaghi Lecture
External links
ICE Géotechnique journal
British Geotechnical Association
Document 3:::
Marine sediment, also known as ocean sediment or seafloor sediment, consists of deposits of insoluble particles that have accumulated on the seafloor. These particles have their origins in soil and rocks and have been transported from the land to the sea, mainly by rivers but also by dust carried by wind and by the flow of glaciers into the sea. Additional deposits come from marine organisms and chemical precipitation in seawater, as well as from underwater volcanoes and meteorite debris.
Except within a few kilometres of a mid-ocean ridge, where the volcanic rock is still relatively young, most parts of the seafloor are covered in sediment. This material comes from several different sources and is highly variable in composition. Seafloor sediment can range in thickness from a few millimetres to several tens of kilometres. Near the surface seafloor sediment remains unconsolidated, but at depths of hundreds to thousands of metres the sediment becomes lithified (turned to rock).
Rates of sediment accumulation are relatively slow throughout most of the ocean, in many cases taking thousands of years for any significant deposits to form. Sediment transported from the land accumulates the fastest, on the order of one metre or more per thousand years for coarser particles. However, sedimentation rates near the mouths of large rivers with high discharge can be orders of magnitude higher. Biogenous oozes accumulate at a rate of about one centimetre per thousand years, while small clay particles are deposited in the deep ocean at around one millimetre per thousand years.
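As a quick illustrative calculation (not part of the source passage), the accumulation times implied by these order-of-magnitude rates can be sketched in Python; the rate values below are the approximate figures quoted above and are assumed constant over time:

```python
# Illustrative arithmetic only: time needed to deposit a given sediment
# thickness at the approximate rates quoted in the text (assumed constant).

RATES_M_PER_KYR = {
    "coarse terrigenous": 1.0,    # ~1 metre per thousand years
    "biogenous ooze": 0.01,       # ~1 centimetre per thousand years
    "deep-ocean clay": 0.001,     # ~1 millimetre per thousand years
}

def years_to_accumulate(thickness_m: float, rate_m_per_kyr: float) -> float:
    """Return the time in years to deposit `thickness_m` of sediment
    at a constant rate given in metres per thousand years."""
    return thickness_m / rate_m_per_kyr * 1000.0

if __name__ == "__main__":
    for name, rate in RATES_M_PER_KYR.items():
        print(f"{name}: 1 m takes {years_to_accumulate(1.0, rate):,.0f} years")
```

At these rates, a metre of deep-ocean clay represents roughly a million years of deposition, which is why thick deep-sea sediment columns record very long time spans.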
Sediments from the land are deposited on the continental margins by surface runoff, river discharge, and other processes. Turbidity currents can transport this sediment down the continental slope to the deep ocean floor. The deep ocean floor undergoes its own process of spreading out from the mid-ocean ridge, and then slowly subducts accumulated sediment on the deep floor into the molten interior of the earth. In turn, molt
Document 4:::
Marine technology is defined by WEGEMT (a European association of 40 universities in 17 countries) as "technologies for the safe use, exploitation, protection of, and intervention in, the marine environment." In this regard, according to WEGEMT, the technologies involved in marine technology are the following: naval architecture, marine engineering, ship design, ship building and ship operations; oil and gas exploration, exploitation, and production; hydrodynamics, navigation, sea surface and sub-surface support, underwater technology and engineering; marine resources (including both renewable and non-renewable marine resources); transport logistics and economics; inland, coastal, short sea and deep sea shipping; protection of the marine environment; leisure and safety.
Education and training
According to the Cape Fear Community College of Wilmington, North Carolina, the curriculum for a marine technology program provides practical skills and academic background that are essential in succeeding in the area of marine scientific support. Through a marine technology program, students aspiring to become marine technologists will become proficient in the knowledge and skills required of scientific support technicians.
The educational preparation includes classroom instructions and practical training aboard ships, such as how to use and maintain electronic navigation devices, physical and chemical measuring instruments, sampling devices, and data acquisition and reduction systems aboard ocean-going and smaller vessels, among other advanced equipment.
As far as marine technician programs are concerned, students learn hands-on to troubleshoot, service, and repair four- and two-stroke outboards, stern drives, rigging, fuel and lube systems, and electrical systems, including diesel engines.
Relationship to commerce
Marine technology is related to the marine science and technology industry, also known as maritime commerce. The Executive Office of Housing and Economic Development (EOHED
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How is sediment transported?
A. currents
B. landslides
C. storms
D. winds
Answer:
|
|
sciq-1580
|
multiple_choice
|
What occurs before the endometrium thickens in estrous cycles?
|
[
"fertilization",
"copulation",
"ovulation",
"pregnancy"
] |
C
|
Relevant Documents:
Document 0:::
The corpus albicans (Latin for "whitening body"; also known as atretic corpus luteum, corpus candicans, or simply as albicans) is the regressed form of the corpus luteum. As the corpus luteum is being broken down by macrophages, fibroblasts lay down type I collagen, forming the corpus albicans. This process is called "luteolysis". The remains of the corpus albicans may persist as a scar on the surface of the ovary.
Background
During the first few hours after expulsion of the ovum from the follicle, the remaining granulosa and theca interna cells change rapidly into lutein cells. They enlarge in diameter two or more times and become filled with lipid inclusions that give them a yellowish appearance.
This process is called luteinization, and the total mass of cells together is called the corpus luteum. A well-developed vascular supply also grows into the corpus luteum.
The granulosa cells in the corpus luteum develop extensive intracellular smooth endoplasmic reticula that form large amounts of the female sex hormones progesterone and estrogen (more progesterone than estrogen during the luteal phase). The theca cells form mainly the androgens androstenedione and testosterone. These hormones may then be converted by aromatase in the granulosa cells into estrogens, including estradiol.
The corpus luteum normally grows to about 1.5 centimeters in diameter, reaching this stage of development 7 to 8 days after ovulation. Then it begins to involute and eventually loses its secretory function and its yellowish, lipid characteristic about 12 days after ovulation, becoming the corpus albicans. In the ensuing weeks, this is replaced by connective tissue and over months is reabsorbed.
Document 1:::
Menstruation is the shedding of the uterine lining (endometrium). It occurs on a regular basis in uninseminated sexually reproductive-age females of certain mammal species.
Although there is some disagreement in definitions between sources, menstruation is generally considered to be limited to primates. Overt menstruation (where there is bleeding from the uterus through the vagina) is found primarily in humans and close relatives such as chimpanzees. It is common in simians (Old World monkeys, New World monkeys, and apes), but completely lacking in strepsirrhine primates and possibly weakly present in tarsiers. Beyond primates, it is known only in bats, the elephant shrew, and the spiny mouse species Acomys cahirinus.
Females of other species of placental mammal undergo estrous cycles, in which the endometrium is completely reabsorbed by the animal (covert menstruation) at the end of its reproductive cycle. Many zoologists regard this as different from a "true" menstrual cycle. Female domestic animals used for breeding—for example dogs, pigs, cattle, or horses—are monitored for physical signs of an estrous cycle period, which indicates that the animal is ready for insemination.
Estrus and menstruation
Females of most mammal species advertise fertility to males with visual behavioral cues, pheromones, or both. This period of advertised fertility is known as oestrus, "estrus" or heat. In species that experience estrus, females are generally only receptive to copulation while they are in heat (dolphins are an exception). In the estrous cycles of most placental mammals, if no fertilization takes place, the uterus reabsorbs the endometrium. This breakdown of the endometrium without vaginal discharge is sometimes called covert menstruation. Overt menstruation (where there is blood flow from the vagina) occurs primarily in humans and close evolutionary relatives such as chimpanzees. Some species, such as domestic dogs, experience small amounts of vaginal bleeding
Document 2:::
Endometrial cups form during pregnancy in mares; they are a placenta-associated structure derived from the fetus and are the source of equine chorionic gonadotropin (eCG). Their purpose is to increase the immunological tolerance of the mare in order to protect the developing foal.
Function
Endometrial cups are unique to animals in the horse family, and so named because of their concave shape. They are a placenta-associated structure, found in the uterine wall of a mare from about 38 to 150 days into a pregnancy. After about 70 days, they begin to regress, and are eventually destroyed by the immune system. They begin to develop at approximately 25 days of pregnancy, deriving from the chorionic girdle. At approximately 36–38 days of pregnancy, the cells that will become the endometrial cup begin to burrow into the endometrial tissue, through the basement membrane, and into the uterine stroma. Their invasion of the uterine stroma begins the cells' maturation process, which takes 2–3 days. Endometrial cups can be circular, U-shaped, or ribbonlike and are pale compared to the rest of the endometrial tissue. They can range in size from 1 cm to 10 cm in diameter at the widest point. They resemble ulcers in form, and when examined under a microscope have large epithelioid decidual-like cells and large nucleoli.
They produce high concentrations of equine chorionic gonadotropin (eCG), also called pregnant mare's serum gonadotropin, in the bloodstream of pregnant mares. eCG is actually an equine luteinizing hormone. Endometrial cups behave somewhat like cells from metastatic tumors, in that they leave the placenta and migrate into the uterus. Their purpose appears to be to work with other placental cells to control the expression of histocompatibility genes so that the developing fetus is not destroyed by the mare's immune system.
Similar types of cells that invade the placenta have been described in humans. The purpose of these cells in both humans and horses is beli
Document 3:::
The follicular phase, also known as the preovulatory phase or proliferative phase, is the phase of the estrous cycle (or, in primates for example, the menstrual cycle) during which follicles in the ovary mature from primary follicle to a fully mature Graafian follicle. It ends with ovulation. The main hormone controlling this stage is gonadotropin-releasing hormone, which drives the pulsatile secretion of the gonadotropins follicle-stimulating hormone and luteinising hormone. The duration of the follicular phase can differ depending on the length of the menstrual cycle, while the luteal phase is usually stable and lasts about 14 days.
Hormonal events
Protein secretion
Due to the increase of FSH, the protein inhibin B will be secreted by the granulosa cells. Inhibin B will eventually blunt the secretion of FSH toward the end of the follicular phase. Inhibin B levels will be highest during the LH surge before ovulation and will quickly decrease after.
Follicle recruitment
Follicle-stimulating hormone (FSH) is secreted by the anterior pituitary gland (Figure 2). FSH secretion begins to rise in the last few days of the previous menstrual cycle, and is the highest and most important during the first week of the follicular phase (Figure 1). The rise in FSH levels recruits five to seven tertiary-stage ovarian follicles (this stage follicle is also known as a Graafian follicle or antral follicle) for entry into the menstrual cycle. These follicles, that have been growing for the better part of a year in a process known as folliculogenesis, compete with each other for dominance.
FSH induces the proliferation of granulosa cells in the developing follicles, and the expression of luteinizing hormone (LH) receptors on these granulosa cells (Figure 1). Under the influence of FSH, aromatase and p450 enzymes are activated, causing the granulosa cells to begin to secrete estrogen. This increased level of estrogen stimulates production of gonadotrop
Document 4:::
The germinal epithelium is the epithelial layer of the seminiferous tubules of the testicles. It is also known as the wall of the seminiferous tubules. The cells in the epithelium are connected via tight junctions.
There are two types of cells in the germinal epithelium. The large Sertoli cells (which are not dividing) function as supportive cells to the developing sperm. The second cell type are the cells belonging to the spermatogenic cell lineage. These develop to eventually become sperm cells (spermatozoon). Typically, the spermatogenic cells will make four to eight layers in the germinal epithelium.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What occurs before the endometrium thickens in estrous cycles?
A. fertilization
B. copulation
C. ovulation
D. pregnancy
Answer:
|
|
sciq-2831
|
multiple_choice
|
What is responsible for the development of antibiotic-resistant strains of bacteria?
|
[
"flu shots",
"mosquitoes",
"negative mutations",
"bacterial mutations"
] |
D
|
Relevant Documents:
Document 0:::
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States.
Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is suspended with the message:
"Please check back with us in 2017".
External links
MicrobeLibrary
Microbiology
Document 1:::
Biochemical engineering, also known as bioprocess engineering, is a field of study with roots stemming from chemical engineering and biological engineering. It mainly deals with the design, construction, and advancement of unit processes that involve biological organisms (such as fermentation) or organic molecules (often enzymes) and has various applications in areas of interest such as biofuels, food, pharmaceuticals, biotechnology, and water treatment processes. The role of a biochemical engineer is to take findings developed by biologists and chemists in a laboratory and translate that to a large-scale manufacturing process.
History
For hundreds of years, humans have made use of the chemical reactions of biological organisms in order to create goods. In the mid-1800s, Louis Pasteur was one of the first people to investigate the role of these organisms when he researched fermentation. His work also contributed to the use of pasteurization, which is still used to this day. By the early 1900s, the use of microorganisms had expanded to the manufacture of industrial products. Up to this point, however, biochemical engineering had not yet developed as a field. It wasn't until 1928, when Alexander Fleming discovered penicillin, that the field of biochemical engineering was established. After this discovery, samples were gathered from around the world in order to continue research into the characteristics of microbes from places such as soils, gardens, forests, rivers, and streams. Today, biochemical engineers can be found working in a variety of industries, from food to pharmaceuticals. This is due to the increasing need for efficiency and production, which requires knowledge of how biological systems and chemical reactions interact with each other and how they can be used to meet these needs.
Education
Biochemical engineering is not a major offered by most universities and is instead an area of interest under the chemical engineering major in most cases. The following universiti
Document 2:::
A microbiologist is a scientist who studies microscopic life forms and processes. This includes study of the growth, interactions and characteristics of microscopic organisms such as bacteria, algae, fungi, and some types of parasites and their vectors. Most microbiologists work in offices and/or research facilities, both in private biotechnology companies and in academia. Most microbiologists specialize in a given topic within microbiology such as bacteriology, parasitology, virology, or immunology.
Duties
Microbiologists generally work in some way to increase scientific knowledge or to utilise that knowledge in a way that improves outcomes in medicine or some industry. For many microbiologists, this work includes planning and conducting experimental research projects in some kind of laboratory setting. Others may have a more administrative role, supervising scientists and evaluating their results. Microbiologists working in the medical field, such as clinical microbiologists, may see patients or patient samples and do various tests to detect disease-causing organisms.
For microbiologists working in academia, duties include performing research in an academic laboratory, writing grant proposals to fund research, as well as some amount of teaching and designing courses. Microbiologists in industry roles may have similar duties, except that research is performed in industrial labs in order to develop or improve commercial products and processes. Industry jobs may also include some degree of sales and marketing work, as well as regulatory compliance duties. Microbiologists working in government may have a variety of duties, including laboratory research, writing and advising, developing and reviewing regulatory processes, and overseeing grants offered to outside institutions. Some microbiologists work in the field of patent law, either with national patent offices or private law practices. Their duties include research and navigation of intellectual proper
Document 3:::
The Investigative Biology Teaching Laboratories are located at Cornell University on the first floor of Comstock Hall. They are well-equipped biology teaching laboratories used to provide hands-on laboratory experience to Cornell undergraduate students. Currently, they are the home of the Investigative Biology Laboratory Course (BioG1500), and are frequently used by the Cornell Institute for Biology Teachers, the Disturbance Ecology course, and Insectapalooza. In the past, the Investigative Biology Teaching Laboratories hosted the laboratory portion of the Introductory Biology Course with the course number Bio103-104 (renumbered to BioG1103-1104).
The Investigative Biology Teaching Laboratories house the Science Communication and Public Engagement Undergraduate Minor.
History
Bio103-104
BioG1103-1104 Biological Sciences Laboratory course was a two-semester, two-credit course. BioG1103 was offered in the spring, while 1104 was offered in the fall.
BioG1500
This course was first offered in Fall 2010. It is a one-semester course, offered in the Fall, Spring, and Summer for 2 credits. One credit is awarded for the lecture and one credit for the three-hour lab, following the SUNY system.
Document 4:::
Bacteriology is the branch and specialty of biology that studies the morphology, ecology, genetics and biochemistry of bacteria as well as many other aspects related to them. This subdivision of microbiology involves the identification, classification, and characterization of bacterial species. Because of the similarity of thinking and working with microorganisms other than bacteria, such as protozoa, fungi, and viruses, there has been a tendency for the field of bacteriology to be extended into microbiology. The terms were formerly often used interchangeably. However, bacteriology can be classified as a distinct science.
Overview
Definition
Bacteriology is the study of bacteria and their relation to medicine. Bacteriology evolved from physicians needing to apply the germ theory to address concerns about disease spreading in hospitals in the 19th century. The identification and characterization of bacteria associated with diseases led to advances in pathogenic bacteriology. Koch's postulates played a role in identifying the relationships between bacteria and specific diseases. Since then, bacteriology has contributed to successful advances in science, such as bacterial vaccines like diphtheria toxoid and tetanus toxoid. Bacteriology can be studied and applied in many sub-fields relating to agriculture, marine biology, water pollution, bacterial genetics, veterinary medicine, biotechnology and others.
Bacteriologists
A bacteriologist is a microbiologist or other trained professional in bacteriology. Bacteriologists are interested in studying and learning about bacteria, as well as using their skills in clinical settings. This includes investigating properties of bacteria such as morphology, ecology, genetics and biochemistry, phylogenetics, genomics and many other areas related to bacteria like disease diagnostic testing. They can also work as medical scientists, veterinary scientists, or diagnostic technicians in locations like clinics, blood banks, hospitals
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is responsible for the development of antibiotic-resistant strains of bacteria?
A. flu shots
B. mosquitoes
C. negative mutations
D. bacterial mutations
Answer:
|
|
sciq-1758
|
multiple_choice
|
Because plants lack what kind of system, their first line of defense is usually the death of cells surrounding infected tissue to prevent spread of infection?
|
[
"digestion system",
"immune system",
"hormones system",
"nervous system"
] |
B
|
Relevant Documents:
Document 0:::
Plant Physiology is a monthly peer-reviewed scientific journal that covers research on physiology, biochemistry, cellular and molecular biology, genetics, biophysics, and environmental biology of plants. The journal has been published since 1926 by the American Society of Plant Biologists. The current editor-in-chief is Yunde Zhao (University of California San Diego). According to the Journal Citation Reports, the journal has a 2021 impact factor of 8.005.
Document 1:::
Plants are constantly exposed to different stresses that result in wounding. Plants have adapted to defend themselves against wounding events, like herbivore attacks or environmental stresses. There are many defense mechanisms that plants rely on to help fight off pathogens and subsequent infections. Wounding responses can be local, like the deposition of callose, and others are systemic, which involve a variety of hormones like jasmonic acid and abscisic acid.
Overview
There are many forms of defense that plants use to respond to wounding events. There are physical defense mechanisms that some plants utilize, through structural components, like lignin and the cuticle. The structure of a plant cell wall is incredibly important for wound responses, as both protect the plant from pathogenic infections by preventing various molecules from entering the cell.
Plants are capable of activating innate immunity, responding to wounding events with damage-associated molecular patterns (DAMPs). Additionally, plants rely on microbe-associated molecular patterns (MAMPs) to defend themselves upon sensing a wounding event. There are examples of both rapid and delayed wound responses, depending on where the damage took place.
MAMPs/ DAMPS & Signaling Pathways
Plants have pattern recognition receptors (PRRs) that recognize MAMPs, or microbe-associated molecular patterns. Upon entry of a pathogen, plants are vulnerable to infection and lose a fair amount of nutrients to said pathogen. The constitutive defenses are the physical barriers of the plant, including the cuticle, and even metabolites that are toxic and deter herbivores. Plants maintain an ability to sense when they have an injured area and to induce a defensive response. Within wounded tissues, endogenous molecules are released and act as damage-associated molecular patterns (DAMPs), inducing a defensive response. DAMPs are typically caused by insects that feed off the plant. Such responses to wounds are found at th
Document 2:::
Induced systemic resistance (ISR) is a resistance mechanism in plants that is activated by infection. Its mode of action does not depend on direct killing or inhibition of the invading pathogen, but rather on strengthening the physical or chemical barriers of the host plant. As with systemic acquired resistance (SAR), a plant can develop defenses against an invader such as a pathogen or parasite if an infection takes place. In contrast to SAR, which is triggered by the accumulation of salicylic acid, ISR instead relies on signal transduction pathways activated by jasmonate and ethylene.
Discovery
The induction of plant resistance to pathogens was identified in 1901 and was described as the "system of acquired resistance." Subsequently, several different terms have been used, namely "acquired physiological immunity", "resistance displacement", "plant immune function" and "induced system resistance." Many forms of stimulus have been found to induce plant resistance to viruses, bacteria, fungi, and other diseases, including mechanical factors (dry-ice damage, electromagnetic or ultraviolet radiation, and low- or high-temperature treatment), chemical factors (heavy metal salts, water, salicylic acid), and biological factors (fungi, bacteria, viruses, and their metabolites).
Mode of action
Induced resistance in plants has two major modes of action: the SAR pathway and the ISR pathway. SAR can elicit a rapid local reaction, or hypersensitive response, in which the pathogen is limited to a small area around the site of infection. As mentioned, salicylic acid is the mode of action for the SAR pathway, while ISR enhances the plant's defense systems through a jasmonic acid (JA) mode of action. Both act through NPR1, but SAR additionally utilizes PR genes. It is important to note that the two mediated responses regulate one another: as SA levels rise, they can inhibit the effect of JA, so a balance must be maintained when activating both responses.
ISR res
Document 3:::
Plant defense against herbivory or host-plant resistance (HPR) is a range of adaptations evolved by plants which improve their survival and reproduction by reducing the impact of herbivores. Plants can sense being touched, and they can use several strategies to defend against damage caused by herbivores. Many plants produce secondary metabolites, known as allelochemicals, that influence the behavior, growth, or survival of herbivores. These chemical defenses can act as repellents or toxins to herbivores or reduce plant digestibility. Another defensive strategy of plants is changing their attractiveness. To prevent overconsumption by large herbivores, plants alter their appearance by changing their size or quality, reducing the rate at which they are consumed.
Other defensive strategies used by plants include escaping or avoiding herbivores at any time in any placefor example, by growing in a location where plants are not easily found or accessed by herbivores or by changing seasonal growth patterns. Another approach diverts herbivores toward eating non-essential parts or enhances the ability of a plant to recover from the damage caused by herbivory. Some plants encourage the presence of natural enemies of herbivores, which in turn protect the plant. Each type of defense can be either constitutive (always present in the plant) or induced (produced in reaction to damage or stress caused by herbivores).
Historically, insects have been the most significant herbivores, and the evolution of land plants is closely associated with the evolution of insects. While most plant defenses are directed against insects, other defenses have evolved that are aimed at vertebrate herbivores, such as birds and mammals. The study of plant defenses against herbivory is important, not only from an evolutionary viewpoint, but also for the direct impact that these defenses have on agriculture, including human and livestock food sources; as beneficial 'biological control agents' in biologica
Document 4:::
Hypersensitive response (HR) is a mechanism used by plants to prevent the spread of infection by microbial pathogens. HR is characterized by the rapid death of cells in the local region surrounding an infection and it serves to restrict the growth and spread of pathogens to other parts of the plant. It is analogous to the innate immune system found in animals, and commonly precedes a slower systemic (whole plant) response, which ultimately leads to systemic acquired resistance (SAR). HR can be observed in the vast majority of plant species and is induced by a wide range of plant pathogens such as oomycetes, viruses, fungi and even insects.
HR is commonly thought of as an effective defence strategy against biotrophic plant pathogens, which require living tissue to gain nutrients. In the case of necrotrophic pathogens, HR might even be beneficial to the pathogen, as they require dead plant cells to obtain nutrients. The situation becomes complicated when considering pathogens such as Phytophthora infestans which at the initial stages of the infection act as biotrophs but later switch to a necrotrophic lifestyle. It is proposed that in this case HR might be beneficial in the early stages of the infection but not in the later stages.
Genetics
The first idea of how the hypersensitive response occurs came from Harold Henry Flor's gene-for-gene model. He postulated that for every resistance (R) gene encoded by the plant, there is a corresponding avirulence (Avr) gene encoded by the microbe. The plant is resistant to the pathogen if both the Avr and R genes are present during the plant-pathogen interaction. The genes that are involved in the plant-pathogen interactions tend to evolve at a very rapid rate.
Very often, the resistance mediated by R genes is due to them inducing HR, which leads to apoptosis. Most plant R genes encode NOD-like receptor (NLR) proteins. NLR protein domain architecture consists of an NB-ARC domain which is a nucleotide-binding domain, responsi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Because plants lack what kind of system, their first line of defense is usually the death of cells surrounding infected tissue to prevent spread of infection?
A. digestion system
B. immune system
C. hormones system
D. nervous system
Answer:
|
|
sciq-10594
|
multiple_choice
|
How many atoms are evenly organized around a central atom?
|
[
"four",
"seven",
"six",
"five"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year schools, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods included
Document 2:::
The SAT Subject Test in Biology was the name of a one-hour multiple-choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular was 630, while Ecological was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 3:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of
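Under the characterization above (knowledge states as subsets of a finite domain, with the collection closed under union), whether a family of states forms a valid knowledge space can be checked mechanically. A small Python sketch; the three-skill domain is a made-up example:

```python
from itertools import combinations

def is_knowledge_space(states, domain):
    """A family of knowledge states is a knowledge space if it contains
    the empty state and the full domain, and is closed under union."""
    family = {frozenset(s) for s in states}
    if frozenset() not in family or frozenset(domain) not in family:
        return False
    return all(a | b in family for a, b in combinations(family, 2))

domain = {"counting", "addition", "multiplication"}
states = [set(), {"counting"}, {"counting", "addition"},
          {"counting", "multiplication"},
          {"counting", "addition", "multiplication"}]
print(is_knowledge_space(states, domain))        # True
# Dropping the full domain (the union of two other states) breaks it:
print(is_knowledge_space(states[:-1], domain))   # False
```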
Document 4:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How many atoms are evenly organized around a central atom?
A. four
B. seven
C. six
D. five
Answer:
|
|
sciq-314
|
multiple_choice
|
What term describes how closely packed the particles of matter are?
|
[
"density",
"space",
"mass",
"range"
] |
A
|
Relevant Documents:
Document 0:::
In classical physics and general chemistry, matter is any substance that has mass and takes up space by having volume. All everyday objects that can be touched are ultimately composed of atoms, which are made up of interacting subatomic particles, and in everyday as well as scientific usage, matter generally includes atoms and anything made up of them, and any particles (or combination of particles) that act as if they have both rest mass and volume. However it does not include massless particles such as photons, or other energy phenomena or waves such as light or heat. Matter exists in various states (also known as phases). These include classical everyday phases such as solid, liquid, and gas – for example water exists as ice, liquid water, and gaseous steam – but other states are possible, including plasma, Bose–Einstein condensates, fermionic condensates, and quark–gluon plasma.
Usually atoms can be imagined as a nucleus of protons and neutrons, and a surrounding "cloud" of orbiting electrons which "take up space". However this is only somewhat correct, because subatomic particles and their properties are governed by their quantum nature, which means they do not act as everyday objects appear to act – they can act like waves as well as particles, and they do not have well-defined sizes or positions. In the Standard Model of particle physics, matter is not a fundamental concept because the elementary constituents of atoms are quantum entities which do not have an inherent "size" or "volume" in any everyday sense of the word. Due to the exclusion principle and other fundamental interactions, some "point particles" known as fermions (quarks, leptons), and many composites and atoms, are effectively forced to keep a distance from other particles under everyday conditions; this creates the property of matter which appears to us as matter taking up space.
For much of the history of the natural sciences people have contemplated the exact nature of matter. The idea tha
Document 1:::
In physics and mechanics, mass distribution is the spatial distribution of mass within a solid body. In principle, it is relevant also for gases or liquids, but on Earth their mass distribution is almost homogeneous.
Astronomy
In astronomy mass distribution has decisive influence on the development e.g. of nebulae, stars and planets.
The mass distribution of a solid defines its center of gravity and influences its dynamical behaviour - e.g. the oscillations and eventual rotation.
Mathematical modelling
A mass distribution can be modeled as a measure. This allows point masses, line masses, surface masses, as well as masses given by a volume density function. Alternatively the latter can be generalized to a distribution. For example, a point mass is represented by a delta function defined in 3-dimensional space. A surface mass on a surface given by the equation may be represented by a density distribution , where is the mass per unit area.
The mathematical modelling can be done by potential theory, by numerical methods (e.g. a great number of mass points), or by theoretical equilibrium figures.
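For the discrete (point-mass) case mentioned above, the center of gravity is simply the mass-weighted mean position. A minimal Python sketch, not tied to any particular library:

```python
def center_of_mass(point_masses):
    """Center of mass of a discrete distribution of point masses,
    r_cm = sum(m_i * r_i) / sum(m_i), with input as (mass, (x, y, z))."""
    total_mass = sum(m for m, _ in point_masses)
    return tuple(
        sum(m * r[k] for m, r in point_masses) / total_mass
        for k in range(3)
    )

# Two equal point masses: the center of mass sits at the midpoint.
print(center_of_mass([(1.0, (0.0, 0.0, 0.0)), (1.0, (2.0, 0.0, 0.0))]))
# -> (1.0, 0.0, 0.0)
```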
Geology
In geology the aspects of rock density are involved.
Rotating solids
Rotating solids are affected considerably by the mass distribution, either if they are homogeneous or inhomogeneous - see Torque, moment of inertia, wobble, imbalance and stability.
See also
Bouguer plate
Gravity
Mass function
Mass concentration (astronomy)
External links
Mass distribution of the Earth
Mechanics
Celestial mechanics
Geophysics
Mass
Document 2:::
Absolute molar mass is a process used to determine the characteristics of molecules.
History
The first absolute measurements of molecular weights (i.e. made without reference to standards) were based on fundamental physical characteristics and their relation to the molar mass. The most useful of these were membrane osmometry and sedimentation.
Another absolute instrumental approach was also possible with the development of light scattering theory by Albert Einstein, Chandrasekhara Venkata Raman, Peter Debye, Bruno H. Zimm, and others. The problem with measurements made using membrane osmometry and sedimentation was that they only characterized the bulk properties of the polymer sample. Moreover, the measurements were excessively time consuming and prone to operator error. In order to gain information about a polydisperse mixture of molar masses, a method for separating the different sizes was developed. This was achieved by the advent of size exclusion chromatography (SEC). SEC is based on the fact that the pores in the packing material of chromatography columns could be made small enough for molecules to become temporarily lodged in their interstitial spaces. As the sample makes its way through a column the smaller molecules spend more time traveling in these void spaces than the larger ones, which have fewer places to "wander". The result is that a sample is separated according to its hydrodynamic volume . As a consequence, the big molecules come out first, and then the small ones follow in the eluent. By choosing a suitable column packing material it is possible to define the resolution of the system. Columns can also be combined in series to increase resolution or the range of sizes studied.
The next step is to convert the time at which the samples eluted into a measurement of molar mass. This is possible because if the molar mass of a standard were known, the time at which this standard eluted should be equal to a specific molar mass. Using multiple
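The elution-time-to-molar-mass conversion described above is, in conventional (column-calibrated) SEC, a log-linear calibration against standards of known molar mass. A hedged Python sketch with made-up standards; real calibrations use many standards and often higher-order fits:

```python
import math

def fit_sec_calibration(standards):
    """Least-squares line through (elution time, log10 molar mass) points
    from standards given as (time, molar_mass); returns (slope, intercept)."""
    n = len(standards)
    xs = [t for t, _ in standards]
    ys = [math.log10(m) for _, m in standards]
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def molar_mass_at(time, slope, intercept):
    """Invert the calibration: elution time -> molar mass estimate."""
    return 10 ** (slope * time + intercept)

# Hypothetical standards: larger molecules elute earlier.
slope, intercept = fit_sec_calibration([(10.0, 1e6), (12.0, 1e5), (14.0, 1e4)])
print(round(molar_mass_at(11.0, slope, intercept)))  # 316228, i.e. 10**5.5
```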
Document 3:::
Vapour density is the density of a vapour in relation to that of hydrogen. It may be defined as mass of a certain volume of a substance divided by mass of same volume of hydrogen.
vapour density = mass of n molecules of gas / mass of n molecules of hydrogen gas
vapour density = molar mass of gas / molar mass of H2
vapour density = molar mass of gas / 2.016
vapour density ≈ ½ × molar mass
(and thus: molar mass ≈ 2 × vapour density)
For example, vapour density of mixture of NO2 and N2O4 is 38.3. Vapour density is a dimensionless quantity.
Alternative definition
In many web sources, particularly in relation to safety considerations at commercial and industrial facilities in the U.S., vapour density is defined with respect to air, not hydrogen. Air is given a vapour density of one. For this use, air has a molecular weight of 28.97 atomic mass units, and all other gas and vapour molecular weights are divided by this number to derive their vapour density. For example, acetone has a vapour density of 2 in relation to air. That means acetone vapour is twice as heavy as air. This can be seen by dividing the molecular weight of Acetone, 58.1 by that of air, 28.97, which equals 2.
With this definition, the vapour density would indicate whether a gas is denser (greater than one) or less dense (less than one) than air. The density has implications for container storage and personnel safety—if a container can release a dense gas, its vapour could sink and, if flammable, collect until it is at a concentration sufficient for ignition. Even if not flammable, it could collect in the lower floor or level of a confined space and displace air, possibly presenting an asphyxiation hazard to individuals entering the lower part of that space.
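Both definitions above are one-line ratios; a small Python sketch reproducing the acetone example (acetone, C3H6O, has a molar mass of about 58.08 g/mol):

```python
M_H2, M_AIR = 2.016, 28.97  # molar masses in g/mol

def vapour_density_h2(molar_mass):
    """Classical definition: density relative to hydrogen (M / 2.016)."""
    return molar_mass / M_H2

def vapour_density_air(molar_mass):
    """Safety-oriented definition: density relative to air (M / 28.97)."""
    return molar_mass / M_AIR

# Acetone vapour is about twice as dense as air:
print(round(vapour_density_air(58.08), 2))  # 2.0
print(round(vapour_density_h2(58.08), 1))   # 28.8
```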
See also
Relative density (also known as specific gravity)
Victor Meyer apparatus
Document 4:::
A point particle, ideal particle or point-like particle (often spelled pointlike particle) is an idealization of particles heavily used in physics. Its defining feature is that it lacks spatial extension; being dimensionless, it does not take up space. A point particle is an appropriate representation of any object whenever its size, shape, and structure are irrelevant in a given context. For example, from far enough away, any finite-size object will look and behave as a point-like object. Point masses and point charges, discussed below, are two common cases. When a point particle has an additive property, such as mass or charge, it is often represented mathematically by a Dirac delta function.
In quantum mechanics, the concept of a point particle is complicated by the Heisenberg uncertainty principle, because even an elementary particle, with no internal structure, occupies a nonzero volume. For example, the atomic orbit of an electron in the hydrogen atom occupies a volume of ~. There is nevertheless a distinction between elementary particles such as electrons or quarks, which have no known internal structure, versus composite particles such as protons, which do have internal structure: A proton is made of three quarks.
Elementary particles are sometimes called "point particles" in reference to their lack of internal structure, but this is in a different sense than discussed above.
Point mass
Point mass (pointlike mass) is the concept, for example in classical physics, of a physical object (typically matter) that has nonzero mass, and yet explicitly and specifically is (or is being thought of or modeled as) infinitesimal (infinitely small) in its volume or linear dimensions.
In the theory of gravity, extended objects can behave as point-like even in their immediate vicinity. For example, spherical objects interacting in 3-dimensional space whose interactions are described by the Newtonian gravitation behave in such a way as if all their matter were concentrate
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What term describes how closely packed the particles of matter are?
A. density
B. space
C. mass
D. range
Answer:
|
|
sciq-7031
|
multiple_choice
|
Viruses depend on what type of cells?
|
[
"host cells",
"anchor cells",
"blood cells",
"immune system cells"
] |
A
|
Relevant Documents:
Document 0:::
A virus is a submicroscopic infectious agent that replicates only inside the living cells of an organism. Viruses infect all life forms, from animals and plants to microorganisms, including bacteria and archaea. Viruses are found in almost every ecosystem on Earth and are the most numerous type of biological entity. Since Dmitri Ivanovsky's 1892 article describing a non-bacterial pathogen infecting tobacco plants and the discovery of the tobacco mosaic virus by Martinus Beijerinck in 1898, more than 11,000 of the millions of virus species have been described in detail. The study of viruses is known as virology, a subspeciality of microbiology.
When infected, a host cell is often forced to rapidly produce thousands of copies of the original virus. When not inside an infected cell or in the process of infecting a cell, viruses exist in the form of independent viral particles, or virions, consisting of (i) genetic material, i.e., long molecules of DNA or RNA that encode the structure of the proteins by which the virus acts; (ii) a protein coat, the capsid, which surrounds and protects the genetic material; and in some cases (iii) an outside envelope of lipids. The shapes of these virus particles range from simple helical and icosahedral forms to more complex structures. Most virus species have virions too small to be seen with an optical microscope and are one-hundredth the size of most bacteria.
The origins of viruses in the evolutionary history of life are unclear: some may have evolved from plasmids—pieces of DNA that can move between cells—while others may have evolved from bacteria. In evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity in a way analogous to sexual reproduction. Viruses are considered by some biologists to be a life form, because they carry genetic material, reproduce, and evolve through natural selection, although they lack the key characteristics, such as cell structure, that are generally
Document 1:::
Viral dynamics is a field of applied mathematics concerned with describing the progression of viral infections within a host organism. It employs a family of mathematical models that describe changes over time in the populations of cells targeted by the virus and the viral load. These equations may also track competition between different viral strains and the influence of immune responses. The original viral dynamics models were inspired by compartmental epidemic models (e.g. the SI model), with which they continue to share many common mathematical features, such as the concept of the basic reproductive ratio (R0). The major distinction between these fields is in the scale at which the models operate: while epidemiological models track the spread of infection between individuals within a population (i.e. "between host"), viral dynamics models track the spread of infection between cells within an individual (i.e. "within host"). Analyses employing viral dynamic models have been used extensively to study HIV, hepatitis B virus, and hepatitis C virus, among other infections
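The within-host models described above are typically small ODE systems. A minimal sketch of the basic target-cell-limited (T, I, V) model with forward-Euler integration; all parameter values here are illustrative, not fitted to any real infection:

```python
def simulate_tiv(T0=1e8, V0=10.0, beta=1e-8, delta=2.0, p=100.0, c=5.0,
                 dt=0.001, days=10.0):
    """Forward-Euler integration of the basic target-cell-limited model:
         dT/dt = -beta*T*V           (target cells become infected)
         dI/dt =  beta*T*V - delta*I (infected cells die at rate delta)
         dV/dt =  p*I - c*V          (virions produced and cleared)
    Returns the final (T, I, V); time is in days."""
    T, I, V = T0, 0.0, V0
    for _ in range(int(days / dt)):
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T, I, V = T + dT * dt, I + dI * dt, V + dV * dt
    return T, I, V

T, I, V = simulate_tiv()
# With these parameters R0 = beta*T0*p/(delta*c) = 10 > 1, so the infection
# takes off and target cells are almost completely depleted by day 10.
print(T < 1e6)  # True
```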
Document 2:::
This is a list of Immune cells, also known as white blood cells, white cells, leukocytes, or leucocytes. They are cells involved in protecting the body against both infectious disease and foreign invaders.
Document 3:::
Virophysics is a branch of biophysics in which the theoretical concepts and experimental techniques of physics are applied to study the mechanics and dynamics driving the interactions between virions and cells.
Overview
Research in virophysics typically focuses on resolving the physical structure and structural properties of viruses, the dynamics of their assembly and disassembly, their population kinetics over the course of an infection, and the emergence and evolution of various strains. The common aim of these efforts is to establish a set of models (expressions or laws) that quantitatively describe the details of all processes involved in viral infections with reliable predictive power. Having such a quantitative understanding of viruses would not only rationalize the development of strategies to prevent, guide, or control the course of viral infections, but could also be used to exploit virus processes and put virus to work in areas such as nanosciences, materials, and biotechnologies.
Traditionally, in vivo and in vitro experimentation has been the only way to study viral infections. This approach for deriving knowledge based solely on experimental observations relies on common-sense assumptions (e.g., a higher virus count means a fitter virus). These assumptions often go untested due to difficulties controlling individual components of these complex systems without affecting others. The use of mathematical models and computer simulations to describe such systems, however, makes it possible to deconstruct an experimental system into individual components and determine how the pieces combine to create the infection we observe.
Virophysics has large overlaps with other fields. For example, the modelling of infectious disease dynamics is a popular research topic in mathematics, notably in applied mathematics or mathematical biology. While most modelling efforts in mathematics have focused on elucidating the dynamics of spread of infectious diseases at an epid
Document 4:::
Host tropism is the infection specificity of certain pathogens to particular hosts and host tissues. This explains why most pathogens are only capable of infecting a limited range of host organisms.
Researchers can classify pathogenic organisms by the range of species and cell types that they exhibit host tropism for. For instance, pathogens that are able to infect a wide range of hosts and tissues are said to be amphotropic. Ecotropic pathogens, on the other hand, are only capable of infecting a narrow range of hosts and host tissue. Knowledge of a pathogen's host specificity allows professionals in the research and medical industries to model pathogenesis and develop vaccines, medication, and preventive measures to fight against infection. Methods such as cell engineering, direct engineering and assisted evolution of host-adapted pathogens, and genome-wide genetic screens are currently being used by researchers to better understand the host range of a variety of different pathogenic organisms.
Mechanisms
A pathogen displays tropism for a specific host if it can interact with the host cells in a way that supports pathogenic growth and infection. Various factors affect the ability of a pathogen to infect a particular cell, including: the structure of the cell's surface receptors; the availability of transcription factors that can identify pathogenic DNA or RNA; the ability of the cells and tissue to support viral or bacterial replication; and the presence of physical or chemical barriers within the cells and throughout the surrounding tissue.
Cell surface receptors
Pathogens frequently enter or adhere to host cells or tissues before causing infection. For this connection to occur, the pathogen must recognize the cell's surface and then bind to it. Viruses, for example, must often bind to specific cell surface receptors to enter a cell. Many viral membranes contain virion surface proteins that are specific to particular host cell surface receptors. If a host cel
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Viruses depend on what type of cells?
A. host cells
B. anchor cells
C. blood cells
D. immune system cells
Answer:
|
|
sciq-4667
|
multiple_choice
|
What type of lenses can correct myopia?
|
[
"diffusion lenses",
"convex lenses",
"concave lenses",
"polarized lenses"
] |
C
|
Relevant Documents:
Document 0:::
The calculation of glass properties (glass modeling) is used to predict glass properties of interest or glass behavior under certain conditions (e.g., during production) without experimental investigation, based on past data and experience, with the intention to save time, material, financial, and environmental resources, or to gain scientific insight. It was first practised at the end of the 19th century by A. Winkelmann and O. Schott. The combination of several glass models together with other relevant functions can be used for optimization and six sigma procedures. In the form of statistical analysis glass modeling can aid with accreditation of new data, experimental procedures, and measurement institutions (glass laboratories).
History
Historically, the calculation of glass properties is directly related to the founding of glass science. At the end of the 19th century the physicist Ernst Abbe developed equations that allow calculating the design of optimized optical microscopes in Jena, Germany, stimulated by co-operation with the optical workshop of Carl Zeiss. Before Ernst Abbe's time the building of microscopes was mainly a work of art and experienced craftsmanship, resulting in very expensive optical microscopes with variable quality. Now Ernst Abbe knew exactly how to construct an excellent microscope, but unfortunately, the required lenses and prisms with specific ratios of refractive index and dispersion did not exist. Ernst Abbe was not able to find answers to his needs from glass artists and engineers; glass making was not based on science at this time.
In 1879 the young glass engineer Otto Schott sent Abbe glass samples with a special composition (lithium silicate glass) that he had prepared himself and that he hoped to show special optical properties. Following measurements by Ernst Abbe, Schott's glass samples did not have the desired properties, and they were also not as homogeneous as desired. Nevertheless, Ernst Abbe invited Otto Schott to work
Document 1:::
Optical lens design is the process of designing a lens to meet a set of performance requirements and constraints, including cost and manufacturing limitations. Parameters include surface profile types (spherical, aspheric, holographic, diffractive, etc.), as well as radius of curvature, distance to the next surface, material type and optionally tilt and decenter. The process is computationally intensive, using ray tracing or other techniques to model how the lens affects light that passes through it.
Design requirements
Performance requirements can include:
Optical performance (image quality): This is quantified by various metrics, including encircled energy, modulation transfer function, Strehl ratio, ghost reflection control, and pupil performance (size, location and aberration control); the choice of the image quality metric is application specific.
Physical requirements such as weight, static volume, dynamic volume, center of gravity and overall configuration requirements.
Environmental requirements: ranges for temperature, pressure, vibration and electromagnetic shielding.
Design constraints can include realistic lens element center and edge thicknesses, minimum and maximum air-spaces between lenses, maximum constraints on entrance and exit angles, physically realizable glass index of refraction and dispersion properties.
Manufacturing costs and delivery schedules are also a major part of optical design. The price of an optical glass blank of given dimensions can vary by a factor of fifty or more, depending on the size, glass type, index homogeneity quality, and availability, with BK7 usually being the cheapest. Costs for larger and/or thicker optical blanks of a given material, above 100–150 mm, usually increase faster than the physical volume due to increased blank annealing time required to achieve acceptable index homogeneity and internal stress birefringence levels throughout the blank volume. Availability of glass blanks is driven by how frequently
Document 2:::
Vertex distance is the distance between the back surface of a corrective lens, i.e. glasses (spectacles) or contact lenses, and the front of the cornea. Increasing or decreasing the vertex distance changes the optical properties of the system, by moving the focal point forward or backward, effectively changing the power of the lens relative to the eye. Since most refractions (the measurement that determines the power of a corrective lens) are performed at a vertex distance of 12–14 mm, the power of the correction may need to be modified from the initial prescription so that light reaches the patient's eye with the same effective power that it did through the phoropter or trial frame.
Vertex distance is important when converting between contact lens and glasses prescriptions and becomes significant if the glasses prescription is beyond ±4.00 diopters (often abbreviated D). The formula for vertex correction is F_c = F / (1 − x·F), where F_c is the power corrected for vertex distance, F is the original lens power, and x is the change in vertex distance in meters.
Derivation
The vertex distance formula calculates what power lens (F_c) is needed to focus light on the same location if the lens has been moved by a distance x. To focus light to the same image location:

f_c = f − x

where f_c is the corrected focal length for the new lens, f is the focal length of the original lens, and x is the distance that the lens was moved. The value for x can be positive or negative depending on the sign convention. Lens power in diopters is the mathematical inverse of focal length in meters:

F = 1/f

Substituting for lens power arrives at

1/F_c = 1/F − x

After simplifying, the final equation is found:

F_c = F / (1 − x·F)
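The correction formula above can be sketched as a small helper function. This is an illustrative sketch (the function name and sign convention for x are assumptions): power is in diopters, and x is the change in vertex distance in meters, taken positive when the lens moves toward the eye.

```python
# Sketch of vertex distance correction, F_c = F / (1 - x*F).
# F: lens power in diopters; x: change in vertex distance in meters
# (positive when the lens moves toward the eye). Names are illustrative.

def vertex_corrected_power(power_d, delta_x_m):
    """Return the power (diopters) needed after moving the lens by delta_x_m."""
    return power_d / (1 - delta_x_m * power_d)

# Example: a -8.00 D spectacle lens refracted at a 12 mm vertex distance,
# converted to a contact lens sitting on the eye (x = +0.012 m).
corrected = vertex_corrected_power(-8.00, 0.012)
print(round(corrected, 2))  # → -7.3
```

As expected, the contact lens for a myope needs slightly less minus power than the spectacle lens, because it sits closer to the eye.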
Examples
Example 1: example prescription adjustment from glasses to contacts
A phoropter measurement of a patient reads −8.00D sphere and −5.25D cylinder with an axis of 85° for one eye (the notation for which is typically written as −8.00 −5.25×085). The phoropter measurement is made at a common vertex distance of 12mm from the eye. The equivalent prescrip
Document 3:::
Retinal mosaic is the name given to the distribution of any particular type of neuron across any particular layer in the retina. Typically such distributions are somewhat regular; it is thought that this is so that each part of the retina is served by each type of neuron in processing visual information.
The regularity of retinal mosaics can be quantitatively studied by modelling the mosaic as a spatial point pattern. This is done by treating each cell as a single point and using spatial statistics such as the Effective Radius, Packing Factor and Regularity Index.
Using adaptive optics, it is nowadays possible to image the photoreceptor mosaic (i.e. the distribution of rods and cones) in living humans, enabling the detailed study of photoreceptor density and arrangement across the retina.
In the fovea (where photoreceptor density is highest) the spacing between adjacent receptors is about 6–8 micrometers. This corresponds to an angular resolution of approximately 0.5 arc minute, effectively the upper limit of human visual acuity.
Document 4:::
Foveated imaging is a digital image processing technique in which the image resolution, or amount of detail, varies across the image according to one or more "fixation points". A fixation point indicates the highest resolution region of the image and corresponds to the center of the eye's retina, the fovea.
The location of a fixation point may be specified in many ways.
For example, when viewing an image on a computer monitor, one may specify a fixation using a pointing device, like a computer mouse.
Eye trackers which precisely measure the eye's position and movement are also commonly used to determine fixation points in perception experiments.
When the display is manipulated with the use of an eye tracker, this is known as a gaze contingent display.
Fixations may also be determined automatically using computer algorithms.
Some common applications of foveated imaging include imaging sensor hardware and image compression. For descriptions of these and other applications, see the list below.
Foveated imaging is also commonly referred to as space variant imaging or gaze contingent imaging.
Applications
Compression
Contrast sensitivity falls off dramatically as one moves from the center of the retina to the periphery.
In lossy image compression, one may take advantage of this fact in order to compactly encode images.
If one knows the viewer's approximate point of gaze, one may reduce the amount of information contained in the image as the distance from the point of gaze increases. Because the fall-off in the eye's resolution is dramatic, the potential reduction in display information can be substantial. Also, foveation encoding may be applied to the image before other types of image compression are applied and therefore can result in a multiplicative reduction.
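The idea of reducing detail with distance from the point of gaze can be sketched as a toy foveation pass. This is an illustrative sketch only, not any specific codec: the function name and the linear block-size schedule are assumptions. Each pixel is replaced by the average of a neighborhood whose size grows with eccentricity, mimicking the retina's resolution fall-off.

```python
import numpy as np

# Toy foveated smoothing of a grayscale image: pixels are averaged over
# progressively larger windows as distance from the fixation point grows.
# The growth rate and window schedule are illustrative assumptions.

def foveate(image, fx, fy, base_block=1, growth=0.05):
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            # Window half-size grows linearly with eccentricity.
            ecc = np.hypot(x - fx, y - fy)
            b = max(base_block, int(base_block + growth * ecc))
            y0, y1 = max(0, y - b), min(h, y + b + 1)
            x0, x1 = max(0, x - b), min(w, x + b + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
fov = foveate(img, fx=32, fy=32)
# Near the fixation point the image is essentially unchanged; far from it,
# detail is averaged away, so a subsequent encoder can spend fewer bits there.
```

A real system would follow a pass like this with conventional lossy compression, yielding the multiplicative reduction mentioned above.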
Foveated sensors
Foveated sensors are multiresolution hardware devices that allow image data to be collected with higher resolution concentrated at a fixation point. An advantage to using foveated sen
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of lenses can correct myopia?
A. diffusion lenses
B. convex lenses
C. concave lenses
D. polarized lenses
Answer:
|
|
sciq-6972
|
multiple_choice
|
Some animals prepare for the long winter by going into what?
|
[
"fermentation",
"suspended animation",
"hibernation",
"pollination"
] |
C
|
Relevant Documents:
Document 0:::
Dormancy is a period in an organism's life cycle when growth, development, and (in animals) physical activity are temporarily stopped. This minimizes metabolic activity and therefore helps an organism to conserve energy. Dormancy tends to be closely associated with environmental conditions. Organisms can synchronize entry to a dormant phase with their environment through predictive or consequential means. Predictive dormancy occurs when an organism enters a dormant phase before the onset of adverse conditions. For example, photoperiod and decreasing temperature are used by many plants to predict the onset of winter. Consequential dormancy occurs when organisms enter a dormant phase after adverse conditions have arisen. This is commonly found in areas with an unpredictable climate. While very sudden changes in conditions may lead to a high mortality rate among animals relying on consequential dormancy, its use can be advantageous, as organisms remain active longer and are therefore able to make greater use of available resources.
Animals
Hibernation
Hibernation is a mechanism used by many mammals to reduce energy expenditure and survive food shortages over the winter. Hibernation may be predictive or consequential. An animal prepares for hibernation by building up a thick layer of body fat during late summer and autumn that will provide it with energy during the dormant period. During hibernation, the animal undergoes many physiological changes, including decreased heart rate (by as much as 95%) and decreased body temperature. In addition to shivering, some hibernating animals also produce body heat by non-shivering thermogenesis to avoid freezing. Non-shivering thermogenesis is a regulated process in which the proton gradient generated by electron transport in mitochondria is used to produce heat instead of ATP in brown adipose tissue. Animals that hibernate include bats, ground squirrels and other rodents, mouse lemurs, the European hedgehog and other insectivo
Document 1:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
Document 2:::
Heterothermy or heterothermia (from Greek ἕτερος heteros "other" and θέρμη thermē "heat") is a physiological term for animals that vary between self-regulating their body temperature, and allowing the surrounding environment to affect it. In other words, they exhibit characteristics of both poikilothermy and homeothermy.
Definition
Heterothermic animals are those that can switch between poikilothermic and homeothermic strategies. These changes in strategies typically occur on a daily basis or on an annual basis. More often than not, it is used as a way to dissociate the fluctuating metabolic rates seen in some small mammals and birds (e.g. bats and hummingbirds), from those of traditional cold blooded animals. In many bat species, body temperature and metabolic rate are elevated only during activity. When at rest, these animals reduce their metabolisms drastically, which results in their body temperature dropping to that of the surrounding environment. This makes them homeothermic when active, and poikilothermic when at rest. This phenomenon has been termed 'daily torpor' and was intensively studied in the Djungarian hamster. During the hibernation season, this animal shows strongly reduced metabolism each day during the rest phase while it reverts to endothermic metabolism during its active phase, leading to normal euthermic body temperatures (around 38 °C).
Larger mammals (e.g. ground squirrels) and bats show multi-day torpor bouts during hibernation (up to several weeks) in winter. During these multi-day torpor bouts, body temperature drops to ~1 °C above ambient temperature and metabolism may drop to about 1% of the normal endothermic metabolic rate. Even in these deep hibernators, the long periods of torpor is interrupted by bouts of endothermic metabolism, called arousals (typically lasting between 4–20 hours). These metabolic arousals cause body temperature to return to euthermic levels 35-37 °C. Most of the energy spent during hibernation is spent in arous
Document 3:::
Aestivation (from Latin aestas, 'summer'; also spelled estivation in American English) is a state of animal dormancy, similar to hibernation, although taking place in the summer rather than the winter. Aestivation is characterized by inactivity and a lowered metabolic rate, entered in response to high temperatures and arid conditions. It takes place during times of heat and dryness, which are often the summer months.
Invertebrate and vertebrate animals are known to enter this state to avoid damage from high temperatures and the risk of desiccation. Both terrestrial and aquatic animals undergo aestivation. Fossil records suggest that aestivation may have evolved several hundred million years ago.
Physiology
Organisms that aestivate appear to be in a fairly "light" state of dormancy, as their physiological state can be rapidly reversed, and the organism can quickly return to a normal state. A study done on Otala lactea, a snail native to parts of Europe and Northern Africa, shows that they can wake from their dormant state within ten minutes of being introduced to a wetter environment.
The primary physiological and biochemical concerns for an aestivating animal are to conserve energy, retain water in the body, ration the use of stored energy, handle the nitrogenous end products, and stabilize bodily organs, cells, and macromolecules. This can be quite a task as hot temperatures and arid conditions may last for months, in some cases for years. The depression of metabolic rate during aestivation causes a reduction in macromolecule synthesis and degradation. To stabilise the macromolecules, aestivators will enhance antioxidant defenses and elevate chaperone proteins. This is a widely used strategy across all forms of hypometabolism. These physiological and biochemical concerns appear to be the core elements of hypometabolism throughout the animal kingdom. In other words, animals which aestivate appear to go through nearly the same physiological processes as animals that hibern
Document 4:::
Roshd Biological Education is a quarterly science educational magazine covering recent developments in biology and biology education for a Persian-speaking audience of biology teachers. Founded in 1985, it is published by The Teaching Aids Publication Bureau, Organization for Educational Planning and Research, Ministry of Education, Iran. Roshd Biological Education has an editorial board composed of Iranian biologists, experts in biology education, science journalists and biology teachers.
It is read by both biology teachers and students, as a way of launching innovations and new trends in biology education, and helping biology teachers to teach biology in better and more effective ways.
Magazine layout
As of Autumn 2012, the magazine is laid out as follows:
Editorial—often offering a view of point from editor in chief on an educational and/or biological topics.
Explore— New research methods and results on biology and/or education.
World— Reports and explores on biological education worldwide.
In Brief—Summaries of research news and discoveries.
Trends—showing how new technology is altering the way we live our lives.
Point of View—Offering personal commentaries on contemporary topics.
Essay or Interview—often with a pioneer of a biological and/or educational researcher or an influential scientific educational leader.
Muslim Biologists—Short histories of Muslim Biologists.
Environment—An article on Iranian environment and its problems.
News and Reports—Offering short news and reports events on biology education.
In Brief—Short articles explaining interesting facts.
Questions and Answers—Questions about biology concepts and their answers.
Book and periodical Reviews—About new publication on biology and/or education.
Reactions—Letter to the editors.
Editorial staff
Mohammad Karamudini, editor in chief
History
Roshd Biological Education started in 1985 together with many other magazines in other science and art. The first editor was Dr. Nouri-Dalooi, th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Some animals prepare for the long winter by going into what?
A. fermentation
B. suspended animation
C. hibernation
D. pollination
Answer:
|
|
ai2_arc-379
|
multiple_choice
|
If 10 grams of water are added to 5 grams of salt, how much salt water will be made?
|
[
"2 grams",
"5 grams",
"10 grams",
"15 grams"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
(a) increases
(b) decreases
(c) stays the same
(d) impossible to tell / need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered the exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required the score as part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99th percentile) and 320 (1st percentile) respectively. The mean score for all test takers from July 2009 to July 2012 was 526, with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 2:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
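The defining structure described above can be checked mechanically. The sketch below is an assumption-level illustration (function and variable names are invented): a knowledge space over a domain Q is a family of feasible states (subsets of Q) that contains the empty set and Q itself and is closed under union.

```python
# Sketch: verifying the closure properties of a candidate knowledge space.
# A knowledge space must contain the empty state, the full domain, and the
# union of any two feasible states. Names here are illustrative only.

def is_knowledge_space(domain, states):
    states = {frozenset(s) for s in states}
    if frozenset() not in states or frozenset(domain) not in states:
        return False
    # Closure under union: combining any two feasible states stays feasible.
    return all(a | b in states for a in states for b in states)

Q = {"counting", "addition", "multiplication"}
states = [set(), {"counting"}, {"counting", "addition"},
          {"counting", "multiplication"},
          {"counting", "addition", "multiplication"}]
print(is_knowledge_space(Q, states))  # → True
```

Here "counting" acts as a prerequisite: no feasible state contains "addition" or "multiplication" without it, which is how prerequisite structure is encoded in the family of states.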
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
Document 3:::
Progress tests are longitudinal, feedback-oriented educational assessment tools for the evaluation of development and sustainability of cognitive knowledge during a learning process. A progress test is a written knowledge exam (usually involving multiple choice questions) that is usually administered to all students in the program at the same time and at regular intervals (usually twice to four times yearly) throughout the entire academic program. The test samples the complete knowledge domain expected of new graduates upon completion of their courses, regardless of the year level of the student. The differences between students' knowledge levels show in the test scores; the further a student has progressed in the curriculum, the higher the scores. As a result, these scores provide a longitudinal, repeated-measures, curriculum-independent assessment of the objectives (in knowledge) of the entire programme.
History
Since its inception in the late 1970s at both Maastricht University and the University of Missouri–Kansas City independently, the progress test of applied knowledge has been increasingly used in medical and health sciences programs across the globe. They are well established and increasingly used in medical education in both undergraduate and postgraduate medical education. They are used formatively and summatively.
Use in academic programs
The progress test is currently used by national progress test consortia in the United Kingdom, Italy, The Netherlands, and Germany (including Austria), and in individual schools in Africa, Saudi Arabia, South East Asia, the Caribbean, Australia, New Zealand, Sweden, Finland, the UK, and the USA. The National Board of Medical Examiners in the USA also provides progress testing in various countries. The feasibility of an international approach to progress testing has been recently acknowledged and was first demonstrated by Albano et al. in 1996, who compared test scores across German, Dutch and Italian medi
Document 4:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
If 10 grams of water are added to 5 grams of salt, how much salt water will be made?
A. 2 grams
B. 5 grams
C. 10 grams
D. 15 grams
Answer:
|
|
sciq-589
|
multiple_choice
|
What is the process by which the egg is fertilized by the pollen of the same flower?
|
[
"self pollination",
"self formation",
"self condensation",
"self realization"
] |
A
|
Relevant Documents:
Document 0:::
Pollen is a powdery substance produced by most types of flowers of seed plants for the purpose of sexual reproduction. It consists of pollen grains (highly reduced microgametophytes), which produce male gametes (sperm cells). Pollen grains have a hard coat made of sporopollenin that protects the gametophytes during the process of their movement from the stamens to the pistil of flowering plants, or from the male cone to the female cone of gymnosperms. If pollen lands on a compatible pistil or female cone, it germinates, producing a pollen tube that transfers the sperm to the ovule containing the female gametophyte. Individual pollen grains are small enough to require magnification to see detail. The study of pollen is called palynology and is highly useful in paleoecology, paleontology, archaeology, and forensics.
Pollen in plants is used for transferring haploid male genetic material from the anther of a single flower to the stigma of another in cross-pollination. In a case of self-pollination, this process takes place from the anther of a flower to the stigma of the same flower.
Pollen is infrequently used as food and food supplement. Because of agricultural practices, it is often contaminated by agricultural pesticides.
Structure and formation
Pollen itself is not the male gamete. It is a gametophyte, something that could be considered an entire organism, which then produces the male gamete. Each pollen grain contains vegetative (non-reproductive) cells (only a single cell in most flowering plants but several in other seed plants) and a generative (reproductive) cell. In flowering plants the vegetative tube cell produces the pollen tube, and the generative cell divides to form the two sperm nuclei.
Pollen comes in many different shapes. Some pollen grains are based on geodesic polyhedra like a soccer ball.
Formation
Pollen is produced in the microsporangia in the male cone of a conifer or other gymnosperm or in the anthers of an angiosperm flower. Pollen g
Document 1:::
Plant reproductive morphology is the study of the physical form and structure (the morphology) of those parts of plants directly or indirectly concerned with sexual reproduction.
Among all living organisms, flowers, which are the reproductive structures of angiosperms, are the most varied physically and show a correspondingly great diversity in methods of reproduction. Plants that are not flowering plants (green algae, mosses, liverworts, hornworts, ferns and gymnosperms such as conifers) also have complex interplays between morphological adaptation and environmental factors in their sexual reproduction. The breeding system, or how the sperm from one plant fertilizes the ovum of another, depends on the reproductive morphology, and is the single most important determinant of the genetic structure of nonclonal plant populations. Christian Konrad Sprengel (1793) studied the reproduction of flowering plants and for the first time it was understood that the pollination process involved both biotic and abiotic interactions. Charles Darwin's theories of natural selection utilized this work to build his theory of evolution, which includes analysis of the coevolution of flowers and their insect pollinators.
Use of sexual terminology
Plants have complex lifecycles involving alternation of generations. One generation, the sporophyte, gives rise to the next generation, the gametophyte asexually via spores. Spores may be identical isospores or come in different sizes (microspores and megaspores), but strictly speaking, spores and sporophytes are neither male nor female because they do not produce gametes. The alternate generation, the gametophyte, produces gametes, eggs and/or sperm. A gametophyte can be monoicous (bisexual), producing both eggs and sperm, or dioicous (unisexual), either female (producing eggs) or male (producing sperm).
In the bryophytes (liverworts, mosses, and hornworts), the sexual gametophyte is the dominant generation. In ferns and seed plants (inc
Document 2:::
Sterile male plants are plants which are incapable of producing pollen. This is sometimes attributed to mutations in the mitochondrial DNA which affects the Tapetum cells in anthers which are responsible for nursing developing pollen. The mutations cause the breakdown of the mitochondria in these specific cells and result in cell death and so pollen production is interrupted. These observations have now led to transgenic sterile male plants to be made in order to create hybrid seeds, by inserting transgenes which are specifically poisonous to Tapetum cells.
Plant reproduction
Document 3:::
A pollen tube is a tubular structure produced by the male gametophyte of seed plants when it germinates. Pollen tube elongation is an integral stage in the plant life cycle. The pollen tube acts as a conduit to transport the male gamete cells from the pollen grain—either from the stigma (in flowering plants) to the ovules at the base of the pistil or directly through ovule tissue in some gymnosperms. In maize, this single cell can grow long enough to traverse the length of the pistil.
Pollen tubes were first discovered by Giovanni Battista Amici in the 19th century.
They are used as a model for understanding plant cell behavior. Research is ongoing to comprehend how the pollen tube responds to extracellular guidance signals to achieve fertilization.
Description
Pollen tubes are produced by the male gametophytes of seed plants. Pollen tubes act as conduits to transport the male gamete cells from the pollen grain—either from the stigma (in flowering plants) to the ovules at the base of the pistil or directly through ovule tissue in some gymnosperms. Pollen tubes are unique to seed plants and their structures have evolved over their history since the Carboniferous period. Pollen tube formation is complex and the mechanism is not fully understood, but is of great interest to scientists because pollen tubes transport the male gametes produced by pollen grains to the female gametophyte. Once a pollen grain has implanted on a compatible stigma, its germination is initiated. During this process, the pollen grain begins to bulge outwards to form a tube-like structure, known as the pollen tube. The pollen tube structure rapidly descends down the length of the style via tip-directed growth, reaching rates of 1 cm/h, whilst carrying two non-motile sperm cells. Upon reaching the ovule the pollen tube ruptures, thereby delivering the sperm cells to the female gametophyte. In flowering plants a double fertilization event occurs. The first fertilization event produces a diplo
Document 4:::
Fertilisation or fertilization (see spelling differences), also known as generative fertilisation, syngamy and impregnation, is the fusion of gametes to give rise to a new individual organism or offspring and initiate its development. While processes such as insemination or pollination which happen before the fusion of gametes are also sometimes informally referred to as fertilisation, these are technically separate processes. The cycle of fertilisation and development of new individuals is called sexual reproduction. During double fertilisation in angiosperms the haploid male gamete combines with two haploid polar nuclei to form a triploid primary endosperm nucleus by the process of vegetative fertilisation.
History
In Antiquity, Aristotle conceived the formation of new individuals through the fusion of male and female fluids, with form and function emerging gradually, in a mode he called epigenetic.
In 1784, Spallanzani established the need of interaction between the female's ovum and male's sperm to form a zygote in frogs. In 1827, von Baer observed a therian mammalian egg for the first time. Oscar Hertwig (1876), in Germany, described the fusion of nuclei of spermatozoa and of ova from sea urchin.
Evolution
The evolution of fertilisation is related to the origin of meiosis, as both are part of sexual reproduction, which originated in eukaryotes. One theory states that meiosis originated from mitosis.
Fertilisation in plants
The gametes that participate in fertilisation of plants are the sperm (male) and the egg (female) cell. Various families of plants have differing methods by which the gametes produced by the male and female gametophytes come together and are fertilised. In Bryophyte land plants, fertilisation of the sperm and egg takes place within the archegonium. In seed plants, the male gametophyte is called a pollen grain. After pollination, the pollen grain germinates, and a pollen tube grows and penetrates the ovule through a tiny pore called a mic
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the process by which the egg is fertilized by the pollen of the same flower?
A. self pollination
B. self formation
C. self condensation
D. self realization
Answer:
|
|
sciq-3429
|
multiple_choice
|
What do you require to test a hypothesis?
|
[
"estimates",
"conclusion",
"opinion",
"data"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
impossible to tell / need more information
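The thermodynamics ConcepTest above can be checked quantitatively. A minimal sketch, not from the source, assuming a monatomic ideal gas and a reversible adiabatic expansion, uses the relation T * V**(gamma - 1) = const:

```python
# Illustrative numbers (assumed, not from the source):
gamma = 5 / 3          # heat-capacity ratio of a monatomic ideal gas
T1, V1 = 300.0, 1.0    # initial temperature (K) and volume (arbitrary units)
V2 = 2.0 * V1          # the gas expands to twice its volume

# Reversible adiabatic process: T * V**(gamma - 1) is constant
T2 = T1 * (V1 / V2) ** (gamma - 1)
print(round(T2, 1))    # ~189.0 K, i.e. the temperature decreases
```

Since V2 > V1 and gamma > 1, T2 < T1 always holds, which is why "decreases" is the expected answer to the conceptual question.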
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 2:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
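The structure described above can be sketched with plain Python sets. A knowledge space is a family of feasible knowledge states (subsets of the domain) that contains the empty set and the full domain and is closed under union; the "outer fringe" of a state captures what the student is ready to learn. The three-skill toy domain and all names below are illustrative assumptions, not from the source:

```python
from itertools import combinations

# Toy domain of three skills (assumed for illustration)
DOMAIN = frozenset({"counting", "addition", "multiplication"})

# A family of feasible knowledge states
STATES = {
    frozenset(),
    frozenset({"counting"}),
    frozenset({"counting", "addition"}),
    frozenset({"counting", "addition", "multiplication"}),
}

def is_knowledge_space(states, domain):
    """Check the defining axioms: contains the empty set and the full
    domain, and is closed under union of any two states."""
    if frozenset() not in states or domain not in states:
        return False
    return all(a | b in states for a, b in combinations(states, 2))

def outer_fringe(state, states, domain):
    """Skills the learner is ready to learn: items whose addition to
    the current state yields another feasible state."""
    return {x for x in domain - state if state | {x} in states}

print(is_knowledge_space(STATES, DOMAIN))                        # True
print(sorted(outer_fringe(frozenset({"counting"}), STATES, DOMAIN)))  # ['addition']
```

Here a learner who has mastered only "counting" is ready to learn "addition" but not yet "multiplication", since only the former extension is a feasible state.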
Document 3:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student was planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology, and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591.
On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
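The scoring rule in the paragraph above (one point per correct answer, minus a quarter point per incorrect answer, zero for blanks) amounts to a one-line formula. A small sketch with hypothetical numbers:

```python
def raw_score(correct, incorrect, blank=0):
    """Raw score per the rule described in the text:
    +1 per correct answer, -1/4 per incorrect answer, 0 per blank."""
    return correct - 0.25 * incorrect

# Hypothetical example: 60 correct, 12 incorrect, 8 blank out of 80
print(raw_score(60, 12, 8))  # 57.0
```

The raw score was then converted to the 200-800 scale; that conversion table is not given in the text, so it is not modeled here.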
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 4:::
Advanced Placement (AP) Statistics (also known as AP Stats) is a college-level high school statistics course offered in the United States through the College Board's Advanced Placement program. This course is equivalent to a one semester, non-calculus-based introductory college statistics course and is normally offered to sophomores, juniors and seniors in high school.
One of the College Board's more recent additions, the AP Statistics exam was first administered in May 1996 to supplement the AP program's math offerings, which had previously consisted of only AP Calculus AB and BC. In the United States, enrollment in AP Statistics classes has increased at a higher rate than in any other AP class.
Students may receive college credit or upper-level college course placement upon passing the three-hour exam ordinarily administered in May. The exam consists of a multiple-choice section and a free-response section that are both 90 minutes long. Each section is weighted equally in determining the students' composite scores.
History
The Advanced Placement program has offered students the opportunity to pursue college-level courses while in high school. Along with the Educational Testing Service, the College Board administered the first AP Statistics exam in May 1997. The course was first taught to students in the 1996-1997 academic year. Prior to that, the only mathematics courses offered in the AP program included AP Calculus AB and BC. Students who didn't have a strong background in college-level math, however, found the AP Calculus program inaccessible and sometimes declined to take a math course in their senior year. Since the number of students required to take statistics in college is almost as large as the number of students required to take calculus, the College Board decided to add an introductory statistics course to the AP program. Since the prerequisites for such a program don't require mathematical concepts beyond those typically taught in a second-year al
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do you require to test a hypothesis?
A. estimates
B. conclusion
C. opinion
D. data
Answer:
|
|
sciq-4385
|
multiple_choice
|
What is the term for evolution over geologic time above the level of the species?
|
[
"macroevolution",
"speciation",
"mutation",
"microevolution"
] |
A
|
Relevant Documents:
Document 0:::
Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology.
The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis.
Subfields
Evolution is the central unifying concept in biology. Biology can be divided in various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution
Document 1:::
Megaevolution describes the most dramatic events in evolution. It is no longer suggested that the evolutionary processes involved are necessarily special, although in some cases they might be. Whereas macroevolution can apply to relatively modest changes that produced diversification of species and genera and are readily compared to microevolution, "megaevolution" is used for great changes. Megaevolution has been extensively debated because it has been seen as a possible objection to Charles Darwin's theory of gradual evolution by natural selection.
A list was prepared by John Maynard Smith and Eörs Szathmáry which they called The Major Transitions in Evolution. On the 1999 edition of the list they included:
Replicating molecules: change to populations of molecules in protocells
Independent replicators leading to chromosomes
RNA as gene and enzyme change to DNA genes and protein enzymes
Bacterial cells (prokaryotes) leading to cells (eukaryotes) with nuclei and organelles
Asexual clones leading to sexual populations
Single-celled organisms leading to fungi, plants and animals
Solitary individuals leading to colonies with non-reproducing castes (termites, ants & bees)
Primate societies leading to human societies with language
Some of these topics had been discussed before.
Numbers one to six on the list are events which are of huge importance, but about which we know relatively little. All occurred before (and mostly very much before) the fossil record started, or at least before the Phanerozoic eon.
Numbers seven and eight on the list are of a different kind from the first six, and have generally not been considered by the other authors. Number four is of a type which is not covered by traditional evolutionary theory. The origin of eukaryotic cells is probably due to symbiosis between prokaryotes. This is a kind of evolution which must be a rare event.
The Cambrian radiation example
The Cambrian explosion or Cambrian radiation was the relatively rapid appeara
Document 2:::
In biology, evolution is the process of change in all forms of life over generations, and evolutionary biology is the study of how evolution occurs. Biological populations evolve through genetic changes that correspond to changes in the organisms' observable traits. Genetic changes include mutations, which are caused by damage or replication errors in organisms' DNA. As the genetic variation of a population drifts randomly over generations, natural selection gradually leads traits to become more or less common based on the relative reproductive success of organisms with those traits.
The age of the Earth is about 4.5 billion years. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago. Evolution does not attempt to explain the origin of life (covered instead by abiogenesis), but it does explain how early lifeforms evolved into the complex ecosystem that we see today. Based on the similarities between all present-day organisms, all life on Earth is assumed to have originated through common descent from a last universal ancestor from which all known species have diverged through the process of evolution.
All individuals have hereditary material in the form of genes received from their parents, which they pass on to any offspring. Among offspring there are variations of genes due to the introduction of new genes via random changes called mutations or via reshuffling of existing genes during sexual reproduction. The offspring differs from the parent in minor random ways. If those differences are helpful, the offspring is more likely to survive and reproduce. This means that more offspring in the next generation will have that helpful difference and individuals will not have equal chances of reproductive success. In this way, traits that result in organisms being better adapted to their living conditions become more common in descendant populations. These differences accumulate resulting in changes within the population. This proce
Document 3:::
The history of life on Earth seems to show a clear trend; for example, it seems intuitive that there is a trend towards increasing complexity in living organisms. More recently evolved organisms, such as mammals, appear to be much more complex than organisms, such as bacteria, which have existed for a much longer period of time. However, there are theoretical and empirical problems with this claim. From a theoretical perspective, it appears that there is no reason to expect evolution to result in any largest-scale trends, although small-scale trends, limited in time and space, are expected (Gould, 1997). From an empirical perspective, it is difficult to measure complexity and, when it has been measured, the evidence does not support a largest-scale trend (McShea, 1996).
History
Many of the founding figures of evolution supported the idea of Evolutionary progress which has fallen from favour, but the work of Francisco J. Ayala and Michael Ruse suggests is still influential.
Hypothetical largest-scale trends
McShea (1998) discusses eight features of organisms that might indicate largest-scale trends in evolution: entropy, energy intensiveness, evolutionary versatility, developmental depth, structural depth, adaptedness, size, complexity. He calls these "live hypotheses", meaning that trends in these features are currently being considered by evolutionary biologists. McShea observes that the most popular hypothesis, among scientists, is that there is a largest-scale trend towards increasing complexity.
Evolutionary theorists agree that there are local trends in evolution, such as increasing brain size in hominids, but these directional changes do not persist indefinitely, and trends in opposite directions also occur (Gould, 1997). Evolution causes organisms to adapt to their local environment; when the environment changes, the direction of the trend may change. The question of whether there is evolutionary progress is better formulated as the question of whether
Document 4:::
Punctuated gradualism is a microevolutionary hypothesis that refers to a species that has "relative stasis over a considerable part of its total duration [and] underwent periodic, relatively rapid, morphologic change that did not lead to lineage branching". It is one of the three common models of evolution.
Description
While the traditional model of paleontology, the phylogenetic model, posits that features evolved slowly without any direct association with speciation, the relatively newer and more controversial idea of punctuated equilibrium claims that major evolutionary changes don't happen over a gradual period but in localized, rare, rapid events of branching speciation.
Punctuated gradualism is considered to be a variation of these models, lying somewhere in between the phyletic gradualism model and the punctuated equilibrium model. It states that speciation is not needed for a lineage to rapidly evolve from one equilibrium to another but may show rapid transitions between long-stable states.
History
In 1983, Malmgren and colleagues published a paper called "Evidence for punctuated gradualism in the late Neogene Globorotalia tumida lineage of planktonic foraminifera." This paper studied the lineage of planktonic foraminifera, specifically the evolutionary transition from G. plesiotumida to G. tumida across the Miocene/Pliocene boundary. The study found that the G. tumida lineage, while remaining in relative stasis over a considerable part of its total duration underwent periodic, relatively rapid, morphologic change that did not lead to lineage branching. Based on these findings, Malmgren and colleagues introduced a new mode of evolution and proposed to call it "punctuated gradualism." There is strong evidence supporting both gradual evolution of a species over time and rapid events of species evolution separated by periods of little evolutionary change. Organisms have a great propensity to adapt and evolve depending on the circumstances.
Studies
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for evolution over geologic time above the level of the species?
A. macroevolution
B. speciation
C. mutation
D. microevolution
Answer:
|
|
sciq-10096
|
multiple_choice
|
What type of physics explains the behavior of visible light and electromagnetic waves?
|
[
"Thermodynamics",
"statistics",
"Quantum mechanics",
"optics"
] |
D
|
Relevant Documents:
Document 0:::
Applied physics is the application of physics to solve scientific or engineering problems. It is usually considered a bridge or a connection between physics and engineering.
"Applied" is distinguished from "pure" by a subtle combination of factors, such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. Applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of scientific principles in practical devices and systems and with the application of physics in other areas of science and high technology.
Examples of research and development areas
Accelerator physics
Acoustics
Atmospheric physics
Biophysics
Brain–computer interfacing
Chemistry
Chemical physics
Differentiable programming
Artificial intelligence
Scientific computing
Engineering physics
Chemical engineering
Electrical engineering
Electronics
Sensors
Transistors
Materials science and engineering
Metamaterials
Nanotechnology
Semiconductors
Thin films
Mechanical engineering
Aerospace engineering
Astrodynamics
Electromagnetic propulsion
Fluid mechanics
Military engineering
Lidar
Radar
Sonar
Stealth technology
Nuclear engineering
Fission reactors
Fusion reactors
Optical engineering
Photonics
Cavity optomechanics
Lasers
Photonic crystals
Geophysics
Materials physics
Medical physics
Health physics
Radiation dosimetry
Medical imaging
Magnetic resonance imaging
Radiation therapy
Microscopy
Scanning probe microscopy
Atomic force microscopy
Scanning tunneling microscopy
Scanning electron microscopy
Transmission electron microscopy
Nuclear physics
Fission
Fusion
Optical physics
Nonlinear optics
Quantum optics
Plasma physics
Quantum technology
Quantum computing
Quantum cryptography
Renewable energy
Space physics
Spectroscopy
See also
Applied science
Applied mathematics
Engineering
Engineering Physics
High Technology
Document 1:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
Document 2:::
The study of electromagnetism in higher education, as a fundamental part of both physics and engineering, is typically accompanied by textbooks devoted to the subject. The American Physical Society and the American Association of Physics Teachers recommend a full year of graduate study in electromagnetism for all physics graduate students. A joint task force by those organizations in 2006 found that in 76 of the 80 US physics departments surveyed, a course using John David Jackson's Classical Electrodynamics was required for all first year graduate students. For undergraduates, there are several widely used textbooks, including David Griffiths' Introduction to Electrodynamics and Electricity and Magnetism by Edward Mills Purcell and D. J. Morin. Also at an undergraduate level, Richard Feynman's classic The Feynman Lectures on Physics is available online to read for free.
Undergraduate
There are several widely used undergraduate textbooks in electromagnetism, including David Griffiths' Introduction to Electrodynamics as well as Electricity and Magnetism by Edward Mills Purcell and D. J. Morin. The Feynman Lectures on Physics also include a volume on electromagnetism that is available to read online for free, through the California Institute of Technology. In addition, there are popular physics textbooks that include electricity and magnetism among the material they cover, such as David Halliday and Robert Resnick's Fundamentals of Physics.
Graduate
A 2006 report by a joint taskforce between the American Physical Society and the American Association of Physics Teachers found that 76 of the 80 physics departments surveyed require a first-year graduate course in John David Jackson's Classical Electrodynamics. This made Jackson's book the most popular textbook in any field of graduate-level physics, with Herbert Goldstein's Classical Mechanics as the second most popular with adoption at 48 universities. In a 2015 review of Andrew Zangwill's Modern Electrodynamics in
Document 3:::
Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain and predict natural phenomena. This is in contrast to experimental physics, which uses experimental tools to probe these phenomena.
The advancement of science generally depends on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigour while giving little weight to experiments and observations. For example, while developing special relativity, Albert Einstein was concerned with the Lorentz transformation which left Maxwell's equations invariant, but was apparently uninterested in the Michelson–Morley experiment on Earth's drift through a luminiferous aether. Conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, previously an experimental result lacking a theoretical formulation.
Overview
A physical theory is a model of physical events. It is judged by the extent to which its predictions agree with empirical observations. The quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations. A physical theory differs from a mathematical theorem in that while both are based on some form of axioms, judgment of mathematical applicability is not based on agreement with any experimental results. A physical theory similarly differs from a mathematical theory, in the sense that the word "theory" has a different meaning in mathematical terms.
A physical theory involves one or more relationships between various measurable quantities. Archimedes realized that a ship floats by displacing its mass of water, Pythagoras understood the relation between the length of a vibrating string and the musical tone it produces. Other examples include entropy as a measure of the uncertainty regarding the positions and motions of unseen particles and the quantum mechanical i
Document 4:::
Physics First is an educational program in the United States, that teaches a basic physics course in the ninth grade (usually 14-year-olds), rather than the biology course which is more standard in public schools. This course relies on the limited math skills that the students have from pre-algebra and algebra I. With these skills students study a broad subset of the introductory physics canon with an emphasis on topics which can be experienced kinesthetically or without deep mathematical reasoning. Furthermore, teaching physics first is better suited for English Language Learners, who would be overwhelmed by the substantial vocabulary requirements of Biology.
Physics First began as an organized movement among educators around 1990, and has been slowly catching on throughout the United States. The most prominent movement championing Physics First is Leon Lederman's ARISE (American Renaissance in Science Education).
Many proponents of Physics First argue that turning this order around lays the foundations for better understanding of chemistry, which in turn will lead to more comprehension of biology. Due to the tangible nature of most introductory physics experiments, Physics First also lends itself well to an introduction to inquiry-based science education, where students are encouraged to probe the workings of the world in which they live.
The majority of high schools which have implemented "physics first" do so by way of offering two separate classes, at two separate levels: simple physics concepts in 9th grade, followed by more advanced physics courses in 11th or 12th grade. In schools with this curriculum, nearly all 9th grade students take a "Physical Science", or "Introduction to Physics Concepts" course. These courses focus on concepts that can be studied with skills from pre-algebra and algebra I. With these ideas in place, students then can be exposed to ideas with more physics related content in chemistry, and other science electives. After th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of physics explains the behavior of visible light and electromagnetic waves?
A. Thermodynamics
B. statistics
C. Quantum mechanics
D. optics
Answer:
|
|
ai2_arc-588
|
multiple_choice
|
Ellie is growing a vegetable garden. In which season do the plants in Ellie's garden receive the most energy from the Sun for growing?
|
[
"fall",
"spring",
"summer",
"winter"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the students' chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
A season is a division of the year marked by changes in weather, ecology, and the amount of daylight. The growing season is that portion of the year in which local conditions (i.e. rainfall, temperature, daylight) permit normal plant growth. While each plant or crop has a specific growing season that depends on its genetic adaptation, growing seasons can generally be grouped into macro-environmental classes.
The axial tilt of the Earth inherently affects growing seasons across the globe.
Geography
Geographic conditions have major impacts on the growing season of any given area. Latitude is one of the major factors in the length of the growing season. The further one travels from the equator, the lower the angle of the Sun in the sky. Consequently, sunlight is less direct, and the low Sun angle means that soil takes longer to warm during the spring months, so the growing season begins later. The other factor is altitude: high elevations have cooler temperatures, which shortens the growing season compared with a low-lying area at the same latitude.
Season extension
Locations
North America
The continental United States ranges from 49° north at the US-Canadian border to 25° north at the southern tip of the US-Mexican border. Most populated areas of Canada are below the 55th parallel. North of the 45th parallel, the growing season is generally 4–5 months, beginning in late April or early May and continuing to late September-early October, and is characterized by warm summers and cold winters with heavy snow. South of the 30th parallel, the growing season is year-round in many areas with hot summers and mild winters. Cool season crops such as peas, lettuce, and spinach are planted in fall or late winter, while warm season crops such as beans and corn are planted in late winter to early spring. In the desert Southwest, the growing season effectively runs in winter, from October to April as the summer months are characterized by extreme heat and arid conditions,
Document 3:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered this exam three times per year: once in April, once in October, and once in November. Some graduate programs in the United States recommended taking this exam, while others required the exam score as part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test.
Scores were scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores were 760 (corresponding to the 99th percentile) and 320 (1st percentile), respectively. The mean score for all test takers from July 2009 to July 2012 was 526, with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 4:::
The SAT Subject Test in Biology was the name of a one-hour multiple-choice test on biology given by the College Board. A student chose whether to take the test depending upon the college entrance requirements of the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology, and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular was 630, while the average for Ecological was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Ellie is growing a vegetable garden. In which season do the plants in Ellie's garden receive the most energy from the Sun for growing?
A. fall
B. spring
C. summer
D. winter
Answer:
|
|
sciq-3450
|
multiple_choice
|
Fish use some of their fins to propel themselves through the water and others to do what?
|
[
"breathe",
"rest",
"reproduce",
"steer"
] |
D
|
Relevant Documents:
Document 0:::
Fish anatomy is the study of the form or morphology of fish. It can be contrasted with fish physiology, which is the study of how the component parts of fish function together in the living fish. In practice, fish anatomy and fish physiology complement each other, the former dealing with the structure of a fish, its organs or component parts and how they are put together, such as might be observed on the dissecting table or under the microscope, and the latter dealing with how those components function together in living fish.
The anatomy of fish is often shaped by the physical characteristics of water, the medium in which fish live. Water is much denser than air, holds a relatively small amount of dissolved oxygen, and absorbs more light than air does. The body of a fish is divided into a head, trunk and tail, although the divisions between the three are not always externally visible. The skeleton, which forms the support structure inside the fish, is either made of cartilage (cartilaginous fish) or bone (bony fish). The main skeletal element is the vertebral column, composed of articulating vertebrae which are lightweight yet strong. The ribs attach to the spine and there are no limbs or limb girdles. The main external features of the fish, the fins, are composed of either bony or soft spines called rays which, with the exception of the caudal fins, have no direct connection with the spine. They are supported by the muscles which compose the main part of the trunk.
The heart has two chambers and pumps the blood through the respiratory surfaces of the gills and then around the body in a single circulatory loop. The eyes are adapted for seeing underwater and have only local vision. There is an inner ear but no external or middle ear. Low-frequency vibrations are detected by the lateral line system of sense organs that run along the length of the sides of fish, which responds to nearby movements and to changes in water pressure.
Sharks and rays are basal fish with
Document 1:::
Fish locomotion is the various types of animal locomotion used by fish, principally by swimming. This is achieved in different groups of fish by a variety of mechanisms of propulsion, most often by wave-like lateral flexions of the fish's body and tail in the water, and in various specialised fish by motions of the fins. The major forms of locomotion in fish are:
Anguilliform, in which a wave passes evenly along a long slender body;
Sub-carangiform, in which the wave increases quickly in amplitude towards the tail;
Carangiform, in which the wave is concentrated near the tail, which oscillates rapidly;
Thunniform, rapid swimming with a large powerful crescent-shaped tail; and
Ostraciiform, with almost no oscillation except of the tail fin.
More specialized fish include movement by pectoral fins with a mainly stiff body, opposed sculling with dorsal and anal fins, as in the sunfish; and movement by propagating a wave along the long fins with a motionless body, as in the knifefish or featherbacks.
In addition, some fish can variously "walk" (i.e., crawl over land using the pectoral and pelvic fins), burrow in mud, leap out of the water and even glide temporarily through the air.
Swimming
Fish swim by exerting force against the surrounding water. There are exceptions, but this is normally achieved by the fish contracting muscles on either side of its body in order to generate waves of flexion that travel the length of the body from nose to tail, generally getting larger as they go along. The vector forces exerted on the water by such motion cancel out laterally, but generate a net force backwards which in turn pushes the fish forward through the water. Most fishes generate thrust using lateral movements of their body and caudal fin, but many other species move mainly using their median and paired fins. The latter group swim slowly, but can turn rapidly, as is needed when living in coral reefs for example. But they can't swim as fast as fish using their bodies an
Document 2:::
A fish (plural: fish or fishes) is an aquatic, craniate, gill-bearing animal that lacks limbs with digits. Included in this definition are the living hagfish, lampreys, and cartilaginous and bony fish as well as various extinct related groups. Approximately 95% of living fish species are ray-finned fish, belonging to the class Actinopterygii, with around 99% of those being teleosts.
The earliest organisms that can be classified as fish were soft-bodied chordates that first appeared during the Cambrian period. Although they lacked a true spine, they possessed notochords which allowed them to be more agile than their invertebrate counterparts. Fish would continue to evolve through the Paleozoic era, diversifying into a wide variety of forms. Many fish of the Paleozoic developed external armor that protected them from predators. The first fish with jaws appeared in the Silurian period, after which many (such as sharks) became formidable marine predators rather than just the prey of arthropods.
Most fish are ectothermic ("cold-blooded"), allowing their body temperatures to vary as ambient temperatures change, though some of the large active swimmers like white shark and tuna can hold a higher core temperature. Fish can acoustically communicate with each other, most often in the context of feeding, aggression or courtship.
Fish are abundant in most bodies of water. They can be found in nearly all aquatic environments, from high mountain streams (e.g., char and gudgeon) to the abyssal and even hadal depths of the deepest oceans (e.g., cusk-eels and snailfish), although no species has yet been documented in the deepest 25% of the ocean. With 34,300 described species, fish exhibit greater species diversity than any other group of vertebrates.
Fish are an important resource for humans worldwide, especially as food. Commercial and subsistence fishers hunt fish in wild fisheries or farm them in ponds or in cages in the ocean (in aquaculture). They are also caught by recreational
Document 3:::
Aquatic locomotion or swimming is biologically propelled motion through a liquid medium. The simplest propulsive systems are composed of cilia and flagella. Swimming has evolved a number of times in a range of organisms including arthropods, fish, molluscs, amphibians, reptiles, birds, and mammals.
Evolution of swimming
Swimming evolved a number of times in unrelated lineages. Supposed jellyfish fossils occur in the Ediacaran, but the first free-swimming animals appear in the Early to Middle Cambrian. These are mostly related to the arthropods, and include the Anomalocaridids, which swam by means of lateral lobes in a fashion reminiscent of today's cuttlefish. Cephalopods joined the ranks of the nekton in the late Cambrian, and chordates were probably swimming from the Early Cambrian. Many terrestrial animals retain some capacity to swim, however some have returned to the water and developed the capacities for aquatic locomotion. Most apes (including humans), however, lost the swimming instinct.
In 2013 Pedro Renato Bender, a research fellow at the University of the Witwatersrand's Institute for Human Evolution, proposed a theory to explain the loss of that instinct. Termed the Saci last common ancestor hypothesis (after Saci, a Brazilian folklore character who cannot cross water barriers), it holds that the loss of instinctive swimming ability in apes is best explained as a consequence of constraints related to the adaptation to an arboreal life in the last common ancestor of apes. Bender hypothesized that the ancestral ape increasingly avoided deep-water bodies when the risks of being exposed to water were clearly higher than the advantages of crossing them. A decreasing contact with water bodies then could have led to the disappearance of the doggy paddle instinct.
Micro-organisms
Microbial swimmers, sometimes called microswimmers, are microscopic entities that have the ability to move in fluid or aquatic environment. Natural microswimmers are found e
Document 4:::
Undulatory locomotion is the type of motion characterized by wave-like movement patterns that act to propel an animal forward. Examples of this type of gait include crawling in snakes, or swimming in the lamprey. Although this is typically the type of gait utilized by limbless animals, some creatures with limbs, such as the salamander, forgo use of their legs in certain environments and exhibit undulatory locomotion. In robotics this movement strategy is studied in order to create novel robotic devices capable of traversing a variety of environments.
Environmental interactions
In limbless locomotion, forward locomotion is generated by propagating flexural waves along the length of the animal's body. Forces generated between the animal and surrounding environment lead to a generation of alternating sideways forces that act to move the animal forward. These forces generate thrust and drag.
Hydrodynamics
Simulation predicts that thrust and drag are dominated by viscous forces at low Reynolds numbers and inertial forces at higher Reynolds numbers. When the animal swims in a fluid, two main forces are thought to play a role:
Skin Friction: Generated due to the resistance of a fluid to shearing and is proportional to speed of the flow. This dominates undulatory swimming in spermatozoa and the nematode
Form Force: Generated by the differences in pressure on the surface of the body and it varies with the square of flow speed.
At low Reynolds number (Re~100), skin friction accounts for nearly all of the thrust and drag. For those animals which undulate at intermediate Reynolds number (Re~101), such as the Ascidian larvae, both skin friction and form force account for the production of drag and thrust. At high Reynolds number (Re~102), both skin friction and form force act to generate drag, but only form force produces thrust.
Kinematics
In animals that move without use of limbs, the most common feature of the locomotion is a rostral to caudal wave that travel
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Fish use some of their fins to propel themselves through the water and others to do what?
A. breathe
B. rest
C. reproduce
D. steer
Answer:
|
|
sciq-6343
|
multiple_choice
|
What term describes a type of redox reaction in which the same substance is both oxidized and reduced?
|
[
"disproportion",
"interposition",
"disapprobation",
"misappropriation"
] |
A
|
Relevant Documents:
Document 0:::
In situ chemical reduction (ISCR) is a type of environmental remediation technique used for soil and/or groundwater remediation to reduce the concentrations of targeted environmental contaminants to acceptable levels. It is the mirror process of In Situ Chemical Oxidation (ISCO). ISCR is usually applied in the environment by injecting chemically reductive additives in liquid form into the contaminated area or placing a solid medium of chemical reductants in the path of a contaminant plume. It can be used to remediate a variety of organic compounds, including some that are resistant to natural degradation.
The in situ in ISCR is just Latin for "in place", signifying that ISCR is a chemical reduction reaction that occurs at the site of the contamination. Like ISCO, it is able to decontaminate many compounds, and, in theory, ISCR could be more effective in ground water remediation than ISCO.
Chemical reduction is one half of a redox reaction, which results in the gain of electrons. One of the reactants in the reaction becomes oxidized, or loses electrons, while the other reactant becomes reduced, or gains electrons. In ISCR, reducing compounds, compounds that accept electrons given by other compounds in a reaction, are used to change the contaminants into harmless compounds.
History
Early work examined dechlorination with copper. Substrates included DDT, endrin, chloroform, and hexachlorocyclopentadiene. Aluminum and magnesium behave similarly in the laboratory. Ground water treatment most generally focuses on the use of iron.
Reductants
Zero valent metals (ZVMs)
Zero-valent metals are the main reductants used in ISCR. The most common metal used is iron, in the form of ZVI (zero valent iron), and it is also the metal longest in use. However, some studies show that zero valent zinc (ZVZ) could be up to ten times more effective at eradicating the contaminants than ZVI. Some applications of ZVMs are to clean up Trichloroethylene (TCE) and Hexavalent chromium
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
The limiting reagent (or limiting reactant or limiting agent) in a chemical reaction is a reactant that is totally consumed when the chemical reaction is completed. The amount of product formed is limited by this reagent, since the reaction cannot continue without it. If one or more other reagents are present in excess of the quantities required to react with the limiting reagent, they are described as excess reagents or excess reactants (sometimes abbreviated as "xs"), or to be in abundance.
The limiting reagent must be identified in order to calculate the percentage yield of a reaction since the theoretical yield is defined as the amount of product obtained when the limiting reagent reacts completely. Given the balanced chemical equation, which describes the reaction, there are several equivalent ways to identify the limiting reagent and evaluate the excess quantities of other reagents.
Method 1: Comparison of reactant amounts
This method is most useful when there are only two reactants. One reactant (A) is chosen, and the balanced chemical equation is used to determine the amount of the other reactant (B) necessary to react with A. If the amount of B actually present exceeds the amount required, then B is in excess and A is the limiting reagent. If the amount of B present is less than required, then B is the limiting reagent.
Example for two reactants
Consider the combustion of benzene, represented by the following chemical equation:
2 C6H6(l) + 15 O2(g) -> 12 CO2(g) + 6 H2O(l)
This means that 15 moles of molecular oxygen (O2) are required to react with 2 moles of benzene (C6H6).
The amount of oxygen required for other quantities of benzene can be calculated using cross-multiplication (the rule of three). For example,
if 1.5 mol C6H6 is present, 11.25 mol O2 is required:
If in fact 18 mol O2 are present, there will be an excess of (18 - 11.25) = 6.75 mol of unreacted oxygen when all the benzene is consumed. Benzene is then the limiting reagent.
This concl
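The comparison method above can be sketched numerically. This is a minimal illustration, not part of the source: the function name `limiting_reagent` is invented, and the figures are the benzene-combustion numbers from the example (2 C6H6 : 15 O2).

```python
# Identify the limiting reagent by comparing the available moles of B
# against the stoichiometric requirement (method 1 from the text).
def limiting_reagent(moles_a, coeff_a, moles_b, coeff_b):
    # Moles of B needed to consume all of A, by cross-multiplication.
    needed_b = moles_a * coeff_b / coeff_a
    if needed_b > moles_b:
        return "B"  # B runs out before A is consumed
    if needed_b < moles_b:
        return "A"  # B is in excess, so A limits the reaction
    return "neither (exact stoichiometry)"

# Example from the text: 1.5 mol C6H6 (coefficient 2) with 18 mol O2
# (coefficient 15); 11.25 mol O2 is required, so benzene is limiting.
print(1.5 * 15 / 2)                       # prints 11.25
print(limiting_reagent(1.5, 2, 18, 15))   # prints "A"
```

With 18 mol O2 present, the 6.75 mol excess from the text is simply `18 - 11.25`.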
Document 3:::
Nitrous acid (molecular formula ) is a weak and monoprotic acid known only in solution, in the gas phase and in the form of nitrite () salts. Nitrous acid is used to make diazonium salts from amines. The resulting diazonium salts are reagents in azo coupling reactions to give azo dyes.
Structure
In the gas phase, the planar nitrous acid molecule can adopt both a syn and an anti form. The anti form predominates at room temperature, and IR measurements indicate it is more stable by around 2.3 kJ/mol.
Preparation
Nitrous acid is usually generated by acidification of aqueous solutions of sodium nitrite with a mineral acid. The acidification is usually conducted at ice temperatures, and the HNO2 is consumed in situ. Free nitrous acid is unstable and decomposes rapidly.
Nitrous acid can also be produced by dissolving dinitrogen trioxide in water according to the equation
N2O3 + H2O → 2 HNO2
Reactions
Nitrous acid is the main chromophore in the Liebermann reagent, used to spot-test for alkaloids.
Decomposition
Gaseous nitrous acid, which is rarely encountered, decomposes into nitrogen dioxide, nitric oxide, and water:
2 HNO2 → NO2 + NO + H2O
Nitrogen dioxide disproportionates into nitric acid and nitrous acid in aqueous solution:
2 NO2 + H2O → HNO3 + HNO2
In warm or concentrated solutions, the overall reaction amounts to production of nitric acid, water, and nitric oxide:
3 HNO2 → HNO3 + 2 NO + H2O
The nitric oxide can subsequently be re-oxidized by air to nitric acid, making the overall reaction:
2 HNO2 + O2 → 2 HNO3
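The decomposition bookkeeping above can be verified by counting atoms on each side of the overall equation 3 HNO2 → HNO3 + 2 NO + H2O. This is a quick sketch added for illustration; the per-molecule atom counts are written out by hand.

```python
# Verify atom balance for 3 HNO2 -> HNO3 + 2 NO + H2O.
from collections import Counter

# Hand-written atom counts per molecule (hypothetical helper table).
ATOMS = {
    "HNO2": {"H": 1, "N": 1, "O": 2},
    "HNO3": {"H": 1, "N": 1, "O": 3},
    "NO":   {"N": 1, "O": 1},
    "H2O":  {"H": 2, "O": 1},
}

def side_total(species):
    """Sum atom counts over (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in species:
        for atom, n in ATOMS[formula].items():
            total[atom] += coeff * n
    return total

left = side_total([(3, "HNO2")])
right = side_total([(1, "HNO3"), (2, "NO"), (1, "H2O")])
print(left == right)  # prints True: the equation is balanced
```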
Reduction
With I− and Fe2+ ions, NO is formed:
2 HNO2 + 2 KI + H2SO4 → I2 + 2 NO + 2 H2O + K2SO4
2 HNO2 + 2 FeSO4 + H2SO4 → Fe2(SO4)3 + 2 NO + 2 H2O
With Sn2+ ions, N2O is formed:
Document 4:::
Exclusive or or exclusive disjunction or exclusive alternation, also known as non-equivalence which is the negation of equivalence, is a logical operation that is true if and only if its arguments differ (one is true, the other is false).
It is symbolized by the prefix operator and by the infix operators XOR (, , or ), EOR, EXOR, , , , ⩛, , and .
It gains the name "exclusive or" because the meaning of "or" is ambiguous when both operands are true; the exclusive or operator excludes that case. This is sometimes thought of as "one or the other but not both" or "either one or the other". This could be written as "A or B, but not, A and B".
XOR is equivalent to logical inequality (NEQ) since it is true only when the inputs are different (one is true, and one is false). The negation of XOR is the logical biconditional, which yields true if and only if the two inputs are the same, which is equivalent to logical equality (EQ).
Since it is associative, it may be considered to be an n-ary operator which is true if and only if an odd number of arguments are true. That is, a XOR b XOR ... may be treated as XOR(a,b,...).
Definition
The truth table of XOR shows that it outputs true whenever the inputs differ:
Equivalences, elimination, and introduction
Exclusive disjunction essentially means 'either one, but not both nor none'. In other words, the statement is true if and only if one is true and the other is false. For example, if two horses are racing, then one of the two will win the race, but not both of them. The exclusive disjunction p ⊕ q, also denoted p XOR q, can be expressed in terms of the logical conjunction ("logical and", ∧), the disjunction ("logical or", ∨), and the negation (¬) as follows:
p ⊕ q = (p ∧ ¬q) ∨ (¬p ∧ q)
The exclusive disjunction can also be expressed in the following way:
p ⊕ q = (p ∨ q) ∧ ¬(p ∧ q)
This representation of XOR may be found useful when constructing a circuit or network, because it has only one ¬ operation and a small number of ∧ and ∨ operations. A proof of this identity is given below:
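The identity relating XOR to AND, OR, and NOT can also be checked exhaustively over the four rows of the truth table. A short sketch (not part of the source):

```python
# Exhaustively verify p XOR q == (p AND NOT q) OR (NOT p AND q).
from itertools import product

for p, q in product([False, True], repeat=2):
    lhs = p != q  # XOR as logical inequality (NEQ)
    rhs = (p and not q) or (not p and q)
    assert lhs == rhs

print("identity holds on all four rows")
```

An exhaustive check like this is a complete proof here, since XOR is a Boolean function of only two inputs.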
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What term describes a type of redox reaction in which the same substance is both oxidized and reduced?
A. disproportion
B. interposition
C. disapprobation
D. misappropriation
Answer:
|
|
sciq-5291
|
multiple_choice
|
All members of a species living together form a what?
|
[
"organization",
"group",
"family",
"population"
] |
D
|
Relevant Documents:
Document 0:::
A heterarchy is a system of organization where the elements of the organization are unranked (non-hierarchical) or where they possess the potential to be ranked a number of different ways. Definitions of the term vary among the disciplines: in social and information sciences, heterarchies are networks of elements in which each element shares the same "horizontal" position of power and authority, each playing a theoretically equal role. In biological taxonomy, however, the requisite features of heterarchy involve, for example, a species sharing, with a species in a different family, a common ancestor which it does not share with members of its own family. This is theoretically possible under principles of "horizontal gene transfer".
A heterarchy may be parallel to a hierarchy, subsumed to a hierarchy, or it may contain hierarchies; the two kinds of structure are not mutually exclusive. In fact, each level in a hierarchical system is composed of a potentially heterarchical group which contains its constituent elements.
The concept of heterarchy was first employed in a modern context by cybernetician Warren McCulloch in 1945. As Carole L. Crumley has summarised, "[h]e examined alternative cognitive structure(s), the collective organization of which he termed heterarchy. He demonstrated that the human brain, while reasonably orderly was not organized hierarchically. This understanding revolutionized the neural study of the brain and solved major problems in the fields of artificial intelligence and computer design."
General principles, operationalization, and evidence
In a group of related items, heterarchy is a state wherein any pair of items is likely to be related in two or more differing ways. Whereas hierarchies sort groups into progressively smaller categories and subcategories, heterarchies divide and unite groups variously, according to multiple concerns that emerge or recede from view according to perspective. Crucially, no one way of dividing a heterarchica
Document 1:::
Ecological units comprise concepts such as population, community, and ecosystem as the basic units, which are at the basis of ecological theory and research, as well as a focus point of many conservation strategies. The concept of ecological units continues to suffer from inconsistencies and confusion over its terminology. Analyses of the existing concepts used in describing ecological units have determined that they differ with respect to four major criteria:
The questions as to whether they are defined statistically or via a network of interactions,
If their boundaries are drawn by topographical or process-related criteria,
How high the required internal relationships are,
And if they are perceived as "real" entities or abstractions by an observer.
A population is considered to be the smallest ecological unit, consisting of a group of individuals that belong to the same species. A community is the next classification, referring to all of the populations present in an area at a specific time, followed by an ecosystem, which refers to the community and its interactions with the physical environment. An ecosystem is the most commonly used ecological unit and can be universally defined by two common traits:
The unit is often defined in terms of a natural border (maritime boundary, watersheds, etc.)
Abiotic components and organisms within the unit are considered to be interlinked.
See also
Biogeographic realm
Ecoregion
Ecotope
Holobiont
Functional ecology
Behavior settings
Regional geology
Document 2:::
Population ecology is a sub-field of ecology that deals with the dynamics of species populations and how these populations interact with the environment; populations change through birth and death rates and through immigration and emigration.
The discipline is important in conservation biology, especially in the development of population viability analysis which makes it possible to predict the long-term probability of a species persisting in a given patch of habitat. Although population ecology is a subfield of biology, it provides interesting problems for mathematicians and statisticians who work in population dynamics.
History
In the 1940s ecology was divided into autecology—the study of individual species in relation to the environment—and synecology—the study of groups of species in relation to the environment. The term autecology (from Ancient Greek: αὐτο, aúto, "self"; οίκος, oíkos, "household"; and λόγος, lógos, "knowledge"), refers to roughly the same field of study as concepts such as life cycles and behaviour as adaptations to the environment by individual organisms. Eugene Odum, writing in 1953, considered that synecology should be divided into population ecology, community ecology and ecosystem ecology, renaming autecology as 'species ecology' (Odum regarded "autecology" as an archaic term), thus that there were four subdivisions of ecology.
Terminology
A population is defined as a group of interacting organisms of the same species. Populations are often quantified through their demographic structure. The total number of individuals in a population is defined as the population size, and how dense these individuals are is defined as the population density. There is also a population's geographic range, which has limits that a species can tolerate (such as temperature).
Population size can be influenced by the per capita population growth rate (the rate at which the population size changes per individual in the population). Births, deaths, emigration, and immigration rates
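The per-capita rates described above combine naturally into a simple projection. The following sketch is a hypothetical illustration (the discrete-time model and every name in it are assumptions for illustration, not from the source):

```python
def project_population(n0, birth_rate, death_rate,
                       immigration_rate, emigration_rate, years):
    """Discrete-time projection under a constant per-capita growth rate.

    Combines the four per-capita rates named in the text into
    r = b - d + i - e and applies N_{t+1} = N_t * (1 + r) each year.
    Illustrative only; all names and the model form are assumptions.
    """
    r = birth_rate - death_rate + immigration_rate - emigration_rate
    n = float(n0)
    for _ in range(years):
        n += r * n  # exponential growth (or decline, if r < 0)
    return n
```

With r > 0 the population grows geometrically; with r < 0 it declines toward zero, matching the qualitative behaviour described in the text.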
Document 3:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad
Document 4:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
All members of a species living together form a what?
A. organization
B. group
C. family
D. population
Answer:
|
|
sciq-30
|
multiple_choice
|
Where do angiosperms produce seeds in flowers?
|
[
"testes",
"germs",
"ovaries",
"cones"
] |
C
|
Relevant Documents:
Document 0:::
In the flowering plants, an ovary is a part of the female reproductive organ of the flower or gynoecium. Specifically, it is the part of the pistil which holds the ovule(s) and is located above or below or at the point of connection with the base of the petals and sepals. The pistil may be made up of one carpel or of several fused carpels (e.g. dicarpel or tricarpel), and therefore the ovary can contain part of one carpel or parts of several fused carpels. Above the ovary is the style and the stigma, which is where the pollen lands and germinates to grow down through the style to the ovary, and, for each individual pollen grain, to fertilize one individual ovule. Some wind pollinated flowers have much reduced and modified ovaries.
Fruits
A fruit is the mature, ripened ovary of a flower following double fertilization in an angiosperm. Because gymnosperms do not have an ovary but reproduce through fertilization of unprotected ovules, they produce naked seeds that do not have a surrounding fruit; this means that juniper and yew "berries" are not fruits, but modified cones. Fruits are responsible for the dispersal and protection of seeds in angiosperms and cannot be easily characterized due to the differences in defining culinary and botanical fruits.
Development
After double fertilization and ripening, the ovary becomes the fruit, the ovules inside the ovary become the seeds of that fruit, and the egg within the ovule becomes the zygote. Double fertilization of the central cell in the ovule produces the nutritious endosperm tissue that surrounds the developing zygote within the seed. Angiosperm ovaries do not always produce a fruit after the ovary has been fertilized. Problems that can arise during the developmental process of the fruit include genetic issues, harsh environmental conditions, and insufficient energy which may be caused by competition for resources between ovaries; any of these situations may prevent maturation of the ovary.
Dispersal a
Document 1:::
A seedling is a young sporophyte developing out of a plant embryo from a seed. Seedling development starts with germination of the seed. A typical young seedling consists of three main parts: the radicle (embryonic root), the hypocotyl (embryonic shoot), and the cotyledons (seed leaves). The two classes of flowering plants (angiosperms) are distinguished by their numbers of seed leaves: monocotyledons (monocots) have one blade-shaped cotyledon, whereas dicotyledons (dicots) possess two round cotyledons. Gymnosperms are more varied. For example, pine seedlings have up to eight cotyledons. The seedlings of some flowering plants have no cotyledons at all. These are said to be acotyledons.
The plumule is the part of a seed embryo that develops into the shoot bearing the first true leaves of a plant. In most seeds, for example the sunflower, the plumule is a small conical structure without any leaf structure. Growth of the plumule does not occur until the cotyledons have grown above ground. This is epigeal germination. However, in seeds such as the broad bean, a leaf structure is visible on the plumule in the seed. These seeds develop by the plumule growing up through the soil with the cotyledons remaining below the surface. This is known as hypogeal germination.
Photomorphogenesis and etiolation
Dicot seedlings grown in the light develop short hypocotyls and open cotyledons exposing the epicotyl. This is also referred to as photomorphogenesis. In contrast, seedlings grown in the dark develop long hypocotyls and their cotyledons remain closed around the epicotyl in an apical hook. This is referred to as skotomorphogenesis or etiolation. Etiolated seedlings are yellowish in color as chlorophyll synthesis and chloroplast development depend on light. They will open their cotyledons and turn green when treated with light.
In a natural situation, seedling development starts with skotomorphogenesis while the seedling is growing through the soil and attempting to reach the
Document 2:::
Agamous (AG) is a homeotic gene and MADS-box transcription factor from Arabidopsis thaliana. The TAIR AGI number is AT4G18960.
The identity of a floral organ is determined by particular combinations of homeotic genes expressed in a group of undifferentiated cells known as the floral meristem. The presence of this homeotic gene in Arabidopsis halts all meristem activity and proceeds to facilitate the development of stamens and carpels.
Document 3:::
In botany, floral morphology is the study of the diversity of forms and structures presented by the flower, which, by definition, is a branch of limited growth that bears the modified leaves responsible for reproduction and protection of the gametes, called floral pieces.
Fertile leaves or sporophylls carry sporangiums, which will produce male and female gametes and therefore are responsible for producing the next generation of plants. The sterile leaves are modified leaves whose function is to protect the fertile parts or to attract pollinators. The branch of the flower that joins the floral parts to the stem is a shaft called the pedicel, which normally dilates at the top to form the receptacle in which the various floral parts are inserted.
All spermatophytes ("seed plants") possess flowers as defined here (in a broad sense), but the internal organization of the flower is very different in the two main groups of spermatophytes: living gymnosperms and angiosperms. Gymnosperms may possess flowers that are gathered in strobili, or the flower itself may be a strobilus of fertile leaves. Instead a typical angiosperm flower possesses verticils or ordered whorls that, from the outside in, are composed first of sterile parts, commonly called sepals (if their main function is protective) and petals (if their main function is to attract pollinators), and then the fertile parts, with reproductive function, which are composed of verticils or whorls of stamens (which carry the male gametes) and finally carpels (which enclose the female gametes).
The arrangement of the floral parts on the axis, the presence or absence of one or more floral parts, the size, the pigmentation and the relative arrangement of the floral parts are responsible for the existence of a great variety of flower types. Such diversity is particularly important in phylogenetic and taxonomic studies of angiosperms. The evolutionary interpretation of the different flower types takes into account aspects of
Document 4:::
The fossil history of flowering plants records the development of flowers and other distinctive structures of the angiosperms, now the dominant group of plants on land. The history is controversial as flowering plants appear in great diversity in the Cretaceous, with scanty and debatable records before that, creating a puzzle for evolutionary biologists that Charles Darwin named an "abominable mystery".
Paleozoic
Fossilised spores suggest that land plants (embryophytes) have existed for at least 475 million years. Early land plants reproduced sexually with flagellated, swimming sperm, like the green algae from which they evolved. An adaptation to terrestrial life was the development of upright sporangia for dispersal by spores to new habitats. This feature is lacking in the descendants of their nearest algal relatives, the Charophycean green algae. A later terrestrial adaptation took place with retention of the delicate, avascular sexual stage, the gametophyte, within the tissues of the vascular sporophyte. This occurred by spore germination within sporangia rather than spore release, as in non-seed plants. A current example of how this might have happened can be seen in the precocious spore germination in Selaginella, the spike-moss. The result for the ancestors of angiosperms and gymnosperms was enclosing the female gamete in a case, the seed.
The first seed-bearing plants were gymnosperms, like the ginkgo, and conifers (such as pines and firs). These did not produce flowers. The pollen grains (male gametophytes) of Ginkgo and cycads produce a pair of flagellated, mobile sperm cells that "swim" down the developing pollen tube to the female and her eggs.
Angiosperms appear suddenly and in great diversity in the fossil record in the Early Cretaceous. This poses such a problem for the theory of gradual evolution that Charles Darwin called it an "abominable mystery". Several groups of extinct gymnosperms, in particular seed ferns, have been proposed as the ancest
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where do angiosperms produce seeds in flowers?
A. testes
B. germs
C. ovaries
D. cones
Answer:
|
|
sciq-1844
|
multiple_choice
|
What does subduction of a plate lead to?
|
[
"bending and volcanism",
"swell and volcanism",
"pressure and volcanism",
"melting and volcanism"
] |
D
|
Relevant Documents:
Document 0:::
Slab pull is a geophysical mechanism whereby the cooling and subsequent densifying of a subducting tectonic plate produces a downward force along the rest of the plate. In 1975 Forsyth and Uyeda used the inverse theory method to show that, of the many forces likely to be driving plate motion, slab pull was the strongest. Plate motion is partly driven by the weight of cold, dense plates sinking into the mantle at oceanic trenches. This force and slab suction account for almost all of the force driving plate tectonics. The ridge push at rifts contributes only 5 to 10%.
Carlson et al. (1983) in Lallemand et al. (2005) defined the slab pull force as:
Where:
K is a constant incorporating the gravitational acceleration (9.81 m/s²) according to McNutt (1984);
Δρ = 80 kg/m³ is the mean density difference between the slab and the surrounding asthenosphere;
L is the slab length calculated only for the part above 670 km (the upper/lower mantle boundary);
A is the slab age in Ma at the trench.
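The defining equation itself did not survive in the text above, so the following sketch only illustrates how the listed parameters might combine. The scaling F ∝ g·Δρ·L·√A (slab thickness, and hence negative buoyancy, growing with the square root of plate age) is an assumption for illustration, not a statement of the Carlson et al. (1983) formula:

```python
import math

def slab_pull_force(slab_length_m, slab_age_ma, delta_rho=80.0, g=9.81):
    """Illustrative slab-pull estimate (assumed scaling, see lead-in).

    slab_length_m: L, slab length above the 670 km discontinuity (m)
    slab_age_ma:   A, slab age at the trench (Ma)
    delta_rho:     mean slab-asthenosphere density contrast (kg/m^3)
    g:             gravitational acceleration (m/s^2)
    """
    # Older plates are colder and thicker, so the pull grows with sqrt(age).
    return g * delta_rho * slab_length_m * math.sqrt(slab_age_ma)
```

Under this scaling, a slab four times older pulls twice as hard, and the force grows linearly with slab length.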
The slab pull force manifests itself between two extreme forms:
The aseismic back-arc extension as in the Izu–Bonin–Mariana Arc.
And as the Aleutian and Chile tectonics with strong earthquakes and back-arc thrusting.
Between these two examples there is the evolution of the Farallon Plate: from the huge slab width with the Nevada, the Sevier and Laramide orogenies; the Mid-Tertiary ignimbrite flare-up and later left as Juan de Fuca and Cocos plates, the Basin and Range Province under extension, with slab break off, smaller slab width, more edges and mantle return flow.
Some early models of plate tectonics envisioned the plates riding on top of convection cells like conveyor belts. However, most scientists working today believe that the asthenosphere does not directly cause motion by the friction of such basal forces. The North American Plate is nowhere being subducted, yet it is in motion. Likewise the African, Eurasian and Antarctic Plates. Ridge push is thought to be responsible for the motion of these plates.
Document 1:::
The plate theory is a model of volcanism that attributes all volcanic activity on Earth, even that which appears superficially to be anomalous, to the operation of plate tectonics. According to the plate theory, the principal cause of volcanism is extension of the lithosphere. Extension of the lithosphere is a function of the lithospheric stress field. The global distribution of volcanic activity at a given time reflects the contemporaneous lithospheric stress field, and changes in the spatial and temporal distribution of volcanoes reflect changes in the stress field. The main factors governing the evolution of the stress field are:
Changes in the configuration of plate boundaries.
Vertical motions.
Thermal contraction.
Lithospheric extension enables pre-existing melt in the crust and mantle to escape to the surface. If extension is severe and thins the lithosphere to the extent that the asthenosphere rises, then additional melt is produced by decompression upwelling.
Origins of the plate theory
Developed during the late 1960s and 1970s, plate tectonics provided an elegant explanation for most of the Earth's volcanic activity. At spreading boundaries where plates move apart, the asthenosphere decompresses and melts to form new oceanic crust. At subduction zones, slabs of oceanic crust sink into the mantle, dehydrate, and release volatiles which lower the melting temperature and give rise to volcanic arcs and back-arc extensions. Several volcanic provinces, however, do not fit this simple picture and have traditionally been considered exceptional cases which require a non-plate-tectonic explanation.
Just prior to the development of plate tectonics in the early 1960s, the Canadian Geophysicist John Tuzo Wilson suggested that chains of volcanic islands form from movement of the seafloor over relatively stationary hotspots in stable centres of mantle convection cells. In the early 1970s, Wilson's idea was revived by the American geophysicist W. Jason Morgan. In
Document 2:::
In structural geology, a suture is a joining together along a major fault zone, of separate terranes, tectonic units that have different plate tectonic, metamorphic and paleogeographic histories. The suture is often represented on the surface by an orogen or mountain range.
Overview
In plate tectonics, sutures are the remains of subduction zones, and the terranes that are joined together are interpreted as fragments of different palaeocontinents or tectonic plates.
Outcrops of sutures can vary in width from a few hundred meters to a couple of kilometers. They can be networks of mylonitic shear zones or brittle fault zones, but are usually both. Sutures are usually associated with igneous intrusions and tectonic lenses with varying kinds of lithologies from plutonic rocks to ophiolitic fragments.
An example from Great Britain is the Iapetus Suture which, though now concealed beneath younger rocks, has been determined by geophysical means to run along a line roughly parallel with the Anglo-Scottish border and represents the joint between the former continent of Laurentia to the north and the former micro-continent of Avalonia to the south. The suture is in fact a plane which dips steeply northwestwards through the crust, with Avalonia underthrusting Laurentia.
Paleontological use
When used in paleontology, suture can also refer to fossil exoskeletons, as in the suture line, a division on a trilobite between the free cheek and the fixed cheek; this suture line allowed the trilobite to perform ecdysis (the shedding of its skin).
Document 3:::
In geodynamics lower crustal flow is the mainly lateral movement of material within the lower part of the continental crust by a ductile flow mechanism. It is thought to be an important process during both continental collision and continental break-up.
Rheology
The tendency of the lower crust to flow is controlled by its rheology. Ductile flow in the lower crust is assumed to be controlled by the deformation of quartz and/or plagioclase feldspar as its composition is thought to be granodioritic to dioritic. With normal thickness continental crust and a normal geothermal gradient, the lower crust, below the brittle–ductile transition zone, exhibits ductile flow behaviour under geological strain rates. Factors that can vary this behaviour include: water content, thickness, heat flow and strain-rate.
Collisional belts
In some areas of continental collision, the lower part of the thickened crust that results is interpreted to flow laterally, such as in the Tibetan plateau, and the Altiplano in the Bolivian Andes.
Document 4:::
Mantle convection is the very slow creeping motion of Earth's solid silicate mantle as convection currents carry heat from the interior to the planet's surface.
The Earth's surface lithosphere rides atop the asthenosphere and the two form the components of the upper mantle. The lithosphere is divided into a number of tectonic plates that are continuously being created or consumed at plate boundaries. Accretion occurs as mantle is added to the growing edges of a plate, associated with seafloor spreading. Upwelling beneath the spreading centers is a shallow, rising component of mantle convection and in most cases not directly linked to the global mantle upwelling. The hot material added at spreading centers cools down by conduction and convection of heat as it moves away from the spreading centers. At the consumption edges of the plate, the material has thermally contracted to become dense, and it sinks under its own weight in the process of subduction usually at an ocean trench. Subduction is the descending component of mantle convection.
This subducted material sinks through the Earth's interior. Some subducted material appears to reach the lower mantle, while in other regions, this material is impeded from sinking further, possibly due to a phase transition from spinel to silicate perovskite and magnesiowustite, an endothermic reaction.
The subducted oceanic crust triggers volcanism, although the basic mechanisms are varied. Volcanism may occur due to processes that add buoyancy to partially melted mantle, which would cause upward flow of the partial melt due to decrease in its density. Secondary convection may cause surface volcanism as a consequence of intraplate extension and mantle plumes. In 1993 it was suggested that inhomogeneities in D" layer have some impact on mantle convection.
Mantle convection causes tectonic plates to move around the Earth's surface.
Types of convection
During the late 20th century, there was significant debate within the geo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What does subduction of a plate lead to?
A. bending and volcanism
B. swell and volcanism
C. pressure and volcanism
D. melting and volcanism
Answer:
|
|
sciq-1751
|
multiple_choice
|
What type of muscle is found in the walls of other internal organs such as the stomach?
|
[
"Fiber Muscle",
"Tube Muscle",
"inorganic muscle",
"smooth muscle"
] |
D
|
Relevant Documents:
Document 0:::
The muscular layer (muscular coat, muscular fibers, muscularis propria, muscularis externa) is a region of muscle in many organs in the vertebrate body, adjacent to the submucosa. It is responsible for gut movement such as peristalsis. The Latin, tunica muscularis, may also be used.
Structure
It usually has two layers of smooth muscle:
inner and "circular"
outer and "longitudinal"
However, there are some exceptions to this pattern.
In the stomach there are three layers to the muscular layer. The stomach contains an additional oblique muscle layer just interior to the circular muscle layer.
In the upper esophagus, part of the externa is skeletal muscle, rather than smooth muscle.
In the vas deferens of the spermatic cord, there are three layers: inner longitudinal, middle circular, and outer longitudinal.
In the ureter the smooth muscle orientation is opposite that of the GI tract. There is an inner longitudinal and an outer circular layer.
The inner layer of the muscularis externa forms a sphincter at two locations of the gastrointestinal tract:
in the pylorus of the stomach, it forms the pyloric sphincter.
in the anal canal, it forms the internal anal sphincter.
In the colon, the fibres of the external longitudinal smooth muscle layer are collected into three longitudinal bands, the teniae coli.
The thickest muscularis layer is found in the stomach (triple layered), and thus maximum peristalsis occurs in the stomach. The thinnest muscularis layer in the alimentary canal is found in the rectum, where minimum peristalsis occurs.
Function
The muscularis layer is responsible for the peristaltic movements and segmental contractions within the alimentary canal. The Auerbach's nerve plexus (myenteric nerve plexus), found between the longitudinal and circular muscle layers, starts muscle contractions to initiate peristalsis.
Document 1:::
The gastrointestinal wall of the gastrointestinal tract is made up of four layers of specialised tissue. From the inner cavity of the gut (the lumen) outwards, these are:
Mucosa
Submucosa
Muscular layer
Serosa or adventitia
The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the lumen of the tract and comes into direct contact with digested food (chyme). The mucosa itself is made up of three layers: the epithelium, where most digestive, absorptive and secretory processes occur; the lamina propria, a layer of connective tissue, and the muscularis mucosae, a thin layer of smooth muscle.
The submucosa contains nerves including the submucous plexus (also called Meissner's plexus), blood vessels, and elastic fibres with collagen that stretch with increased capacity but maintain the shape of the intestine.
The muscular layer surrounds the submucosa. It comprises layers of smooth muscle in longitudinal and circular orientation that also helps with continued bowel movements (peristalsis) and the movement of digested material out of and along the gut. In between the two layers of muscle lies the myenteric plexus (also called Auerbach's plexus).
The serosa/adventitia are the final layers. These are made up of loose connective tissue and coated in mucus so as to prevent any friction damage from the intestine rubbing against other tissue. The serosa is present if the tissue is within the peritoneum, and the adventitia if the tissue is retroperitoneal.
Structure
When viewed under the microscope, the gastrointestinal wall has a consistent general form, but with certain parts differing along its course.
Mucosa
The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the cavity (lumen) of the tract and comes into direct contact with digested food (chyme). The mucosa is made up of three layers:
The epithelium is the innermost layer. It is where most digestive, absorptive and secretory processes occur.
The lamina propr
Document 2:::
This table lists the epithelia of different organs of the human body
Human anatomy
Document 3:::
The internal anal sphincter (IAS, or sphincter ani internus) is a ring of smooth muscle that surrounds about 2.5–4.0 cm of the anal canal. It is about 5 mm thick, and is formed by an aggregation of the smooth (involuntary) circular muscle fibers of the rectum. It terminates distally about 6 mm from the anal orifice.
The internal anal sphincter aids the sphincter ani externus to occlude the anal aperture and aids in the expulsion of the feces. Its action is entirely involuntary. It is normally in a state of continuous maximal contraction to prevent leakage of faeces or gases. Sympathetic stimulation stimulates and maintains the sphincter's contraction, and parasympathetic stimulation inhibits it. It becomes relaxed in response to distention of the rectal ampulla, requiring voluntary contraction of the puborectalis and external anal sphincter to maintain continence.
Anatomy
The internal anal sphincter is the specialised thickened terminal portion of the inner circular layer of smooth muscle of the large intestine. It extends from the pectinate line (anorectal junction) proximally to just proximal to the anal orifice distally (the distal termination is palpable). Its muscle fibres are arranged in a spiral (rather than a circular) manner.
At its distal extremity, it is in contact with but separate from the external anal sphincter.
Innervation
The sphincter receives extrinsic autonomic innervation via the inferior hypogastric plexus, with sympathetic innervation derived from spinal levels L1-L2, and parasympathetic innervation derived from S2-S4.
The internal anal sphincter is not innervated by the pudendal nerve (which provides motor and sensory innervation to the external anal sphincter).
Function
The sphincter is contracted in its resting state, but reflexively relaxes in certain contexts (most notably during defecation).
Transient relaxation of its proximal portion occurs with rectal distension and post-prandial rectal contraction (the recto-anal inhibitory reflex).
Document 4:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of muscle is found in the walls of other internal organs such as the stomach?
A. Fiber Muscle
B. Tube Muscle
C. inorganic muscle
D. smooth muscle
Answer:
|
|
sciq-10244
|
multiple_choice
|
The simplest example of what type of 'organ system' is the gastrovascular cavity found in organisms with only one opening for the process?
|
[
"nervous",
"digestive",
"respiratory",
"cardiovascular"
] |
B
|
Relevant Documents:
Document 0:::
In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from cells of the same type that act together in a function. Tissues of different types combine to form an organ, which has a specific function. The intestinal wall, for example, is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system.
An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs.
The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body.
Animals
Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam
Document 1:::
A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and is determined by different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism.
Organ and tissue systems
These specific systems are widely studied in human anatomy and are also present in many other animals.
Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm.
Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus.
Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels.
Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine.
Integumentary system: skin, hair, fat, and nails.
Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons.
Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands.
Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. The lymphatic system includes functions including immune responses and development of antibodies.
Immune system: protects the organism from
Document 2:::
This article contains a list of organs of the human body. A widely cited figure is 79 organs (the number rises if each bone and muscle is counted as an organ in its own right, which is an increasingly common practice); however, there is no universal standard definition of what constitutes an organ, and some tissue groups' status as one is debated. Since there is no single standard definition, the number of organs varies depending on how one defines an organ. For example, this list contains more than 79 organs (about 103).
It is still not clear which definition of an organ is used for all the organs in this list; it appears to have been compiled based on which Wikipedia articles on organs were available.
Musculoskeletal system
Skeleton
Joints
Ligaments
Muscular system
Tendons
Digestive system
Mouth
Teeth
Tongue
Lips
Salivary glands
Parotid glands
Submandibular glands
Sublingual glands
Pharynx
Esophagus
Stomach
Small intestine
Duodenum
Jejunum
Ileum
Large intestine
Cecum
Ascending colon
Transverse colon
Descending colon
Sigmoid colon
Rectum
Liver
Gallbladder
Mesentery
Pancreas
Anal canal
Appendix
Respiratory system
Nasal cavity
Pharynx
Larynx
Trachea
Bronchi
Bronchioles and smaller air passages
Lungs
Muscles of breathing
Urinary system
Kidneys
Ureter
Bladder
Urethra
Reproductive systems
Female reproductive system
Internal reproductive organs
Ovaries
Fallopian tubes
Uterus
Cervix
Vagina
External reproductive organs
Vulva
Clitoris
Male reproductive system
Internal reproductive organs
Testicles
Epididymis
Vas deferens
Prostate
External reproductive organs
Penis
Scrotum
Endocrine system
Pituitary gland
Pineal gland
Thyroid gland
Parathyroid glands
Adrenal glands
Pancreas
Circulatory system
Circulatory system
Heart
Arteries
Veins
Capillaries
Lymphatic system
Lymphatic vessel
Lymph node
Bone marrow
Thymus
Spleen
Gut-associated lymphoid tissue
Tonsils
Interstitium
Nervous system
Central nervous system
Document 3:::
Splanchnology is the study of the visceral organs, i.e. digestive, urinary, reproductive and respiratory systems.
The term derives from the Neo-Latin splanchno-, from the Greek σπλάγχνα, meaning "viscera". More broadly, splanchnology includes all the components of the Neuro-Endo-Immune (NEI) Supersystem. An organ (or viscus) is a collection of tissues joined in a structural unit to serve a common function. In anatomy, a viscus is an internal organ, and viscera is the plural form. Organs consist of different tissues, one or more of which prevail and determine its specific structure and function. Functionally related organs often cooperate to form whole organ systems.
Viscera are the soft organs of the body. There are organs and systems of organs that differ in structure and development, but they are united in the performance of a common function. Such a functional collection of mixed organs forms an organ system. These organs are always made up of special cells that support their specific function. The normal position and function of each visceral organ must be known before the abnormal can be ascertained.
Healthy organs all work together cohesively, and a better understanding of how they do so helps to maintain a healthy lifestyle. Some functions cannot be accomplished by one organ alone. That is why organs form complex systems. A system of organs is a collection of homogeneous organs which share a common plan of structure, function, and development; they are connected to each other anatomically and communicate through the NEI supersystem.
Document 4:::
A body orifice is any opening in the body of an animal.
External
In a typical mammalian body such as the human body, the external body orifices are:
The nostrils, for breathing and the associated sense of smell
The mouth, for eating, drinking, breathing, and vocalizations such as speech
The ear canals, for the sense of hearing
The nasolacrimal ducts, to carry tears from the lacrimal sac into the nasal cavity
The anus, for defecation
In males, the urinary meatus, for urination and ejaculation
In females, the urinary meatus, for urination and female ejaculation
In females, the vagina, for menstruation, sexual intercourse and childbirth
The nipple orifices
Other animals may have some other body orifices:
cloaca, in birds, reptiles, amphibians, and some other animals
siphon in mollusk, arthropods, and some other animals
Internal
Internal orifices include the orifices of the outflow tracts of the heart, between the heart valves.
See also
Internal urethral orifice
Mucosa
Mucocutaneous boundary
Meatus
Body cavity
Anatomy
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The simplest example of what type of 'organ system' is the gastrovascular cavity found in organisms with only one opening for the process?
A. nervous
B. digestive
C. respiratory
D. cardiovascular
Answer:
|
|
sciq-2212
|
multiple_choice
|
Most scientists think that ordinary matter makes up how much of the total matter in the universe?
|
[
"greater than half",
"more than half",
"less than half",
"about half"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
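The answer to the sample question above (the gas cools) follows from a one-line application of the first law of thermodynamics; the derivation below is a standard textbook argument, assuming a quasi-static expansion against external pressure:

```latex
% First law with no heat exchange (\delta Q = 0 for an adiabatic process):
\begin{aligned}
dU &= \delta Q - P\,dV = -P\,dV,\\
dU &= n C_V\, dT \quad \text{(ideal gas)},\\
\therefore\; n C_V\, dT &= -P\,dV < 0 \quad \text{when } dV > 0,
\end{aligned}
```

so $dT < 0$ and the temperature decreases. (Note that in a free expansion into vacuum no work is done and an ideal gas's temperature is unchanged, which is exactly the kind of distinction a conceptual question can probe.)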
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of
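The defining axioms of a knowledge space — it contains the empty set and the full domain, and is closed under union — are easy to check mechanically. The sketch below is illustrative only (the function name and set-of-frozensets representation are assumptions, not from the source):

```python
def is_knowledge_space(states):
    """Return True if `states` (an iterable of sets of skills) satisfies
    the knowledge-space axioms: it contains the empty set and the full
    skill domain, and it is closed under union."""
    family = {frozenset(s) for s in states}
    domain = frozenset().union(*family)  # the whole skill domain Q
    if frozenset() not in family or domain not in family:
        return False
    # Closure under union: the union of any two states must be a state.
    return all(a | b in family for a in family for b in family)

# A small example where skill "a" is a prerequisite for "b" and "c":
feasible = [set(), {"a"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
print(is_knowledge_space(feasible))               # True
print(is_knowledge_space([set(), {"a"}, {"b"}]))  # False: {"a", "b"} missing
```

Adding an accessibility axiom (every nonempty state contains an item whose removal leaves a state) would turn this check into one for the antimatroid structure mentioned above.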
Document 2:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 3:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 4:::
Physics education or physics teaching refers to the education methods currently used to teach physics. The occupation is called physics educator or physics teacher. Physics education research refers to an area of pedagogical research that seeks to improve those methods. Historically, physics has been taught at the high school and college level primarily by the lecture method together with laboratory exercises aimed at verifying concepts taught in the lectures. These concepts are better understood when lectures are accompanied with demonstration, hand-on experiments, and questions that require students to ponder what will happen in an experiment and why. Students who participate in active learning for example with hands-on experiments learn through self-discovery. By trial and error they learn to change their preconceptions about phenomena in physics and discover the underlying concepts. Physics education is part of the broader area of science education.
Ancient Greece
Aristotle wrote what is considered now as the first textbook of physics. Aristotle's ideas were taught unchanged until the Late Middle Ages, when scientists started making discoveries that didn't fit them. For example, Copernicus' discovery contradicted Aristotle's idea of an Earth-centric universe. Aristotle's ideas about motion weren't displaced until the end of the 17th century, when Newton published his ideas.
Today's physics students often think of physics concepts in Aristotelian terms, despite being taught only Newtonian concepts.
Hong Kong
High schools
In Hong Kong, physics is a subject for public examination. Local students in Form 6 take the public exam of Hong Kong Diploma of Secondary Education (HKDSE).
Compared with other syllabuses such as GCSE and GCE, which cover a wider and broader range of topics, the Hong Kong syllabus treats fewer topics in greater depth and poses more challenging calculations. Topics are narrowed down to a smaller number compared to the A-level due to the insufficient teaching
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Most scientists think that ordinary matter makes up how much of the total matter in the universe?
A. greater than half
B. more than half
C. less than half
D. about half
Answer:
|
|
ai2_arc-280
|
multiple_choice
|
Which organ in a frog has a function similar to the function of lungs in a bird?
|
[
"kidney",
"skin",
"liver",
"heart"
] |
B
|
Relevant Documents:
Document 0:::
In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from cells of the same type that act together in a function. Tissues of different types combine to form an organ, which has a specific function. The intestinal wall, for example, is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system.
An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs.
The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body.
Animals
Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam
Document 1:::
In anatomy, a lobe is a clear anatomical division or extension of an organ (as seen for example in the brain, lung, liver, or kidney) that can be determined without the use of a microscope at the gross anatomy level. This is in contrast to the much smaller lobule, which is a clear division only visible under the microscope.
Interlobar ducts connect lobes and interlobular ducts connect lobules.
Examples of lobes
The four main lobes of the brain
the frontal lobe
the parietal lobe
the occipital lobe
the temporal lobe
The three lobes of the human cerebellum
the flocculonodular lobe
the anterior lobe
the posterior lobe
The two lobes of the thymus
The two and three lobes of the lungs
Left lung: superior and inferior
Right lung: superior, middle, and inferior
The four lobes of the liver
Left lobe of liver
Right lobe of liver
Quadrate lobe of liver
Caudate lobe of liver
The renal lobes of the kidney
Earlobes
Examples of lobules
the cortical lobules of the kidney
the testicular lobules of the testis
the lobules of the mammary gland
the pulmonary lobules of the lung
the lobules of the thymus
Document 2:::
The Uber-anatomy ontology (Uberon) is a comparative anatomy ontology representing a variety of structures found in animals, such as lungs, muscles, bones, feathers and fins. These structures are connected to other structures via relationships such as part-of and develops-from. One of the uses of this ontology is to integrate data from different biological databases, and other species-specific ontologies such as the Foundational Model of Anatomy.
Document 3:::
Instruments used in Anatomy dissections are as follows:
Instrument list
Image gallery
Document 4:::
A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and is determined by different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism.
Organ and tissue systems
These specific systems are widely studied in human anatomy and are also present in many other animals.
Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm.
Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus.
Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels.
Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine.
Integumentary system: skin, hair, fat, and nails.
Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons.
Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands.
Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. The lymphatic system includes functions including immune responses and development of antibodies.
Immune system: protects the organism from
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which organ in a frog has a function similar to the function of lungs in a bird?
A. kidney
B. skin
C. liver
D. heart
Answer:
|
|
sciq-1533
|
multiple_choice
|
What are the cells called that parasites use to spread through their host?
|
[
"sporozoites",
"prokaryotes",
"fungi spores",
"protists"
] |
A
|
Relevant Documents:
Document 0:::
Parasitology is the study of parasites, their hosts, and the relationship between them. As a biological discipline, the scope of parasitology is not determined by the organism or environment in question but by their way of life. This means it forms a synthesis of other disciplines, and draws on techniques from fields such as cell biology, bioinformatics, biochemistry, molecular biology, immunology, genetics, evolution and ecology.
Fields
The study of these diverse organisms means that the subject is often broken up into simpler, more focused units, which use common techniques, even if they are not studying the same organisms or diseases. Much research in parasitology falls somewhere between two or more of these definitions. In general, the study of prokaryotes falls under the field of bacteriology rather than parasitology.
Medical
The parasitologist F. E. G. Cox noted that "Humans are hosts to nearly 300 species of parasitic worms and over 70 species of protozoa, some derived from our primate ancestors and some acquired from the animals we have domesticated or come in contact with during our relatively short history on Earth".
One of the largest fields in parasitology, medical parasitology is the subject that deals with the parasites that infect humans, the diseases caused by them, clinical picture and the response generated by humans against them. It is also concerned with the various methods of their diagnosis, treatment and finally their prevention & control.
A parasite is an organism that live on or within another organism called the host.
These include organisms such as:
Plasmodium spp., the protozoan parasite which causes malaria. The four species infective to humans are P. falciparum, P. malariae, P. vivax and P. ovale.
Leishmania, unicellular organisms which cause leishmaniasis
Entamoeba and Giardia, which cause intestinal infections (dysentery and diarrhoea)
Multicellular organisms and intestinal worms (helminths) such as Schistosoma spp., Wuchereri
Document 1:::
Parasite Rex: Inside the Bizarre World of Nature's Most Dangerous Creatures is a nonfiction book by Carl Zimmer that was published by Free Press in 2000. The book discusses the history of parasites on Earth and how the field and study of parasitology formed, along with a look at the most dangerous parasites ever found in nature. A special paperback edition was released in March 2011 for the tenth anniversary of the book's publishing, including a new epilogue written by Zimmer. Signed bookplates were also given to fans that sent in a photo of themselves with a copy of the special edition.
The cover of Parasite Rex includes a scanning electron microscope image of a tick as the focus, along with illustrations in the centerfold of parasites and topics discussed in the book.
Content
The book begins by discussing the history of parasites in human knowledge, from the earliest writings about them in ancient cultures, up through modern times. The focus comes to rest extensively on the views and experiments conducted by scientists in the 17th, 18th, and 19th centuries, such as those done by Antonie van Leeuwenhoek, Japetus Steenstrup, Friedrich Küchenmeister, and Ray Lankester. Among them, Leeuwenhoek was the first to ever physically view cells through a microscope, Steenstrup was the first to explain and confirm the multiple stages and life cycles of parasites that are different from most other living organisms, and Küchenmeister, through his religious beliefs and his views on every creature having a place in the natural order, denied the ideas of his time and proved that all parasites are a part of active evolutionary niches and not biological dead ends by conducting morally ambiguous experiments on prisoners. Lankester is given a specific focus and repeated discussion throughout the book due to his belief that parasites are examples of degenerative evolution, especially in regards to Sacculina, and Zimmer's repeated refutation of this idea.
Several chapters are taken to
Document 2:::
Micronemes are secretory organelles, possessed by parasitic apicomplexans. Micronemes are located on the apical third of the protozoan body. They are surrounded by a typical unit membrane. On electron microscopy they have an electron-dense matrix due to the high protein content. They are specialized secretory organelles important for host-cell invasion and gliding motility.
These organelles secrete several proteins, such as the Plasmodium falciparum apical membrane antigen-1 (PfAMA1) and the erythrocyte binding antigen (EBA) family proteins. These proteins specialize in binding to erythrocyte surface receptors and facilitating erythrocyte entry. Only through this initial chemical exchange can the parasite enter the erythrocyte via the actin-myosin motor complex.
It has been posited that this organelle works cooperatively with its counterpart organelle, the rhoptry, which also is a secretory organelle. It is possible that, while the microneme initiates erythrocyte-binding, the rhoptry secretes proteins to create the PVM, or the parasitophorous vacuole membrane, in which the parasite can survive and reproduce.
See also
Dense granule
Rhoptry
Document 3:::
Archaeoparasitology, a multi-disciplinary field within paleopathology, is the study of parasites in archaeological contexts. It includes studies of the protozoan and metazoan parasites of humans in the past, as well as parasites which may have affected past human societies, such as those infesting domesticated animals.
Reinhard suggested that the term "archaeoparasitology" be applied to "... all parasitological remains excavated from archaeological contexts ... derived from human activity" and that "the term 'paleoparasitology' be applied to studies of nonhuman, paleontological material." (p. 233) Paleoparasitology includes all studies of ancient parasites outside of archaeological contexts, such as those found in amber, and even dinosaur parasites.
The first archaeoparasitology report described calcified eggs of Bilharzia haematobia (now Schistosoma haematobium) from the kidneys of an ancient Egyptian mummy. Since then, many fundamental archaeological questions have been answered by integrating our knowledge of the hosts, life cycles and basic biology of parasites, with the archaeological, anthropological and historical contexts in which they are found.
Parasitology basics
Parasites are organisms which live in close association with another organism, called the host, in which the parasite benefits from the association, to the detriment of the host. Many other kinds of associations may exist between two closely allied organisms, such as commensalism or mutualism.
Endoparasites (such as protozoans and helminths), tend to be found inside the host, while ectoparasites (such as ticks, lice and fleas) live on the outside of the host body. Parasite life cycles often require that different developmental stages pass sequentially through multiple host species in order to successfully mature and reproduce. Some parasites are very host-specific, meaning that only one or a few species of hosts are capable of perpetuating their life cycle. Others are not host-spec
Document 4:::
A microbial cyst is a resting or dormant stage of a microorganism, usually a bacterium or a protist or, rarely, an invertebrate animal, that helps the organism survive unfavorable environmental conditions. It can be thought of as a state of suspended animation in which the metabolic processes of the cell are slowed and the cell ceases activities such as feeding and locomotion. Encystment, the formation of the cyst, also helps the microbe disperse easily, from one host to another or to a more favorable environment. When the encysted microbe reaches an environment favorable to its growth and survival, the cyst wall breaks down by a process known as excystation, whose exact stimulus is unknown for most protists.
Unfavorable environmental conditions that are not conducive to the microbe's growth, such as a lack of nutrients or oxygen, extreme temperatures, lack of moisture, and the presence of toxic chemicals, trigger the formation of a cyst.
Cysts serve three main functions: they protect against adverse environmental changes such as nutrient deficiency, desiccation, adverse pH, and low oxygen levels; they are sites for nuclear reorganization and cell division; and, in parasitic species, they are the infectious stage that passes between hosts.
Cyst formation across species
In bacteria
In bacteria (for instance, Azotobacter sp.), encystment occurs by changes in the cell wall; the cytoplasm contracts and the cell wall thickens. Bacterial cysts differ from endospores in the way they are formed and also the degree of resistance to unfavorable conditions. Endospores are much more resistant than cysts.
Bacteria do not always form a single cyst; a variety of cyst-formation events is known. For example, Rhodospirillum centenum can change the number of cells per cyst, usually ranging from four to ten cells per cyst depending on the environment.
In protists
Protists, especially protozoan parasites, are often exposed to very harsh conditions at various stages in t
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are the cells that allow parasites to spread through their host?
A. sporozoites
B. prokaryotes
C. fungi spores
D. protists
Answer:
|
|
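Each record in this dump repeats the same multi-line, pipe-delimited layout: id, question type, question text, a JSON list of choices, and the answer key. The sketch below parses one such record under the assumption that separator lines contain only `|`; the exact layout varies slightly between records, so this is illustrative rather than a robust loader.

```python
import json

# One record from this dump, in the layout described above: fields are
# separated by lines containing only "|" (id, type, question, choices, answer).
raw = """sciq-11612
|
multiple_choice
|
Reproduction that doesn't involve a male gamete is also known as what?
|
[
"asexual reproduction",
"meiosis",
"mitosis",
"agamogenesis"
]
|
D"""

def parse_record(text: str) -> dict:
    # Split on the separator lines, then decode the JSON choices list.
    rid, qtype, question, choices_json, answer = (
        f.strip() for f in text.split("\n|\n")
    )
    return {
        "id": rid,
        "type": qtype,
        "question": question,
        "choices": json.loads(choices_json),
        "answer": answer,
    }

rec = parse_record(raw)
# Map the answer key (A-D) back to the choice text.
print(rec["choices"][ord(rec["answer"]) - ord("A")])  # -> agamogenesis
```

A real loader would also have to tolerate the occasional `] |` line where the closing bracket and separator share a line, which this sketch does not handle.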
sciq-11612
|
multiple_choice
|
Reproduction that doesn't involve a male gamete is also known as what?
|
[
"asexual reproduction",
"meiosis",
"mitosis",
"agamogenesis"
] |
D
|
Relevant Documents:
Document 0:::
Male (symbol: ♂) is the sex of an organism that produces the gamete (sex cell) known as sperm, which fuses with the larger female gamete, or ovum, in the process of fertilization.
A male organism cannot reproduce sexually without access to at least one ovum from a female, but some organisms can reproduce both sexually and asexually. Most male mammals, including male humans, have a Y chromosome, which codes for the production of larger amounts of testosterone to develop male reproductive organs.
In humans, the word male can also be used to refer to gender, in the social sense of gender role or gender identity. The use of "male" in regard to sex and gender has been subject to discussion.
Overview
The existence of separate sexes has evolved independently at different times and in different lineages, an example of convergent evolution. The repeated pattern is sexual reproduction in isogamous species with two or more mating types with gametes of identical form and behavior (but different at the molecular level) to anisogamous species with gametes of male and female types to oogamous species in which the female gamete is very much larger than the male and has no ability to move. There is a good argument that this pattern was driven by the physical constraints on the mechanisms by which two gametes get together as required for sexual reproduction.
Accordingly, sex is defined across species by the type of gametes produced (i.e.: spermatozoa vs. ova) and differences between males and females in one lineage are not always predictive of differences in another.
Male/female dimorphism between organisms or reproductive organs of different sexes is not limited to animals; male gametes are produced by chytrids, diatoms and land plants, among others. In land plants, female and male designate not only the female and male gamete-producing organisms and structures but also the structures of the sporophytes that give rise to male and female plants.
Evolution
The evolution of ani
Document 1:::
Gametogenesis is a biological process by which diploid or haploid precursor cells undergo cell division and differentiation to form mature haploid gametes. Depending on the biological life cycle of the organism, gametogenesis occurs by meiotic division of diploid gametocytes into various gametes, or by mitosis. For example, plants produce gametes through mitosis in gametophytes. The gametophytes grow from haploid spores after sporic meiosis. The existence of a multicellular, haploid phase in the life cycle between meiosis and gametogenesis is also referred to as alternation of generations.
Gametogenesis is thus the biological process by which haploid or diploid precursor cells divide to create mature haploid gametes. Depending on an organism's biological life cycle, it can take place either through mitosis or through the meiotic division of diploid gametocytes into gametes; gametophytes in plants, for instance, undergo mitosis to produce gametes. Male and female gametogenesis take different forms.
In animals
Animals produce gametes directly through meiosis from diploid mother cells in organs called gonads (the testes in males and the ovaries in females). In mammalian germ cell development, primordial germ cells differentiate from pluripotent cells during early development and later give rise to sexually dimorphic gametes. Males and females of a species that reproduces sexually have different forms of gametogenesis:
spermatogenesis (male): Immature germ cells are produced in a man's testes. To mature into sperm, males' immature germ cells, or spermatogonia, go through spermatogenesis during adolescence. Spermatogonia are diploid cells that become larger as they divide through mitosis, becoming primary spermatocytes. These diploid cells undergo meiotic division to create secondary spermatocytes, which undergo a second meiotic division to produce immature sperm, or spermatids. These spermatids undergo spermiogenesis in order to develop into sperm. LH, FSH, GnRH
Document 2:::
Sexual reproduction is a type of reproduction that involves a complex life cycle in which a gamete (haploid reproductive cells, such as a sperm or egg cell) with a single set of chromosomes combines with another gamete to produce a zygote that develops into an organism composed of cells with two sets of chromosomes (diploid). This is typical in animals, though the number of chromosome sets and how that number changes in sexual reproduction varies, especially among plants, fungi, and other eukaryotes.
Sexual reproduction is the most common life cycle in multicellular eukaryotes, such as animals, fungi and plants. Sexual reproduction also occurs in some unicellular eukaryotes. Sexual reproduction does not occur in prokaryotes, unicellular organisms without cell nuclei, such as bacteria and archaea. However, some processes in bacteria, including bacterial conjugation, transformation and transduction, may be considered analogous to sexual reproduction in that they incorporate new genetic information. Some proteins and other features that are key for sexual reproduction may have arisen in bacteria, but sexual reproduction is believed to have developed in an ancient eukaryotic ancestor.
In eukaryotes, diploid precursor cells divide to produce haploid cells in a process called meiosis. In meiosis, DNA is replicated to produce a total of four copies of each chromosome. This is followed by two cell divisions to generate haploid gametes. After the DNA is replicated in meiosis, the homologous chromosomes pair up so that their DNA sequences are aligned with each other. During this period before cell divisions, genetic information is exchanged between homologous chromosomes in genetic recombination. Homologous chromosomes contain highly similar but not identical information, and by exchanging similar but not identical regions, genetic recombination increases genetic diversity among future generations.
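The copy-number arithmetic in the paragraph above (replication to four copies per chromosome, then two divisions down to haploid gametes) can be sketched as simple bookkeeping. This is purely an illustrative model of the counts, not of the biology; the function name and the human value n = 23 are assumptions for the example.

```python
# Bookkeeping of chromosome-copy counts through meiosis, following the
# description above: replicate, then divide twice to yield four haploid gametes.

def meiosis():
    # Diploid precursor: 2 copies of each chromosome type.
    copies_per_type = 2
    # S phase: DNA replication doubles every chromosome -> 4 copies per type.
    copies_per_type *= 2
    # Meiosis I: homologous chromosomes separate -> 2 cells, 2 copies per type.
    cells = [copies_per_type // 2] * 2
    # Meiosis II: sister chromatids separate -> 4 cells, 1 copy per type each.
    gametes = [c // 2 for c in cells for _ in range(2)]
    return gametes  # one entry per gamete: copies of each chromosome type

gametes = meiosis()
print(len(gametes), gametes)  # -> 4 [1, 1, 1, 1]
```

The final list confirms the text's count: four haploid gametes, each with a single copy of every chromosome type.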
During sexual reproduction, two haploid gametes combine into one diploid ce
Document 3:::
In biology and genetics, the germline is the population of a multicellular organism's cells that pass on their genetic material to the progeny (offspring). In other words, they are the cells that form the egg, sperm and the fertilised egg. They are usually differentiated to perform this function and segregated in a specific place away from other bodily cells.
As a rule, this passing-on happens via a process of sexual reproduction; typically it is a process that includes systematic changes to the genetic material, changes that arise during recombination, meiosis and fertilization for example. However, there are many exceptions across multicellular organisms, including processes and concepts such as various forms of apomixis, autogamy, automixis, cloning or parthenogenesis. The cells of the germline are called germ cells. For example, gametes such as a sperm and an egg are germ cells. So are the cells that divide to produce gametes, called gametocytes, the cells that produce those, called gametogonia, and all the way back to the zygote, the cell from which an individual develops.
In sexually reproducing organisms, cells that are not in the germline are called somatic cells. According to this view, mutations, recombinations and other genetic changes in the germline may be passed to offspring, but a change in a somatic cell will not be. This need not apply to somatically reproducing organisms, such as some Porifera and many plants. For example, many varieties of citrus, plants in the Rosaceae and some in the Asteraceae, such as Taraxacum, produce seeds apomictically when somatic diploid cells displace the ovule or early embryo.
In an earlier stage of genetic thinking, there was a clear distinction between germline and somatic cells. For example, August Weismann proposed and pointed out, a germline cell is immortal in the sense that it is part of a lineage that has reproduced indefinitely since the beginning of life and, barring accident, could continue doing so indef
Document 4:::
Microgametogenesis is the process in plant reproduction where a microgametophyte develops in a pollen grain to the three-celled stage of its development. In flowering plants it occurs with a microspore mother cell inside the anther of the plant.
When the microgametophyte is first formed inside the pollen grain four sets of fertile cells called sporogenous cells are apparent. These cells are surrounded by a wall of sterile cells called the tapetum, which supplies food to the cell and eventually becomes the cell wall for the pollen grain. These sets of sporogenous cells eventually develop into diploid microspore mother cells. These microspore mother cells, also called microsporocytes, then undergo meiosis and become four microspore haploid cells. These new microspore cells then undergo mitosis and form a tube cell and a generative cell. The generative cell then undergoes mitosis one more time to form two male gametes, also called sperm.
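The cell counts described above (one microspore mother cell, meiosis into four microspores, each dividing into a tube cell and a generative cell, and the generative cell dividing once more into two sperm) can be tallied in a short sketch. The function name and dictionary keys are illustrative, and only the counting is modeled.

```python
# Cell-count bookkeeping for microgametogenesis as described above:
# 1 microspore mother cell -> 4 microspores -> 4 three-celled pollen grains.

def microgametogenesis(mother_cells: int = 1):
    microspores = mother_cells * 4  # meiosis yields 4 haploid microspores each
    pollen_grains = []
    for _ in range(microspores):
        # Mitosis I: tube cell + generative cell.
        # Mitosis II: generative cell -> 2 male gametes (sperm).
        pollen_grains.append({"tube": 1, "sperm": 2})  # the three-celled stage
    return pollen_grains

grains = microgametogenesis()
print(len(grains))                                   # -> 4
print(sum(g["tube"] + g["sperm"] for g in grains))   # -> 12
```

Each grain ends at the three-celled stage the excerpt mentions: one tube cell plus two sperm.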
See also
Gametogenesis
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Reproduction that doesn't involve a male gamete is also known as what?
A. asexual reproduction
B. meiosis
C. mitosis
D. agamogenesis
Answer:
|
|
sciq-5538
|
multiple_choice
|
What is in the soil that causes Mars to look red?
|
[
"iron",
"sand",
"carbon",
"garnet"
] |
A
|
Relevant Documents:
Document 0:::
Desert varnish or rock varnish is an orange-yellow to black coating found on exposed rock surfaces in arid environments. Desert varnish is approximately one micrometer thick and exhibits nanometer-scale layering. Rock rust and desert patina are other terms which are also used for the condition, but less often.
Formation
Desert varnish forms only on physically stable rock surfaces that are no longer subject to frequent precipitation, fracturing or wind abrasion. The varnish is primarily composed of particles of clay along with oxides of iron and manganese. There is also a host of trace elements and almost always some organic matter. The color of the varnish varies from shades of brown to black.
It has been suggested that desert varnish should be investigated as a potential candidate for a "shadow biosphere". However, a 2008 microscopy study posited that desert varnish has already been reproduced with chemistry not involving life in the lab, and that the main component is actually silica and not clay as previously thought. The study notes that desert varnish is an excellent fossilizer for microbes and indicator of water. Desert varnish appears to have been observed by rovers on Mars, and if examined may contain fossilized life from Mars's wet period.
Composition
Originally scientists thought that the varnish was made from substances drawn out of the rocks it coats. Microscopic and microchemical observations, however, show that a major part of varnish is clay, which could only arrive by wind. Clay, then, acts as a substrate to catch additional substances that chemically react together when the rock reaches high temperatures in the desert sun. Wetting by dew is also important in the process.
An important characteristic of black desert varnish is that it has an unusually high concentration of manganese. Manganese is relatively rare in the Earth's crust, making up only 0.12% of its weight. In black desert varnish, however, manganese is 50 to 60 times more abundan
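The enrichment figures quoted above imply a rough manganese concentration for black desert varnish. A quick back-of-envelope check, taking the excerpt's numbers (0.12% crustal abundance, 50 to 60 times enrichment) at face value:

```python
# Back-of-envelope check of the manganese enrichment quoted above.
crustal_mn = 0.12            # crustal abundance, percent by weight
low = 50 * crustal_mn        # lower enrichment bound
high = 60 * crustal_mn       # upper enrichment bound
print(f"Mn in black desert varnish: ~{low:.1f}-{high:.1f} % by weight")
# -> Mn in black desert varnish: ~6.0-7.2 % by weight
```

That is, the varnish would be several percent manganese by weight, consistent with its description as unusually manganese-rich.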
Document 1:::
In 1976 two identical Viking program landers each carried four types of biological experiments to the surface of Mars. The first successful Mars landers, Viking 1 and Viking 2, then carried out experiments to look for biosignatures of microbial life on Mars. The landers each used a robotic arm to pick up and place soil samples into sealed test containers on the craft.
The two landers carried out the same tests at two places on Mars' surface, Viking 1 near the equator and Viking 2 further north.
The experiments
The four experiments below are presented in the order in which they were carried out by the two Viking landers. The biology team leader for the Viking program was Harold P. Klein (NASA Ames).
Gas chromatograph — mass spectrometer
A gas chromatograph — mass spectrometer (GCMS) is a device that separates vapor components chemically via a gas chromatograph and then feeds the result into a mass spectrometer, which measures the molecular weight of each chemical. As a result, it can separate, identify, and quantify a large number of different chemicals. The GCMS (PI: Klaus Biemann, MIT) was used to analyze the components of untreated Martian soil, and particularly those components that are released as the soil is heated to different temperatures. It could measure molecules present at a level of a few parts per billion.
The GCMS measured no significant amount of organic molecules in the Martian soil. In fact, Martian soils were found to contain less carbon than lifeless lunar soils returned by the Apollo program. This result was difficult to explain if Martian bacterial metabolism was responsible for the positive results seen by the Labeled Release experiment (see below). A 2011 astrobiology textbook notes that this was the decisive factor due to which "For most of the Viking scientists, the final conclusion was that the Viking missions failed to detect life in the Martian soil."
Experiments conducted in 2008 by the Phoenix lander discovered the presence o
Document 2:::
Infrared Spectrometer for ExoMars (ISEM) is an infrared spectrometer for remote sensing that is part of the science payload on board the European Space Agency Rosalind Franklin rover, tasked to search for biosignatures and biomarkers on Mars. The rover is planned to be launched in August–October 2022 and land on Mars in spring 2023.
ISEM will provide context assessment of the surface mineralogy in the vicinity of the Rosalind Franklin rover for selection of potential astrobiological targets. The Principal Investigator is Oleg Korablev from the Russian Space Research Institute (IKI).
Overview
The Infrared Spectrometer for ExoMars (ISEM) is being developed by the Russian Space Research Institute (IKI). It will be the first instance of near-infrared spectroscopy (NIR) observations done from the Mars surface. The instrument will be installed on the Rosalind Franklin rover's mast to measure reflected solar radiation in the near infrared range for context assessment of the surface mineralogy in the vicinity of Rosalind Franklin for selection of potential astrobiological targets. As the number of samples obtained with the drill will be limited, the selection of high-value sites for drilling will be crucial. Working with PanCam (a high-resolution panoramic camera), ISEM will aid in the selection of potential targets, especially water-bearing minerals, for close-up investigations and drilling sites.
ISEM could detect, if present, organic compounds, including evolving trace gases such as hydrocarbons like methane in the Martian atmosphere.
Objectives
The stated science objectives of ISEM are:
Geological investigation and study of the composition of Martian soils in the uppermost few millimeters of the surface.
Characterisation of the composition of surface materials, discriminating between various classes of silicates, oxides, hydrated minerals and carbonates.
Identification and mapping of the distribution of aqueous alteration products on Mars.
Real-time assessment of
Document 3:::
The Mars Plant Experiment (MPX) was an experiment proposed but not selected for the Mars 2020 rover.
It would have tried to germinate and grow 200 Arabidopsis seeds in a small heated greenhouse using an earth-like atmosphere.
History
The Mars Plant Experiment began in 2000 and continued until 2020, when the most recent Mars mission launched.
Details
The experiment was created in the hope of one day sustaining life on Mars. In a forum held in Washington, D.C., MPX's Deputy Principal Investigator Heather Smith, from NASA's Ames Research Center, discussed the importance of plants on Mars and the future these flora additions may create for humans on the Red Planet. "In order to do a long-term, sustainable base on Mars, you would want to be able to establish that plants can at least grow on Mars," Smith said. "This would be the first step in that... we just send the seeds there and watch them grow." The experiment would essentially lay the groundwork for establishing colonies on Mars. Space.com states: "MPX would employ a clear "CubeSat" box — the case for a cheap and tiny satellite — which would be affixed to the exterior of the 2020 rover. This box would hold Earth air and about 200 seeds of Arabidopsis, a small flowering plant that's commonly used in scientific research. The seeds would receive water when the rover touched down on Mars, and would then be allowed to grow for two weeks or so." The end goal was to have small greenhouses on Mars and to prove that plants could grow there. When the final payload selections were made, however, the experiment was cut and did not fly on the mission; it may yet be included on a future one.
Document 4:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is in the soil that causes Mars to look red?
A. iron
B. sand
C. carbon
D. garnet
Answer:
|