id | question_type | question | choices | answer | explanation | prompt
---|---|---|---|---|---|---|
sciq-4598
|
multiple_choice
|
What is the sum of the masses of the atoms in a molecule?
|
[
"mass effect",
"molecular mass",
"atomic energy",
"compound mass"
] |
B
|
Relevant Documents:
Document 0:::
The atomic mass (ma or m) is the mass of an atom. Although the SI unit of mass is the kilogram (symbol: kg), atomic mass is often expressed in the non-SI unit dalton (symbol: Da) – equivalently, unified atomic mass unit (u). 1 Da is defined as 1/12 of the mass of a free carbon-12 atom at rest in its ground state. The protons and neutrons of the nucleus account for nearly all of the total mass of atoms, with the electrons and nuclear binding energy making minor contributions. Thus, the numeric value of the atomic mass when expressed in daltons has nearly the same value as the mass number. Conversion between mass in kilograms and mass in daltons can be done using the atomic mass constant mu.
The formula used for conversion is:
1 Da = mu = Mu / NA = M(12C) / (12 NA),
where Mu is the molar mass constant, NA is the Avogadro constant, and M(12C) is the experimentally determined molar mass of carbon-12.
The relative isotopic mass (see section below) can be obtained by dividing the atomic mass ma of an isotope by the atomic mass constant mu, yielding a dimensionless value. Thus, the atomic mass of a carbon-12 atom is 12 Da by definition, but the relative isotopic mass of a carbon-12 atom is simply 12. The sum of relative isotopic masses of all atoms in a molecule is the relative molecular mass.
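A minimal Python sketch of the relationships just described, assuming illustrative CODATA-style isotope masses supplied here (not taken from this excerpt): dividing an atomic mass by the atomic mass constant gives the dimensionless relative isotopic mass, and summing those values over a molecule gives the relative molecular mass.

```python
# Sketch: relative isotopic mass and relative molecular mass.
# Isotope masses below are illustrative CODATA-style values.

M_U = 1.66053906660e-27  # atomic mass constant m_u in kg (1 Da)

atomic_mass_kg = {       # atomic masses of selected isotopes, in kg
    "H-1": 1.6735575e-27,
    "O-16": 2.6560178e-26,
    "C-12": 12 * M_U,    # exactly 12 Da by definition
}

def relative_isotopic_mass(isotope: str) -> float:
    """Dimensionless ratio m_a / m_u."""
    return atomic_mass_kg[isotope] / M_U

# Relative molecular mass of water built from two H-1 and one O-16:
m_r_water = 2 * relative_isotopic_mass("H-1") + relative_isotopic_mass("O-16")
print(relative_isotopic_mass("C-12"))  # 12.0, by definition
print(m_r_water)                       # ~18.0106 (most common isotopes)
```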
The atomic mass of an isotope and the relative isotopic mass refer to a certain specific isotope of an element. Because substances are usually not isotopically pure, it is convenient to use the elemental atomic mass which is the average (mean) atomic mass of an element, weighted by the abundance of the isotopes. The dimensionless (standard) atomic weight is the weighted mean relative isotopic mass of a (typical naturally occurring) mixture of isotopes.
The atomic mass of atoms, ions, or atomic nuclei is slightly less than the sum of the masses of their constituent protons, neutrons, and electrons, due to binding energy mass loss (per E = mc2).
Relative isotopic mass
Relative isotopic mass (a property of a single atom) is not to be confused w
Document 1:::
The mass recorded by a mass spectrometer can refer to different physical quantities depending on the characteristics of the instrument and the manner in which the mass spectrum is displayed.
Units
The dalton (symbol: Da) is the standard unit that is used for indicating mass on an atomic or molecular scale (atomic mass). The unified atomic mass unit (symbol: u) is equivalent to the dalton. One dalton is approximately the mass of a single proton or neutron. The unified atomic mass unit has a value of approximately 1.660539 × 10−27 kg. The amu without the "unified" prefix is an obsolete unit based on oxygen, which was replaced in 1961.
Molecular mass
The molecular mass (abbreviated Mr) of a substance, formerly also called molecular weight and abbreviated as MW, is the mass of one molecule of that substance, relative to the unified atomic mass unit u (equal to 1/12 the mass of one atom of 12C). Due to this relativity, the molecular mass of a substance is commonly referred to as the relative molecular mass, and abbreviated to Mr.
Average mass
The average mass of a molecule is obtained by summing the average atomic masses of the constituent elements. For example, the average mass of natural water with formula H2O is 1.00794 + 1.00794 + 15.9994 = 18.01528 Da.
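This summation is straightforward to express in code; a small sketch, assuming only the standard atomic weights quoted in this excerpt:

```python
# Sketch: average molecular mass as the sum of average atomic masses.
average_atomic_mass = {"H": 1.00794, "O": 15.9994}  # Da, values from the text

def average_mass(composition: dict) -> float:
    """Average mass in Da for a composition {element: count}."""
    return sum(average_atomic_mass[el] * n for el, n in composition.items())

print(average_mass({"H": 2, "O": 1}))  # 18.01528 Da, matching the H2O example
```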
Mass number
The mass number, also called the nucleon number, is the number of protons and neutrons in an atomic nucleus. The mass number is unique for each isotope of an element and is written either after the element name or as a superscript to the left of an element's symbol. For example, carbon-12 (12C) has 6 protons and 6 neutrons.
Nominal mass
The nominal mass for an element is the mass number of its most abundant naturally occurring stable isotope, and for an ion or molecule, the nominal mass is the sum of the nominal masses of the constituent atoms. Isotope abundances are tabulated by IUPAC: for example carbon has two stable isotopes 12C at 98.9% natural abundance and 13C at 1.1% natural abundance, thus the nominal mass of carbon i
Document 2:::
Monoisotopic mass (Mmi) is one of several types of molecular masses used in mass spectrometry. The theoretical monoisotopic mass of a molecule is computed by taking the sum of the accurate masses (including mass defect) of the most abundant naturally occurring stable isotope of each atom in the molecule. For small molecules made up of low atomic number elements the monoisotopic mass is observable as an isotopically pure peak in a mass spectrum. This differs from the nominal molecular mass, which is the sum of the mass numbers of the primary isotope of each atom in the molecule and is an integer. It also differs from the molar mass, which is a type of average mass. For some elements, such as carbon, oxygen, hydrogen, nitrogen, and sulfur, the monoisotopic mass is exactly the same as the mass of the most abundant natural isotope, which is also the lightest one. However, this does not hold true for all atoms. Iron's most common isotope has a mass number of 56, while the stable isotopes of iron vary in mass number from 54 to 58. Monoisotopic mass is typically expressed in daltons (Da), also called unified atomic mass units (u).
Nominal mass vs monoisotopic mass
Nominal mass
Nominal mass is a term used in high-level mass spectrometric discussions; it can be calculated using the mass number of the most abundant isotope of each atom, without regard for the mass defect. For example, the nominal masses of a molecule of nitrogen (N2) and of ethylene (C2H4) come out as:
N2: (2 × 14) = 28 Da
C2H4: (2 × 12) + (4 × 1) = 28 Da
This means that when using a low-resolution mass spectrometer, such as a quadrupole mass analyser or a quadrupole ion trap, these two molecules cannot be distinguished after ionization: their m/z peaks overlap. If a high-resolution instrument such as an Orbitrap or an ion cyclotron resonance analyser is used, the two molecules can be distinguished.
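A short sketch of this comparison, assuming illustrative isotope masses (14N = 14.0030740 Da, 12C = 12 Da exactly, 1H = 1.00782503 Da): the two molecules collide at a nominal mass of 28 Da but separate at monoisotopic precision, which is what a high-resolution instrument resolves.

```python
# Sketch: nominal vs monoisotopic mass for N2 and C2H4.
nominal = {"N": 14, "C": 12, "H": 1}                          # mass numbers
monoisotopic = {"N": 14.0030740, "C": 12.0, "H": 1.00782503}  # Da

def mass(composition: dict, table: dict) -> float:
    return sum(table[el] * n for el, n in composition.items())

n2, ethylene = {"N": 2}, {"C": 2, "H": 4}
print(mass(n2, nominal), mass(ethylene, nominal))            # 28 28
print(mass(n2, monoisotopic), mass(ethylene, monoisotopic))  # 28.0061 28.0313
```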
Monoisotopic mass
When calculating
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
(a) increases
(b) decreases
(c) stays the same
(d) impossible to tell / need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
The dalton or unified atomic mass unit (symbols: Da or u) is a non-SI unit of mass defined as 1/12 of the mass of an unbound neutral atom of carbon-12 in its nuclear and electronic ground state and at rest. The atomic mass constant, denoted mu, is defined identically, giving mu = 1 Da.
This unit is commonly used in physics and chemistry to express the mass of atomic-scale objects, such as atoms, molecules, and elementary particles, both for discrete instances and multiple types of ensemble averages. For example, an atom of helium-4 has a mass of 4.0026 Da. This is an intrinsic property of the isotope and all helium-4 atoms have the same mass. Acetylsalicylic acid (aspirin), C9H8O4, has an average mass of about 180.16 Da. However, there are no acetylsalicylic acid molecules with this mass. The two most common masses of individual acetylsalicylic acid molecules are 180.0423 Da, having the most common isotopes, and 181.0456 Da, in which one carbon is carbon-13.
The molecular masses of proteins, nucleic acids, and other large polymers are often expressed with the units kilodalton (kDa) and megadalton (MDa). Titin, one of the largest known proteins, has a molecular mass of between 3 and 3.7 megadaltons. The DNA of chromosome 1 in the human genome has about 249 million base pairs, each with an average mass of about 650 Da, or roughly 160 GDa total.
The mole is a unit of amount of substance, widely used in chemistry and physics, which was originally defined so that the mass of one mole of a substance, in grams, would be numerically equal to the average mass of one of its constituent particles, in daltons. That is, the molar mass of a chemical compound was meant to be numerically equal to its average molecular mass. For example, the average mass of one molecule of water is about 18.0153 daltons, and one mole of water is about 18.0153 grams. A protein whose molecule has an average mass of 64 kDa would have a molar mass of 64 kg/mol. However, while this equality can be assumed for almost all practical purposes, it is now only approximate, because of the 2019 redefin
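A minimal sketch of this near-equality, assuming the exact post-2019 Avogadro constant and a measured value for the dalton; the product differs from 1 g/mol per Da only far beyond any practical precision.

```python
# Sketch: molecular mass in Da vs molar mass in g/mol after 2019.
N_A = 6.02214076e23              # mol^-1, exact since the 2019 redefinition
DA_IN_GRAMS = 1.66053906660e-24  # 1 Da in grams (experimentally determined)

molecular_mass_da = 18.0153      # average mass of one water molecule
molar_mass_g_per_mol = molecular_mass_da * DA_IN_GRAMS * N_A
print(molar_mass_g_per_mol)      # ~18.0153 g/mol, numerically almost identical
```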
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the sum of the masses of the atoms in a molecule?
A. mass effect
B. molecular mass
C. atomic energy
D. compound mass
Answer:
|
|
sciq-3309
|
multiple_choice
|
Living organisms release carbon dioxide into the atmosphere by what method?
|
[
"widespread respiration",
"genomic respiration",
"cellular respiration",
"major respiration"
] |
C
|
Relevant Documents:
Document 0:::
Carbon dioxide is a chemical compound with the chemical formula CO2. It is made up of molecules that each have one carbon atom covalently double bonded to two oxygen atoms. It is found in the gas state at room temperature, and as the source of available carbon in the carbon cycle, atmospheric CO2 is the primary carbon source for life on Earth. In the air, carbon dioxide is transparent to visible light but absorbs infrared radiation, acting as a greenhouse gas. Carbon dioxide is soluble in water and is found in groundwater, lakes, ice caps, and seawater. When carbon dioxide dissolves in water, it forms carbonate and mainly bicarbonate (HCO3−), which causes ocean acidification as atmospheric CO2 levels increase.
It is a trace gas in Earth's atmosphere at 421 parts per million (ppm), or about 0.04% (as of May 2022), having risen from pre-industrial levels of 280 ppm, or about 0.025%. Burning fossil fuels is the primary cause of these increased concentrations and also the primary cause of climate change.
Its concentration in Earth's pre-industrial atmosphere since late in the Precambrian was regulated by organisms and geological phenomena. Plants, algae and cyanobacteria use energy from sunlight to synthesize carbohydrates from carbon dioxide and water in a process called photosynthesis, which produces oxygen as a waste product. In turn, oxygen is consumed and CO2 is released as waste by all aerobic organisms when they metabolize organic compounds to produce energy by respiration. CO2 is released from organic materials when they decay or combust, such as in forest fires. Since plants require CO2 for photosynthesis, and humans and animals depend on plants for food, CO2 is necessary for the survival of life on Earth.
Carbon dioxide is 53% more dense than dry air, but is long lived and thoroughly mixes in the atmosphere. About half of excess CO2 emissions to the atmosphere are absorbed by land and ocean carbon sinks. These sinks can become saturated and are volatile, as decay and wildfires result i
Document 1:::
Carbon sequestration (or carbon storage) is the process of storing carbon in a carbon pool. Carbon sequestration is a naturally occurring process but it can also be enhanced or achieved with technology, for example within carbon capture and storage projects. There are two main types of carbon sequestration: geologic and biologic (also called biosequestration).
Carbon dioxide (CO2) is naturally captured from the atmosphere through biological, chemical, and physical processes. These changes can be accelerated through changes in land use and agricultural practices, such as converting crop land into land for non-crop fast growing plants. Artificial processes have been devised to produce similar effects, including large-scale, artificial capture and sequestration of industrially produced CO2 using subsurface saline aquifers or aging oil fields. Other technologies that work with carbon sequestration include bio-energy with carbon capture and storage, biochar, enhanced weathering, and direct air carbon capture and sequestration (DACCS).
Forests, kelp beds, and other forms of plant life absorb carbon dioxide from the air as they grow, and bind it into biomass. However, these biological stores are considered volatile carbon sinks as the long-term sequestration cannot be guaranteed. For example, natural events, such as wildfires or disease, economic pressures and changing political priorities can result in the sequestered carbon being released back into the atmosphere. Carbon dioxide that has been removed from the atmosphere can also be stored in the Earth's crust by injecting it into the subsurface, or in the form of insoluble carbonate salts (mineral sequestration). These methods are considered non-volatile because they remove carbon from the atmosphere and sequester it indefinitely and presumably for a considerable duration (thousands to millions of years).
To enhance carbon sequestration processes in oceans the following technologies have been proposed but none have achieved lar
Document 2:::
Community respiration (CR) refers to the total amount of carbon dioxide produced by the individual organisms in a given community, originating from the cellular respiration of organic material. CR is an important ecological index, as it dictates the amount of production available to the higher trophic levels and influences biogeochemical cycles. CR is often used as a proxy for the biological activity of the microbial community.
Overview
The process of cellular respiration is foundational to the ecological index, community respiration (CR). Cellular respiration can be used to explain relationships between heterotrophic organisms and the autotrophic ones they consume. The process of cellular respiration consists of a series of metabolic reactions using biological material produced by autotrophic organisms, such as oxygen (O2) and glucose (C6H12O6), to turn their chemical energy into adenosine triphosphate (ATP), which can then be used in other metabolic reactions to power the organism, creating carbon dioxide (CO2) and water (H2O) as by-products. The overall process of cellular respiration can be summarized as: C6H12O6 + 6 O2 → 6 CO2 + 6 H2O + ATP.
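A small sketch verifying that this summary equation is atom-balanced; the molecular compositions are written out by hand rather than parsed from formula strings.

```python
# Sketch: checking the atom balance of C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O.
from collections import Counter

def atoms(composition: dict, coefficient: int = 1) -> Counter:
    """Element counts for `coefficient` copies of a molecule."""
    return Counter({el: n * coefficient for el, n in composition.items()})

glucose = {"C": 6, "H": 12, "O": 6}
o2, co2, h2o = {"O": 2}, {"C": 1, "O": 2}, {"H": 2, "O": 1}

reactants = atoms(glucose) + atoms(o2, 6)
products = atoms(co2, 6) + atoms(h2o, 6)
print(reactants == products)  # True: the equation balances
```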
The ATP created during cellular respiration is absolutely necessary for a living being to function, as it is the "energy currency" of the cell, and none of the other metabolic functions could be sustained without it. The process of cellular respiration is an essential component of the carbon cycle, which tracks the recycling of carbon through the earth and atmosphere in various compounds such as CO2, H2CO3, HCO3−, C6H12O6, and CH4, to name a few.
The concentration of carbon dioxide in a given area can act as a proxy indicator for the metabolic function of an individual, or individuals, in that area. Since the process of cellular respiration consumes oxygen and produces carbon dioxide, the amount of carbon dioxide can be used to infer the amount of oxygen used in the environment specifically for metabolic requirements. Since cellular respi
Document 3:::
The carbon cycle is that part of the biogeochemical cycle by which carbon is exchanged among the biosphere, pedosphere, geosphere, hydrosphere, and atmosphere of Earth. Other major biogeochemical cycles include the nitrogen cycle and the water cycle. Carbon is the main component of biological compounds as well as a major component of many minerals such as limestone. The carbon cycle comprises a sequence of events that are key to making Earth capable of sustaining life. It describes the movement of carbon as it is recycled and reused throughout the biosphere, as well as long-term processes of carbon sequestration (storage) to and release from carbon sinks.
To describe the dynamics of the carbon cycle, a distinction can be made between the fast and slow carbon cycle. The fast carbon cycle is also referred to as the biological carbon cycle. Fast carbon cycles can complete within years, moving substances from atmosphere to biosphere, then back to the atmosphere. Slow or geological cycles (also called deep carbon cycle) can take millions of years to complete, moving substances through the Earth's crust between rocks, soil, ocean and atmosphere.
Human activities have disturbed the fast carbon cycle for many centuries by modifying land use, and moreover with the recent industrial-scale mining of fossil carbon (coal, petroleum, and gas extraction, and cement manufacture) from the geosphere. Carbon dioxide in the atmosphere had increased nearly 52% over pre-industrial levels by 2020, forcing greater atmospheric and Earth surface heating by the Sun. The increased carbon dioxide has also caused a reduction in the ocean's pH value and is fundamentally altering marine chemistry. The majority of fossil carbon has been extracted over just the past half century, and rates continue to rise rapidly, contributing to human-caused climate change.
Main compartments
The carbon cycle was first described by Antoine Lavoisier and Joseph Priestley, and popularised by Humphry Davy. The g
Document 4:::
Soil respiration refers to the production of carbon dioxide when soil organisms respire. This includes respiration of plant roots, the rhizosphere, microbes and fauna.
Soil respiration is a key ecosystem process that releases carbon from the soil in the form of CO2. CO2 is acquired by plants from the atmosphere and converted into organic compounds in the process of photosynthesis. Plants use these organic compounds to build structural components or respire them to release energy. When plant respiration occurs below-ground in the roots, it adds to soil respiration. Over time, plant structural components are consumed by heterotrophs. This heterotrophic consumption releases CO2 and when this CO2 is released by below-ground organisms, it is considered soil respiration.
The amount of soil respiration that occurs in an ecosystem is controlled by several factors. The temperature, moisture, nutrient content and level of oxygen in the soil can produce extremely disparate rates of respiration. These rates of respiration can be measured by a variety of methods. Other methods can be used to separate the source components, in this case the type of photosynthetic pathway (C3/C4), of the respired plant structures.
Soil respiration rates can be largely affected by human activity. This is because humans have the ability to and have been changing the various controlling factors of soil respiration for numerous years. Global climate change is composed of numerous changing factors including rising atmospheric CO2, increasing temperature and shifting precipitation patterns. All of these factors can affect the rate of global soil respiration. Increased nitrogen fertilization by humans also has the potential to affect rates over the entire planet.
Soil respiration and its rate across ecosystems is extremely important to understand. This is because soil respiration plays a large role in global carbon cycling as well as other nutrient cycles. The respiration of plant structures releases
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Living organisms release carbon dioxide into the atmosphere by what method?
A. widespread respiration
B. genomic respiration
C. cellular respiration
D. major respiration
Answer:
|
|
sciq-753
|
multiple_choice
|
What are needed to oxidize the noble gases to form compounds in positive oxidation states?
|
[
"potent oxidants",
"bacteria oxidants",
"protein oxidants",
"metal oxidants"
] |
A
|
Relevant Documents:
Document 0:::
An inert gas is a gas that does not readily undergo chemical reactions with other chemical substances and therefore does not readily form chemical compounds. The noble gases often do not react with many substances and were historically referred to as the inert gases. Inert gases are used generally to avoid unwanted chemical reactions degrading a sample. These undesirable chemical reactions are often oxidation and hydrolysis reactions with the oxygen and moisture in air. The term inert gas is context-dependent because several of the noble gases can be made to react under certain conditions.
Purified argon and nitrogen gases are the most commonly used inert gases due to their high natural abundance (78.3% N2, 1% Ar in air) and low relative cost.
Unlike noble gases, an inert gas is not necessarily elemental and is often a compound gas. Like the noble gases, the tendency for non-reactivity is due to the valence, the outermost electron shell, being complete in all the inert gases. This is a tendency, not a rule, as all noble gases and other "inert" gases can react to form compounds under some conditions.
Need and necessity
The inert gases are obtained by fractional distillation of air, with the exception of helium, which is separated from a few natural gas sources rich in this element through cryogenic distillation or membrane separation. For specialized applications, purified inert gas may be produced by specialized on-site generators. Such generators are often used by chemical tankers and product carriers (smaller vessels). Benchtop specialized generators are also available for laboratories.
Applications on inert gas
Because of the non-reactive properties of inert gases, they are often useful to prevent undesirable chemical reactions from taking place. Food is packed in an inert gas to remove oxygen gas. This prevents bacteria from growing. It also prevents chemical oxidation by oxygen in normal air. An example is the rancidification (caused by oxidation) of edible oils. In food packaging, ine
Document 1:::
Nitrogen dioxide is a chemical compound with the formula NO2 and is one of several nitrogen oxides. NO2 is an intermediate in the industrial synthesis of nitric acid, millions of tons of which are produced each year for use (primarily in the production of fertilizers). At higher temperatures, nitrogen dioxide is a reddish-brown gas. It can be fatal if inhaled in large quantities. The LC50 (median lethal concentration) for humans has been estimated to be 174 ppm for a 1-hour exposure. Nitrogen dioxide is a paramagnetic, bent molecule with C2v point group symmetry.
It is included in the NOx family of atmospheric pollutants.
Properties
Nitrogen dioxide is a reddish-brown gas with a pungent, acrid odor above 21.2 °C and becomes a yellowish-brown liquid below 21.2 °C. It forms an equilibrium with its dimer, dinitrogen tetroxide (N2O4), and converts almost entirely to N2O4 below −11.2 °C.
The bond length between the nitrogen atom and the oxygen atom is 119.7 pm. This bond length is consistent with a bond order between one and two.
Unlike ozone (O3), the ground electronic state of nitrogen dioxide is a doublet state, since nitrogen has one unpaired electron, which decreases the alpha effect compared with nitrite and creates a weak bonding interaction with the oxygen lone pairs. The lone electron in NO2 also means that this compound is a free radical, so the formula for nitrogen dioxide is often written as •NO2.
The reddish-brown color is a consequence of preferential absorption of light in the blue region of the spectrum (400–500 nm), although the absorption extends throughout the visible (at shorter wavelengths) and into the infrared (at longer wavelengths). Absorption of light at wavelengths shorter than about 400 nm results in photolysis (to form NO and atomic oxygen); in the atmosphere the addition of the oxygen atom so formed to O2 results in ozone.
Preparation
Nitrogen dioxide typically arises via the oxidation of nitric oxide by oxygen in air (e.g. as result of corona discharge):
2 NO + O2 → 2 NO2
Nitrogen dioxide is formed in m
Document 2:::
Nitrous acid (molecular formula ) is a weak and monoprotic acid known only in solution, in the gas phase and in the form of nitrite () salts. Nitrous acid is used to make diazonium salts from amines. The resulting diazonium salts are reagents in azo coupling reactions to give azo dyes.
Structure
In the gas phase, the planar nitrous acid molecule can adopt both a syn and an anti form. The anti form predominates at room temperature, and IR measurements indicate it is more stable by around 2.3 kJ/mol.
Preparation
Nitrous acid is usually generated by acidification of aqueous solutions of sodium nitrite with a mineral acid. The acidification is usually conducted at ice temperatures, and the HNO2 is consumed in situ. Free nitrous acid is unstable and decomposes rapidly.
Nitrous acid can also be produced by dissolving dinitrogen trioxide in water according to the equation
N2O3 + H2O → 2 HNO2
Reactions
Nitrous acid is the main chemophore in the Liebermann reagent, used to spot-test for alkaloids.
Decomposition
Gaseous nitrous acid, which is rarely encountered, decomposes into nitrogen dioxide, nitric oxide, and water:
2 HNO2 → NO2 + NO + H2O
Nitrogen dioxide disproportionates into nitric acid and nitrous acid in aqueous solution:
2 NO2 + H2O → HNO3 + HNO2
In warm or concentrated solutions, the overall reaction amounts to production of nitric acid, water, and nitric oxide:
3 HNO2 → HNO3 + 2 NO + H2O
The nitric oxide can subsequently be re-oxidized by air to nitric acid, making the overall reaction:
2 HNO2 + O2 → 2 HNO3
Reduction
With I− and Fe2+ ions, NO is formed:
2 HNO2 + 2 KI + H2SO4 → I2 + 2 NO + 2 H2O + K2SO4
2 HNO2 + 2 FeSO4 + H2SO4 → Fe2(SO4)3 + 2 NO + 2 H2O
With Sn2+ ions, N2O is formed:
Document 3:::
Classification
Oxidoreductases are classified as EC 1 in the EC number classification of enzymes. Oxidoreductases can be further classified into 21 subclasses:
EC 1.1 includes oxidoreductases that act on the CH-OH group of donors (alcohol oxidoreductases such as methanol dehydrogenase)
EC 1.2 includes oxidoreductases that act on the aldehyde or oxo group of donors
EC 1.3 includes oxidoreductases that act on the CH-CH group of donors (CH-CH oxidore
Document 4:::
Dioxygen complexes are coordination compounds that contain O2 as a ligand. The study of these compounds is inspired by oxygen-carrying proteins such as myoglobin, hemoglobin, hemerythrin, and hemocyanin. Several transition metals form complexes with O2, and many of these complexes form reversibly. The binding of O2 is the first step in many important phenomena, such as cellular respiration, corrosion, and industrial chemistry. The first synthetic oxygen complex was demonstrated in 1938 with a cobalt(II) complex that reversibly bound O2.
Mononuclear complexes of O2
O2 binds to a single metal center either "end-on" (η1-) or "side-on" (η2-). The bonding and structures of these compounds are usually evaluated by single-crystal X-ray crystallography, focusing both on the overall geometry as well as the O–O distances, which reveals the bond order of the O2 ligand.
Complexes of η1-O2 ligands
O2 adducts derived from cobalt(II) and iron(II) complexes of porphyrin (and related anionic macrocyclic ligands) exhibit this bonding mode. Myoglobin and hemoglobin are famous examples, and many synthetic analogues have been described that behave similarly. Binding of O2 is usually described as proceeding by electron transfer from the metal(II) center to give superoxide () complexes of metal(III) centers. As shown by the mechanisms of cytochrome P450 and alpha-ketoglutarate-dependent hydroxylase, Fe-η1-O2 bonding is conducive to formation of Fe(IV) oxo centers. O2 can bind to one metal of a bimetallic unit via the same modes discussed above for mononuclear complexes. A well-known example is the active site of the protein hemerythrin, which features a diiron carboxylate that binds O2 at one Fe center. Dinuclear complexes can also cooperate in the binding, although the initial attack of O2 probably occurs at a single metal.
Complexes of η2-O2 ligands
η2-bonding is the most common motif seen in coordination chemistry of dioxygen. Such complexes can be generated by treating low-valent me
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are needed to oxidize the noble gases to form compounds in positive oxidation states?
A. potent oxidants
B. bacteria oxidants
C. protein oxidants
D. metal oxidants
Answer:
|
|
sciq-2655
|
multiple_choice
|
What is the bulb called in the frontal lobe that processes smells?
|
[
"olfactory bulb",
"auditory bulb",
"sensory bulb",
"peripheral bulb"
] |
A
|
Relevant Documents:
Document 0:::
The olfactory tubercle (OT), also known as the tuberculum olfactorium, is a multi-sensory processing center that is contained within the olfactory cortex and ventral striatum and plays a role in reward cognition. The OT has also been shown to play a role in locomotor and attentional behaviors, particularly in relation to social and sensory responsiveness, and it may be necessary for behavioral flexibility. The OT is interconnected with numerous brain regions, especially the sensory, arousal, and reward centers, thus making it a potentially critical interface between processing of sensory information and the subsequent behavioral responses.
The OT is a composite structure that receives direct input from the olfactory bulb and contains the morphological and histochemical characteristics of the ventral pallidum and the striatum of the forebrain. The dopaminergic neurons of the mesolimbic pathway project onto the GABAergic medium spiny neurons of the nucleus accumbens and olfactory tubercle (receptor D3 is abundant in these two areas). In addition, the OT contains tightly packed cell clusters known as the islands of Calleja, which consist of granule cells. Even though it is part of the olfactory cortex and receives direct input from the olfactory bulb, it has not been shown to play a role in the processing of odors.
Structure
The olfactory tubercle differs in location and relative size between humans, other primates, rodents, birds, and other animals. In most cases, the olfactory tubercle is identified as a round bulge along the basal forebrain anterior to the optic chiasm and posterior to the olfactory peduncle. In humans and other primates, visual identification of the olfactory tubercle is not easy because the basal forebrain bulge is small in these animals. With regard to functional anatomy, the olfactory tubercle can be considered to be a part of three larger networks. First, it is considered to be part of the basal forebrain, the nucleus accumbens, and the amygdalo
Document 1:::
Olfactory glands, also known as Bowman's glands, are a type of nasal gland situated in the part of the olfactory mucosa beneath the olfactory epithelium, that is the lamina propria, a connective tissue also containing fibroblasts, blood vessels and bundles of fine axons from the olfactory neurons.
An olfactory gland consists of an acinus in the lamina propria and a secretory duct going out through the olfactory epithelium.
Electron microscopy studies show that olfactory glands contain cells with large secretory vesicles. Olfactory glands secrete the gel-forming mucin protein MUC5B. They might secrete proteins such as lactoferrin, lysozyme, amylase and IgA, similarly to serous glands. The exact composition of the secretions from olfactory glands is unclear, but there is evidence that they produce odorant-binding protein.
Function
The olfactory glands are tubuloalveolar glands surrounded by olfactory receptors and sustentacular cells in the olfactory epithelium. These glands produce mucus to lubricate the olfactory epithelium and dissolve odorant-containing gases. Several olfactory binding proteins are produced from the olfactory glands that help facilitate the transportation of odorants to the olfactory receptors. These cells express the mRNA for transforming growth factor α, stimulating the production of new olfactory receptor cells.
See also
William Bowman
List of distinct cell types in the adult human body
Document 2:::
The temporal lobe is one of the four major lobes of the cerebral cortex in the brain of mammals. The temporal lobe is located beneath the lateral fissure on both cerebral hemispheres of the mammalian brain.
The temporal lobe is involved in processing sensory input into derived meanings for the appropriate retention of visual memory, language comprehension, and emotion association.
Temporal refers to the head's temples.
Structure
The temporal lobe consists of structures that are vital for declarative or long-term memory. Declarative (denotative) or explicit memory is conscious memory divided into semantic memory (facts) and episodic memory (events). Medial temporal lobe structures that are critical for long-term memory include the hippocampus, along with the surrounding hippocampal region consisting of the perirhinal, parahippocampal, and entorhinal neocortical regions. The hippocampus is critical for memory formation, and the surrounding medial temporal cortex is currently theorized to be critical for memory storage. The prefrontal and visual cortices are also involved in explicit memory.
Research has shown that lesions in the hippocampus of monkeys results in limited impairment of function, whereas extensive lesions that include the hippocampus and the medial temporal cortex result in severe impairment.
Function
Visual memories
The temporal lobe communicates with the hippocampus and plays a key role in the formation of explicit long-term memory modulated by the amygdala.
Processing sensory input
Auditory Adjacent areas in the superior, posterior, and lateral parts of the temporal lobes are involved in high-level auditory processing. The temporal lobe is involved in primary auditory perception, such as hearing, and holds the primary auditory cortex. The primary auditory cortex receives sensory information from the ears and secondary areas process the information into meaningful units such as speech and words. The superior temporal gyrus includes an area (wit
Document 3:::
Sniffing is a perceptually-relevant behavior, defined as the active sampling of odors through the nasal cavity for the purpose of information acquisition. This behavior, displayed by all terrestrial vertebrates, is typically identified based upon changes in respiratory frequency and/or amplitude, and is often studied in the context of odor-guided behaviors and olfactory perceptual tasks. Sniffing is quantified by measuring intra-nasal pressure or flow of air or, while less accurate, through a strain gauge on the chest to measure total respiratory volume. Strategies for sniffing behavior vary depending upon the animal, with small animals (rats, mice, hamsters) displaying sniffing frequencies ranging from 4 to 12 Hz but larger animals (humans) sniffing at much lower frequencies, usually less than 2 Hz. Subserving sniffing behaviors, evidence for an "olfactomotor" circuit in the brain exists, wherein perception or expectation of an odor can trigger the brain's respiratory center to allow for the modulation of sniffing frequency and amplitude and thus acquisition of odor information. Sniffing is analogous to other stimulus sampling behaviors, including visual saccades, active touch, and whisker movements in small animals (viz., whisking). Atypical sniffing has been reported in cases of neurological disorders, especially those disorders characterized by impaired motor function and olfactory perception.
Background and history of sniffing
Background
The behavior of sniffing incorporates changes in air flow within the nose. This can involve changes in the depth of inhalation and the frequency of inhalations. Both of these entail modulations in the manner whereby air flows within the nasal cavity and through the nostrils. As a consequence, when the air being breathed is odorized, odors can enter and leave the nasal cavity with each sniff. The same applies regardless of what gas is being inhaled, including toxins and solvents, and other industrial chemicals which may be inh
Document 4:::
The olfactory tract is a bilateral bundle of afferent nerve fibers from the mitral and tufted cells of the olfactory bulb that connects to several target regions in the brain, including the piriform cortex, amygdala, and entorhinal cortex. It is a narrow white band, triangular on coronal section, the apex being directed upward.
Structure
The olfactory tract and olfactory bulb lie in the olfactory sulcus, a sulcus formed by the medial orbital gyrus on the inferior surface of each frontal lobe. The olfactory tracts lie in the sulci which run closely parallel to the midline. Fibers of the olfactory tract appear to end in the antero-lateral part of the olfactory tubercle, the dorsal and external parts of the anterior olfactory nucleus, the frontal and temporal parts of the prepyriform area, the cortico-medial group of amygdala nuclei and the nucleus of the stria terminalis.
The olfactory tract divides posteriorly into a medial and a lateral stria. Caudal to this is the olfactory trigone, and the anterior perforated substance.
Medial olfactory stria
The medial olfactory stria turns medially behind the parolfactory area and ends in the subcallosal gyrus; in some cases a small intermediate stria is seen running backward to the anterior perforated substance.
Lateral olfactory stria
The lateral olfactory stria is directed across the lateral part of the anterior perforated substance and then bends abruptly medially toward the uncus of the parahippocampal gyrus.
Clinical significance
Destruction to the olfactory tract results in ipsilateral anosmia (loss of the ability to smell). Anosmia, either total or partial, is a symptom of Kallmann syndrome, a genetic disorder that results in disruption of the development of the olfactory tract. The depth of the olfactory sulcus is an indicator of such congenital anosmia.
Additional images
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the bulb called in the frontal lobe that processes smells?
A. olfactory bulb
B. auditory bulb
C. sensory bulb
D. peripheral bulb
Answer:
|
|
sciq-7611
|
multiple_choice
|
Weather maps show storms, air masses, and what?
|
[
"currents",
"regions",
"fronts",
"patterns"
] |
C
|
Relevant Documents:
Document 0:::
The following outline is provided as an overview of and topical guide to the field of Meteorology.
Meteorology – the interdisciplinary, scientific study of the Earth's atmosphere, with the primary focus being to understand, explain, and forecast weather events. Meteorology is applied to and employed by a wide variety of diverse fields, including the military, energy production, transport, agriculture, and construction.
Essence of meteorology
Meteorology
Climate – the average and variations of weather in a region over long periods of time.
Meteorology – the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting (in contrast with climatology).
Weather – the set of all the phenomena in a given atmosphere at a given time.
Branches of meteorology
Microscale meteorology – the study of atmospheric phenomena about 1 km or less, smaller than mesoscale, including small and generally fleeting cloud "puffs" and other small cloud features
Mesoscale meteorology – the study of weather systems about 5 kilometers to several hundred kilometers, smaller than synoptic scale systems but larger than microscale and storm-scale cumulus systems, such as sea breezes, squall lines, and mesoscale convective complexes
Synoptic scale meteorology – the study of weather systems at a horizontal length scale of the order of 1000 kilometres (about 620 miles) or more
Methods in meteorology
Surface weather analysis – a special type of weather map that provides a view of weather elements over a geographical area at a specified time based on information from ground-based weather stations
Weather forecasting
Weather forecasting – the application of science and technology to predict the state of the atmosphere for a future time and a given location
Data collection
Pilot Reports
Weather maps
Weather map
Surface weather analysis
Forecasts and reporting of
Atmospheric pressure
Dew point
High-pressure area
Ice
Black ice
Frost
Low-pressure area
Precipitation
Document 1:::
This is a list of meteorology topics. The terms relate to meteorology, the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting. (see also: List of meteorological phenomena)
A
advection
aeroacoustics
aerobiology
aerography (meteorology)
aerology
air parcel (in meteorology)
air quality index (AQI)
airshed (in meteorology)
American Geophysical Union (AGU)
American Meteorological Society (AMS)
anabatic wind
anemometer
annular hurricane
anticyclone (in meteorology)
apparent wind
Atlantic Oceanographic and Meteorological Laboratory (AOML)
Atlantic hurricane season
atmometer
atmosphere
Atmospheric Model Intercomparison Project (AMIP)
Atmospheric Radiation Measurement (ARM)
(atmospheric boundary layer [ABL]) planetary boundary layer (PBL)
atmospheric chemistry
atmospheric circulation
atmospheric convection
atmospheric dispersion modeling
atmospheric electricity
atmospheric icing
atmospheric physics
atmospheric pressure
atmospheric sciences
atmospheric stratification
atmospheric thermodynamics
atmospheric window (see under Threats)
B
ball lightning
balloon (aircraft)
baroclinity
barotropity
barometer ("to measure atmospheric pressure")
berg wind
biometeorology
blizzard
bomb (meteorology)
buoyancy
Bureau of Meteorology (in Australia)
C
Canada Weather Extremes
Canadian Hurricane Centre (CHC)
Cape Verde-type hurricane
capping inversion (in meteorology) (see "severe thunderstorms" in paragraph 5)
carbon cycle
carbon fixation
carbon flux
carbon monoxide (see under Atmospheric presence)
ceiling balloon ("to determine the height of the base of clouds above ground level")
ceilometer ("to determine the height of a cloud base")
celestial coordinate system
celestial equator
celestial horizon (rational horizon)
celestial navigation (astronavigation)
celestial pole
Celsius
Center for Analysis and Prediction of Storms (CAPS) (in Oklahoma in the US)
Center for the Study o
Document 2:::
The Todd Weather Folios are a collection of continental Australian synoptic charts that were published from 1879 to 1909.
The charts were created by Sir Charles Todd's office at the Adelaide Observatory. In addition to the charts, the folios include clippings of newspaper articles and telegraphic and handwritten information about the weather. The area covered is mainly the east and south-east of Australia, with occasional reference to other parts of Australasia and the world.
The maps are bound into approximately six-month folios, 63 of which cover the entire period. There are approximately 10,000 continental weather maps along with 750 rainfall maps for South Australia, 10 million printed words of news text, and innumerable handwritten observations and correspondences about the weather.
The folios are an earlier part of the National Archives of Australia listed collection series number D1384.
The History of the Folios
With the advent of the telegraph it became possible to simultaneously collect data, such as surface temperature and sea-level pressure, to draw synoptic weather charts. Upon his appointment as Postmaster General to the Colony, Charles Todd trained not only his telegraph operators but also his postmasters as weather observers. These observers provided valuable data points that, in combination with telegraphed observations from the other colonies (including New Zealand), showed the development and progress of weather activity across a large part of the Southern Hemisphere. Todd's best known feat was his construction management of the Overland Telegraph from Adelaide to Port Darwin. This line of communication was critical to his capacity to create continent-wide synoptic charts, as the telegraphic observations from the Outback enabled the connection of data points on the east coast of Australia with similar data points on the west and southern coasts. These continent-scale isobaric lines allowed Todd and his staff to draw synoptic charts that in the
Document 3:::
Numerical weather prediction (NWP) uses mathematical models of the atmosphere and oceans to predict the weather based on current weather conditions. Though first attempted in the 1920s, it was not until the advent of computer simulation in the 1950s that numerical weather predictions produced realistic results. A number of global and regional forecast models are run in different countries worldwide, using current weather observations relayed from radiosondes, weather satellites and other observing systems as inputs.
Mathematical models based on the same physical principles can be used to generate either short-term weather forecasts or longer-term climate predictions; the latter are widely applied for understanding and projecting climate change. The improvements made to regional models have allowed significant improvements in tropical cyclone track and air quality forecasts; however, atmospheric models perform poorly at handling processes that occur in a relatively constricted area, such as wildfires.
Manipulating the vast datasets and performing the complex calculations necessary to modern numerical weather prediction requires some of the most powerful supercomputers in the world. Even with the increasing power of supercomputers, the forecast skill of numerical weather models extends to only about six days. Factors affecting the accuracy of numerical predictions include the density and quality of observations used as input to the forecasts, along with deficiencies in the numerical models themselves. Post-processing techniques such as model output statistics (MOS) have been developed to improve the handling of errors in numerical predictions.
A more fundamental problem lies in the chaotic nature of the partial differential equations that describe the atmosphere. It is impossible to solve these equations exactly, and small errors grow with time (doubling about every five days). Present understanding is that this chaotic behavior limits accurate forecasts to ab
Document 4:::
Storm spotting is a form of weather spotting in which observers watch for the approach of severe weather, monitor its development and progression, and actively relay their findings to local authorities.
History
Storm spotting developed in the United States during the early 1940s. A joint project between the military and the weather bureau saw the deployment of trained military and aviation lightning spotters in areas where ammunitions for the war were manufactured. During 1942, a serious tornado struck a key operations center in Oklahoma and another tornado on May 15, 1943 destroyed parts of the Fort Riley military base located in Kansas. After these two events and a string of other tornado outbreaks, spotter networks became commonplace, and it is estimated that there were over 200 networks by 1945. Their mandate had also changed to include reporting all types of active or severe weather; this included giving snow depth and other reports during the winter as well as fire reports in the summer, along with the more typical severe weather reports associated with thunderstorms. However, spotting was still mainly carried out by trained individuals in either the military, aviation, or law enforcement fields of service. It was not until 1947 that volunteer spotting, as it exists today, was born.
After a series of vicious tornado outbreaks hit the state of Texas in 1947, the state placed special emphasis on volunteer spotting, and the local weather offices began to offer basic training classes to the general public. Spotting required the delivery of timely information so that warnings could be issued as quickly as possible, thus civilian landline phone calls and amateur radio operators provided the most efficient and fastest means of communication. While phone lines were reliable to a degree, a common problem was the loss of service when an approaching storm damaged phone lines in its path. This eventually led to amateur radio becoming the predominant means of communicat
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Weather maps show storms, air masses, and what?
A. currents
B. regions
C. fronts
D. patterns
Answer:
|
|
sciq-6935
|
multiple_choice
|
What celestial body in the solar system makes up most of its total mass?
|
[
"sun",
"Jupiter",
"Andromeda",
"earth"
] |
A
|
Relevant Documents:
Document 0:::
This is a list of most likely gravitationally rounded objects of the Solar System, which are objects that have a rounded, ellipsoidal shape due to their own gravity (but are not necessarily in hydrostatic equilibrium). Apart from the Sun itself, these objects qualify as planets according to common geophysical definitions of that term. The sizes of these objects range over three orders of magnitude in radius, from planetary-mass objects like dwarf planets and some moons to the planets and the Sun. This list does not include small Solar System bodies, but it does include a sample of possible planetary-mass objects whose shapes have yet to be determined. The Sun's orbital characteristics are listed in relation to the Galactic Center, while all other objects are listed in order of their distance from the Sun.
Star
The Sun is a G-type main-sequence star. It contains almost 99.9% of all the mass in the Solar System.
Planets
In 2006, the International Astronomical Union (IAU) defined a planet as a body in orbit around the Sun that was large enough to have achieved hydrostatic equilibrium and to have "cleared the neighbourhood around its orbit". The practical meaning of "cleared the neighborhood" is that a planet is comparatively massive enough for its gravitation to control the orbits of all objects in its vicinity. In practice, the term "hydrostatic equilibrium" is interpreted loosely. Mercury is round but not actually in hydrostatic equilibrium, but it is universally regarded as a planet nonetheless.
According to the IAU's explicit count, there are eight planets in the Solar System; four terrestrial planets (Mercury, Venus, Earth, and Mars) and four giant planets, which can be divided further into two gas giants (Jupiter and Saturn) and two ice giants (Uranus and Neptune). When excluding the Sun, the four giant planets account for more than 99% of the mass of the Solar System.
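Both figures can be checked against rounded textbook masses; a rough sketch (an assumption: small bodies and moons are neglected).

```python
# Sketch: mass fractions in the Solar System from rounded masses in kg.
sun = 1.989e30
planets = {
    "Mercury": 3.301e23, "Venus": 4.867e24, "Earth": 5.972e24,
    "Mars": 6.417e23, "Jupiter": 1.898e27, "Saturn": 5.683e26,
    "Uranus": 8.681e25, "Neptune": 1.024e26,
}
total = sun + sum(planets.values())
giants = sum(planets[p] for p in ("Jupiter", "Saturn", "Uranus", "Neptune"))

print(f"Sun: {sun / total:.4%}")                        # ~99.87% of the total
print(f"Giants: {giants / sum(planets.values()):.2%}")  # >99% excluding the Sun
```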
Dwarf planets
Dwarf planets are bodies orbiting the Sun that are massive and warm eno
Document 1:::
This article is a list of notable unsolved problems in astronomy. Some of these problems are theoretical, meaning that existing theories may be incapable of explaining certain observed phenomena or experimental results. Others are experimental, meaning that experiments necessary to test proposed theory or investigate a phenomenon in greater detail have not yet been performed. Some pertain to unique events or occurrences that have not repeated themselves and whose causes remain unclear.
Planetary astronomy
Our solar system
Orbiting bodies and rotation:
Are there any non-dwarf planets beyond Neptune?
Why do extreme trans-Neptunian objects have elongated orbits?
Rotation rate of Saturn:
Why does the magnetosphere of Saturn rotate at a rate close to that at which the planet's clouds rotate?
What is the rotation rate of Saturn's deep interior?
Satellite geomorphology:
What is the origin of the chain of high mountains that closely follows the equator of Saturn's moon, Iapetus?
Are the mountains the remnant of hot and fast-rotating young Iapetus?
Are the mountains the result of material (either from the rings of Saturn or its own ring) that over time collected upon the surface?
Extra-solar
How common are Solar System-like planetary systems? Some observed planetary systems contain Super-Earths and Hot Jupiters that orbit very close to their stars. Systems with Jupiter-like planets in Jupiter-like orbits appear to be rare. There are several possibilities why Jupiter-like orbits are rare, including that data is lacking or the grand tack hypothesis.
Stellar astronomy and astrophysics
Solar cycle:
How does the Sun generate its periodically reversing large-scale magnetic field?
How do other Sol-like stars generate their magnetic fields, and what are the similarities and differences between stellar activity cycles and that of the Sun?
What caused the Maunder Minimum and other grand minima, and how does the solar cycle recover from a minimum state?
Coronal heat
Document 2:::
In astronomy, planetary mass is a measure of the mass of a planet-like astronomical object. Within the Solar System, planets are usually measured in the astronomical system of units, where the unit of mass is the solar mass (), the mass of the Sun. In the study of extrasolar planets, the unit of measure is typically the mass of Jupiter () for large gas giant planets, and the mass of Earth () for smaller rocky terrestrial planets.
The mass of a planet within the Solar System is an adjusted parameter in the preparation of ephemerides. There are three variations of how planetary mass can be calculated:
If the planet has natural satellites, its mass can be calculated using Newton's law of universal gravitation to derive a generalization of Kepler's third law that includes the mass of the planet and its moon. This permitted an early measurement of Jupiter's mass, as measured in units of the solar mass (see the sketch after this list).
The mass of a planet can be inferred from its effect on the orbits of other planets. In 1931–1948, flawed applications of this method led to incorrect calculations of the mass of Pluto.
Data on a planet's gravitational influence, collected from the orbits of space probes, can be used. Examples include the Voyager probes to the outer planets and the MESSENGER spacecraft to Mercury.
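A sketch of the first method in the list above, assuming rounded reference values for Io's orbit and neglecting the moon's mass next to Jupiter's; the generalized Kepler relation then reduces to M ≈ 4π²a³ / (G T²).

```python
# Sketch: Jupiter's mass from Io's orbit via Kepler's third law.
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
a = 421_700e3       # Io's semi-major axis, m (rounded reference value)
T = 1.769 * 86_400  # Io's orbital period, s (rounded reference value)

m_jupiter = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"{m_jupiter:.3e} kg")  # ~1.90e27 kg, close to the accepted value
```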
Also, numerous other methods can give reasonable approximations. For instance, Varuna, a potential dwarf planet, rotates very quickly upon its axis, as does the dwarf planet Haumea. Haumea has to have a very high density in order not to be ripped apart by centrifugal forces. Through some calculations, one can place a limit on the object's density. Thus, if the object's size is known, a limit on the mass can be determined. See the links in the aforementioned articles for more details on this.
Choice of units
The choice of solar mass, , as the basic unit for planetary mass comes directly from the calculations used to determine planetary mass. In the most precise case, that of the Earth itself, the mass is known in term
Document 3:::
The Sweden Solar System is the world's largest permanent scale model of the Solar System. The Sun is represented by the Avicii Arena in Stockholm, the second-largest hemispherical building in the world. The inner planets can also be found in Stockholm but the outer planets are situated northward in other cities along the Baltic Sea. The system was started by Nils Brenning, professor at the Royal Institute of Technology in Stockholm, and Gösta Gahm, professor at the Stockholm University. The model represents the Solar System on the scale of 1:20 million.
The system
The bodies represented in this model include the Sun, the planets (and some of their moons), dwarf planets and many types of small bodies (comets, asteroids, trans-Neptunians, etc.), as well as some abstract concepts (like the Termination Shock zone). Because of the existence of many small bodies in the real Solar System, the model can always be extended further.
The Sun is represented by the Avicii Arena (Globen), Stockholm, which is the second-largest hemispherical building in the world, in diameter. To respect the scale, the globe represents the Sun including its corona.
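Because the model uses a single fixed scale of 1:20 million, any real size or distance can be converted by one division. A minimal Python sketch (the real diameters and the Earth-Sun distance are standard reference values; the resulting model figures are approximations, not the installation's official dimensions):

SCALE = 20_000_000  # the model's 1:20 million scale factor

real_sizes_m = {
    "Sun diameter":       1.391e9,   # m
    "Mercury diameter":   4.8794e6,  # m
    "Earth diameter":     1.2742e7,  # m
    "Earth-Sun distance": 1.496e11,  # m (1 astronomical unit)
}

for name, metres in real_sizes_m.items():
    print(f"{name}: {metres / SCALE:,.2f} m in the model")
# Sun diameter scales to ~69.6 m; the arena's larger dome (about 110 m
# across) also represents the corona, as noted above. The Earth-Sun
# distance scales to ~7,480 m, i.e. roughly 7.5 km from the Globe.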
Inner planets
Mercury ( in diameter) is placed at Stockholm City Museum, from the Globe. The small metallic sphere was built by the artist Peter Varhelyi.
Venus ( in diameter) is placed at Vetenskapens Hus at KTH (Royal Institute of Technology), from the Globe. The previous model, made by the United States artist Daniel Oberti, was inaugurated on 8 June 2004, during a Venus transit and placed at KTH. It fell and shattered around 11 June 2011. Due to construction work at the location of the previous model of Venus it was removed and as of October 2012 cannot be seen. The current model now at Vetenskapens Hus was previously located at the Observatory Museum in Stockholm (now closed).
Earth ( in diameter) is located at the Swedish Museum of Natural History (Cosmonova), from the Globe. Satellite images of the Earth are exhibited
Document 4:::
In astronomy, minimum mass is the lower-bound calculated mass of observed objects such as planets, stars and binary systems, nebulae, and black holes.
Minimum mass is a widely cited statistic for extrasolar planets detected by the radial velocity method or Doppler spectroscopy, and is determined using the binary mass function. This method reveals planets by measuring changes in the movement of stars in the line-of-sight, so the real orbital inclinations and true masses of the planets are generally unknown. This is a result of sin i degeneracy.
If the inclination i can be determined, the true mass can be obtained from the calculated minimum mass using the following relationship:

M_true = M_min / sin(i)
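A one-line helper makes the correction concrete (a minimal sketch; the example inclination of 60 degrees is an arbitrary assumption):

import math

def true_mass(m_min, inclination_deg):
    """True mass from a radial-velocity minimum mass: M = M_min / sin(i)."""
    return m_min / math.sin(math.radians(inclination_deg))

# For a minimum mass of 1 Jupiter mass at an assumed inclination of 60 degrees:
print(f"{true_mass(1.0, 60.0):.3f} Jupiter masses")  # ~1.155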
Exoplanets
Orientation of the transit to Earth
Most stars will not have their planets aligned and oriented so that they eclipse over the center of the star, giving the viewer on Earth a perfect transit. For this reason, when observing a star's wobble we are often only able to extrapolate a minimum mass: we do not know the orbital inclination, and can therefore measure only the component of the star's motion along the line of sight.
For orbiting bodies in extrasolar planetary systems, an inclination of 0° or 180° corresponds to a face-on orbit (which cannot be observed by radial velocity), whereas an inclination of 90° corresponds to an edge-on orbit (for which the true mass equals the minimum mass).
Planets with orbits highly inclined to the line of sight from Earth produce smaller visible wobbles, and are thus more difficult to detect. One of the advantages of the radial velocity method is that eccentricity of the planet's orbit can be measured directly. One of the main disadvantages of the radial-velocity method is that it can only estimate a planet's minimum mass (M sin i). This is called sin i degeneracy. The posterior distribution of the inclination angle i depends on the true mass distribution of the planets.
Radial velocity method
However, when there are multipl
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What celestial body in the solar system makes up most of its total mass?
A. sun
B. Jupiter
C. Andromeda
D. earth
Answer:
|
|
sciq-5454
|
multiple_choice
|
Biochemical reactions are optimal at physiological temperatures because many what lose function at lower and higher temperatures?
|
[
"carbohydrates",
"enzymes",
"neurons",
"hormones"
] |
B
|
Relevant Documents:
Document 0:::
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles a
Document 1:::
This is a list of articles that describe particular biomolecules or types of biomolecules.
A
For substances with an A- or α- prefix such as
α-amylase, please see the parent page (in this case Amylase).
A23187 (Calcimycin, Calcium Ionophore)
Abamectine
Abietic acid
Acetic acid
Acetylcholine
Actin
Actinomycin D
Adenine
Adenosine
Adenosine diphosphate (ADP)
Adenosine monophosphate (AMP)
Adenosine triphosphate (ATP)
Adenylate cyclase
Adiponectin
Adonitol
Adrenaline, epinephrine
Adrenocorticotropic hormone (ACTH)
Aequorin
Aflatoxin
Agar
Alamethicin
Alanine
Albumins
Aldosterone
Aleurone
Alpha-amanitin
Alpha-MSH (Melaninocyte stimulating hormone)
Allantoin
Allethrin
α-Amanitin, see Alpha-amanitin
Amino acid
Amylase (also see α-amylase)
Anabolic steroid
Anandamide (ANA)
Androgen
Anethole
Angiotensinogen
Anisomycin
Antidiuretic hormone (ADH)
Anti-Müllerian hormone (AMH)
Arabinose
Arginine
Argonaute
Ascomycin
Ascorbic acid (vitamin C)
Asparagine
Aspartic acid
Asymmetric dimethylarginine
ATP synthase
Atrial-natriuretic peptide (ANP)
Auxin
Avidin
Azadirachtin A – C35H44O16
B
Bacteriocin
Beauvericin
beta-Hydroxy beta-methylbutyric acid
beta-Hydroxybutyric acid
Bicuculline
Bilirubin
Biopolymer
Biotin (Vitamin H)
Brefeldin A
Brassinolide
Brucine
Butyric acid
C
Document 2:::
An energy budget is a balance sheet of energy income against expenditure. It is studied in the field of Energetics which deals with the study of energy transfer and transformation from one form to another. Calorie is the basic unit of measurement. An organism in a laboratory experiment is an open thermodynamic system, exchanging energy with its surroundings in three ways - heat, work and the potential energy of biochemical compounds.
Organisms use ingested food resources (C=consumption) as building blocks in the synthesis of tissues (P=production) and as fuel in the metabolic process that power this synthesis and other physiological processes (R=respiratory loss). Some of the resources are lost as waste products (F=faecal loss, U=urinary loss). All these aspects of metabolism can be represented in energy units. The basic model of energy budget may be shown as:
P = C - R - U - F or
P = C - (R + U + F) or
C = P + R + U + F
All the aspects of metabolism can be represented in energy units (e.g. joules (J); 1 calorie = 4.2 J).
Energy used for metabolism will be
R = C - (F + U + P)
Energy used in the maintenance will be
R + F + U = C - P
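The bookkeeping above is plain arithmetic; a minimal Python sketch with illustrative numbers (all values in kJ are assumptions, not measurements):

C = 100.0  # consumption
F = 20.0   # faecal loss
U = 5.0    # urinary loss
P = 30.0   # production (new tissue)

R = C - (F + U + P)      # respiratory loss, from R = C - (F + U + P)
maintenance = R + F + U  # energy used in maintenance, which equals C - P

print(R)            # 45.0
print(maintenance)  # 70.0, the same as C - P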
Endothermy and ectothermy
Energy budget allocation varies for endotherms and ectotherms. Ectotherms rely on the environment as a heat source while endotherms maintain their body temperature through the regulation of metabolic processes. The heat produced in association with metabolic processes facilitates the active lifestyles of endotherms and their ability to travel far distances over a range of temperatures in the search for food. Ectotherms are limited by the ambient temperature of the environment around them but the lack of substantial metabolic heat production accounts for an energetically inexpensive metabolic rate. The energy demands for ectotherms are generally one tenth of that required for endotherms.
Document 3:::
Isothermal microcalorimetry (IMC) is a laboratory method for real-time monitoring and dynamic analysis of chemical, physical and biological processes. Over a period of hours or days, IMC determines the onset, rate, extent and energetics of such processes for specimens in small ampoules (e.g. 3–20 ml) at a constant set temperature (c. 15 °C–150 °C).
IMC accomplishes this dynamic analysis by measuring and recording vs. elapsed time the net rate of heat flow (μJ/s = μW) to or from the specimen ampoule, and the cumulative amount of heat (J) consumed or produced.
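Because the cumulative heat is simply the time integral of the recorded heat-flow trace, it can be recovered numerically from the samples. A minimal Python sketch, assuming a hypothetical trace sampled once per minute (the readings are invented for illustration):

# Hypothetical heat-flow readings in microwatts, one sample per minute.
power_uW = [0.0, 5.0, 12.0, 20.0, 18.0, 15.0]
dt_s = 60.0  # sampling interval in seconds

# Trapezoidal integration of power over time gives cumulative heat.
heat_uJ = sum(0.5 * (p0 + p1) * dt_s for p0, p1 in zip(power_uW, power_uW[1:]))
print(f"cumulative heat = {heat_uJ:.0f} uJ = {heat_uJ / 1e6:.5f} J")  # 3750 uJ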
IMC is a powerful and versatile analytical tool for four closely related reasons:
All chemical and physical processes are either exothermic or endothermic—produce or consume heat.
The rate of heat flow is proportional to the rate of the process taking place.
IMC is sensitive enough to detect and follow either slow processes (reactions proceeding at a few % per year) in a few grams of material, or processes which generate minuscule amounts of heat (e.g. metabolism of a few thousand living cells).
IMC instruments generally have a huge dynamic range—heat flows as low as ca. 1 μW and as high as ca. 50,000 μW can be measured by the same instrument.
The IMC method of studying rates of processes is thus broadly applicable, provides real-time continuous data, and is sensitive. The measurement is simple to make, takes place unattended and is non-interfering (e.g. no fluorescent or radioactive markers are needed).
However, there are two main caveats that must be heeded in use of IMC:
Missed data: If externally prepared specimen ampoules are used, it takes ca. 40 minutes to slowly introduce an ampoule into the instrument without significant disturbance of the set temperature in the measurement module. Thus any processes taking place during this time are not monitored.
Extraneous data: IMC records the aggregate net heat flow produced or consumed by all processes taking place within an ampoule. Therefore, in order
Document 4:::
Bioenergetics is a field in biochemistry and cell biology that concerns energy flow through living systems. This is an active area of biological research that includes the study of the transformation of energy in living organisms and the study of thousands of different cellular processes such as cellular respiration and the many other metabolic and enzymatic processes that lead to production and utilization of energy in forms such as adenosine triphosphate (ATP) molecules. That is, the goal of bioenergetics is to describe how living organisms acquire and transform energy in order to perform biological work. The study of metabolic pathways is thus essential to bioenergetics.
Overview
Bioenergetics is the part of biochemistry concerned with the energy involved in making and breaking of chemical bonds in the molecules found in biological organisms. It can also be defined as the study of energy relationships and energy transformations and transductions in living organisms. The ability to harness energy from a variety of metabolic pathways is a property of all living organisms. Growth, development, anabolism and catabolism are some of the central processes in the study of biological organisms, because the role of energy is fundamental to such biological processes. Life is dependent on energy transformations; living organisms survive because of exchange of energy between living tissues/ cells and the outside environment. Some organisms, such as autotrophs, can acquire energy from sunlight (through photosynthesis) without needing to consume nutrients and break them down. Other organisms, like heterotrophs, must intake nutrients from food to be able to sustain energy by breaking down chemical bonds in nutrients during metabolic processes such as glycolysis and the citric acid cycle. Importantly, as a direct consequence of the First Law of Thermodynamics, autotrophs and heterotrophs participate in a universal metabolic network—by eating autotrophs (plants), heterotrophs ha
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Biochemical reactions are optimal at physiological temperatures because many what lose function at lower and higher temperatures?
A. carbohydrates
B. enzymes
C. neurons
D. hormones
Answer:
|
|
sciq-5920
|
multiple_choice
|
In what part of the lungs is pulmonary gas exchanged?
|
[
"trachea",
"bronchi",
"alveoli",
"bronchioles"
] |
C
|
Relevant Documents:
Document 0:::
Pulmonary pathology is the subspecialty of surgical pathology which deals with the diagnosis and characterization of neoplastic and non-neoplastic diseases of the lungs and thoracic pleura. Diagnostic specimens are often obtained via bronchoscopic transbronchial biopsy, CT-guided percutaneous biopsy, or video-assisted thoracic surgery (VATS). The diagnosis of inflammatory or fibrotic diseases of the lungs is considered by many pathologists to be particularly challenging.
Anatomical pathology
Document 1:::
Lung receptors sense irritation or inflammation in the bronchi and alveoli.
Document 2:::
The control of ventilation is the physiological mechanisms involved in the control of breathing, which is the movement of air into and out of the lungs. Ventilation facilitates respiration. Respiration refers to the utilization of oxygen and balancing of carbon dioxide by the body as a whole, or by individual cells in cellular respiration.
The most important function of breathing is the supplying of oxygen to the body and balancing of the carbon dioxide levels. Under most conditions, the partial pressure of carbon dioxide (PCO2), or concentration of carbon dioxide, controls the respiratory rate.
The peripheral chemoreceptors that detect changes in the levels of oxygen and carbon dioxide are located in the arterial aortic bodies and the carotid bodies. Central chemoreceptors are primarily sensitive to changes in the pH of the blood, (resulting from changes in the levels of carbon dioxide) and they are located on the medulla oblongata near to the medullar respiratory groups of the respiratory center.
Information from the peripheral chemoreceptors is conveyed along nerves to the respiratory groups of the respiratory center. There are four respiratory groups, two in the medulla and two in the pons. The two groups in the pons are known as the pontine respiratory group.
Dorsal respiratory group – in the medulla
Ventral respiratory group – in the medulla
Pneumotaxic center – various nuclei of the pons
Apneustic center – nucleus of the pons
From the respiratory center, the muscles of respiration, in particular the diaphragm, are activated to cause air to move in and out of the lungs.
Control of respiratory rhythm
Ventilatory pattern
Breathing is normally an unconscious, involuntary, automatic process. The pattern of motor stimuli during breathing can be divided into an inhalation stage and an exhalation stage. Inhalation shows a sudden, ramped increase in motor discharge to the respiratory muscles (and the pharyngeal constrictor muscles). Before the end of inh
Document 3:::
Speech science refers to the study of production, transmission and perception of speech. Speech science involves anatomy, in particular the anatomy of the oro-facial region and neuroanatomy, physiology, and acoustics.
Speech production
The production of speech is a highly complex motor task that involves approximately 100 orofacial, laryngeal, pharyngeal, and respiratory muscles. Precise and expeditious timing of these muscles is essential for the production of temporally complex speech sounds, which are characterized by transitions as short as 10 ms between frequency bands and an average speaking rate of approximately 15 sounds per second. Speech production requires airflow from the lungs (respiration) to be phonated through the vocal folds of the larynx (phonation) and resonated in the vocal cavities shaped by the jaw, soft palate, lips, tongue and other articulators (articulation).
Respiration
Respiration is the physical process of gas exchange between an organism and its environment involving four steps (ventilation, distribution, perfusion and diffusion) and two processes (inspiration and expiration). Respiration can be described as the mechanical process of air flowing into and out of the lungs on the principle of Boyle's law, stating that, as the volume of a container increases, the air pressure will decrease. This relatively negative pressure will cause air to enter the container until the pressure is equalized. During inspiration of air, the diaphragm contracts and the lungs expand drawn by pleurae through surface tension and negative pressure. When the lungs expand, air pressure becomes negative compared to atmospheric pressure and air will flow from the area of higher pressure to fill the lungs. Forced inspiration for speech uses accessory muscles to elevate the rib cage and enlarge the thoracic cavity in the vertical and lateral dimensions. During forced expiration for speech, muscles of the trunk and abdomen reduce the size of the thoracic cavity by
Document 4:::
Exhalation (or expiration) is the flow of the breath out of an organism. In animals, it is the movement of air from the lungs out of the airways, to the external environment during breathing.
This happens due to elastic properties of the lungs, as well as the internal intercostal muscles which lower the rib cage and decrease thoracic volume. As the thoracic diaphragm relaxes during exhalation it causes the tissue it has depressed to rise superiorly and put pressure on the lungs to expel the air. During forced exhalation, as when blowing out a candle, expiratory muscles including the abdominal muscles and internal intercostal muscles generate abdominal and thoracic pressure, which forces air out of the lungs.
Exhaled air is 4% carbon dioxide, a waste product of cellular respiration during the production of energy, which is stored as ATP. Exhalation has a complementary relationship to inhalation which together make up the respiratory cycle of a breath.
Exhalation and gas exchange
The main reason for exhalation is to rid the body of carbon dioxide, which is the waste product of gas exchange in humans. Air is brought into the body through inhalation. During this process air is taken in by the lungs. Diffusion in the alveoli allows for the exchange of O2 into the pulmonary capillaries and the removal of CO2 and other gases from the pulmonary capillaries to be exhaled. In order for the lungs to expel air the diaphragm relaxes, which pushes up on the lungs. The air then flows through the trachea then through the larynx and pharynx to the nasal cavity and oral cavity where it is expelled out of the body. Exhalation takes longer than inhalation and it is believed to facilitate better exchange of gases. Parts of the nervous system help to regulate respiration in humans. The exhaled air is not just carbon dioxide; it contains a mixture of other gases. Human breath contains volatile organic compounds (VOCs). These compounds consist of methanol, isoprene, acetone,
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In what part of the lungs is pulmonary gas exchanged?
A. trachea
B. bronchi
C. alveoli
D. bronchioles
Answer:
|
|
sciq-1479
|
multiple_choice
|
In the body, oxygen is used by cells of the body’s tissues and carbon dioxide is produced as what?
|
[
"food",
"fuel",
"waste product",
"oxygen"
] |
C
|
Relevant Documents:
Document 0:::
Cellular waste products are formed as a by-product of cellular respiration, a series of processes and reactions that generate energy for the cell in the form of ATP. Aerobic respiration and anaerobic respiration are two examples of cellular respiration pathways that create cellular waste products.
Each pathway generates different waste products.
Aerobic respiration
When in the presence of oxygen, cells use aerobic respiration to obtain energy from glucose molecules.
Simplified Theoretical Reaction: C6H12O6 (aq) + 6O2 (g) → 6CO2 (g) + 6H2O (l) + ~ 30ATP
Cells undergoing aerobic respiration produce 6 molecules of carbon dioxide, 6 molecules of water, and up to 30 molecules of ATP (adenosine triphosphate), which is directly used to produce energy, from each molecule of glucose in the presence of surplus oxygen.
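A quick stoichiometric sanity check of the simplified reaction above (the molar masses in g/mol are approximate reference values; small rounding differences are expected):

molar_mass = {"glucose": 180.16, "O2": 32.00, "CO2": 44.01, "H2O": 18.02}

reactants = molar_mass["glucose"] + 6 * molar_mass["O2"]  # C6H12O6 + 6 O2
products = 6 * molar_mass["CO2"] + 6 * molar_mass["H2O"]  # 6 CO2 + 6 H2O

print(reactants, products)  # ~372.2 vs ~372.2 g/mol, so the mass balances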
In aerobic respiration, oxygen serves as the recipient of electrons from the electron transport chain. Aerobic respiration is thus very efficient because oxygen is a strong oxidant.
Aerobic respiration proceeds in a series of steps, which also increases efficiency - since glucose is broken down gradually and ATP is produced as needed, less energy is wasted as heat. This strategy results in the waste products H2O and CO2 being formed in different amounts at different phases of respiration. CO2 is formed in Pyruvate decarboxylation, H2O is formed in oxidative phosphorylation, and both are formed in the citric acid cycle.
The simple nature of the final products also indicates the efficiency of this method of respiration. All of the energy stored in the carbon-carbon bonds of glucose is released, leaving CO2 and H2O. Although there is energy stored in the bonds of these molecules, this energy is not easily accessible by the cell. All usable energy is efficiently extracted.
Anaerobic respiration
Anaerobic respiration is done by aerobic organisms when there is not sufficient oxygen in a cell to undergo aerobic respiration as well as by cells called anaerobes that
Document 1:::
Cellular respiration is the process by which biological fuels are oxidized in the presence of an inorganic electron acceptor, such as oxygen, to drive the bulk production of adenosine triphosphate (ATP), which contains energy. Cellular respiration may be described as a set of metabolic reactions and processes that take place in the cells of organisms to convert chemical energy from nutrients into ATP, and then release waste products.
Cellular respiration is a vital process that happens in the cells of living organisms, including humans, plants, and animals. It's how cells produce energy to power all the activities necessary for life.
The reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, producing large amounts of energy (ATP). Respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. The overall reaction occurs in a series of biochemical steps, some of which are redox reactions. Although cellular respiration is technically a combustion reaction, it is an unusual one because of the slow, controlled release of energy from the series of reactions.
Nutrients that are commonly used by animal and plant cells in respiration include sugar, amino acids and fatty acids, and the most common oxidizing agent is molecular oxygen (O2). The chemical energy stored in ATP (the bond of its third phosphate group to the rest of the molecule can be broken allowing more stable products to form, thereby releasing energy for use by the cell) can then be used to drive processes requiring energy, including biosynthesis, locomotion or transportation of molecules across cell membranes.
Aerobic respiration
Aerobic respiration requires oxygen (O2) in order to create ATP. Although carbohydrates, fats and proteins are consumed as reactants, aerobic respiration is the preferred method of pyruvate production in glycolysis, and requires pyruvate to be transported to the mitochondria in order to be fully oxidized by the c
Document 2:::
The metabolome refers to the complete set of small-molecule chemicals found within a biological sample. The biological sample can be a cell, a cellular organelle, an organ, a tissue, a tissue extract, a biofluid or an entire organism. The small molecule chemicals found in a given metabolome may include both endogenous metabolites that are naturally produced by an organism (such as amino acids, organic acids, nucleic acids, fatty acids, amines, sugars, vitamins, co-factors, pigments, antibiotics, etc.) as well as exogenous chemicals (such as drugs, environmental contaminants, food additives, toxins and other xenobiotics) that are not naturally produced by an organism.
In other words, there is both an endogenous metabolome and an exogenous metabolome. The endogenous metabolome can be further subdivided to include a "primary" and a "secondary" metabolome (particularly when referring to plant or microbial metabolomes). A primary metabolite is directly involved in the normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has important ecological function. Secondary metabolites may include pigments, antibiotics or waste products derived from partially metabolized xenobiotics. The study of the metabolome is called metabolomics.
Origins
The word metabolome appears to be a blending of the words "metabolite" and "chromosome". It was constructed to imply that metabolites are indirectly encoded by genes or act on genes and gene products. The term "metabolome" was first used in 1998 and was likely coined to match with existing biological terms referring to the complete set of genes (the genome), the complete set of proteins (the proteome) and the complete set of transcripts (the transcriptome). The first book on metabolomics was published in 2003. The first journal dedicated to metabolomics (titled simply "Metabolomics") was launched in 2005 and is currently edited by Prof. Roy Goodacre. Some of the m
Document 3:::
In biology, the molecules that an organism uses as its carbon source for generating biomass are referred to as "carbon sources". Carbon sources can be organic or inorganic. Heterotrophs must use organic molecules as both a source of carbon and energy, in contrast to autotrophs, which can use inorganic materials as a source of carbon and an abiotic source of energy, such as inorganic chemical energy (chemolithotrophs) or light (photoautotrophs).
The carbon cycle, which begins with an inorganic carbon source such as carbon dioxide and progresses through the process of carbon fixation, includes the biological use of carbon as one of its components.[1]
Types of organism by carbon source
Heterotrophs
Autotrophs
Document 4:::
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In the body, oxygen is used by cells of the body’s tissues and carbon dioxide is produced as what?
A. food
B. fuel
C. waste product
D. oxygen
Answer:
|
|
sciq-5943
|
multiple_choice
|
At what time of the year can tornadoes occur?
|
[
"spring",
"summer",
"winter",
"any"
] |
D
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college careers at a public or private two-year school, and with only a very small proportion of students who start college at a two-year school going on to earn STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 1:::
This is a list of meteorology topics. The terms relate to meteorology, the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting. (see also: List of meteorological phenomena)
A
advection
aeroacoustics
aerobiology
aerography (meteorology)
aerology
air parcel (in meteorology)
air quality index (AQI)
airshed (in meteorology)
American Geophysical Union (AGU)
American Meteorological Society (AMS)
anabatic wind
anemometer
annular hurricane
anticyclone (in meteorology)
apparent wind
Atlantic Oceanographic and Meteorological Laboratory (AOML)
Atlantic hurricane season
atmometer
atmosphere
Atmospheric Model Intercomparison Project (AMIP)
Atmospheric Radiation Measurement (ARM)
(atmospheric boundary layer [ABL]) planetary boundary layer (PBL)
atmospheric chemistry
atmospheric circulation
atmospheric convection
atmospheric dispersion modeling
atmospheric electricity
atmospheric icing
atmospheric physics
atmospheric pressure
atmospheric sciences
atmospheric stratification
atmospheric thermodynamics
atmospheric window (see under Threats)
B
ball lightning
balloon (aircraft)
baroclinity
barotropity
barometer ("to measure atmospheric pressure")
berg wind
biometeorology
blizzard
bomb (meteorology)
buoyancy
Bureau of Meteorology (in Australia)
C
Canada Weather Extremes
Canadian Hurricane Centre (CHC)
Cape Verde-type hurricane
capping inversion (in meteorology) (see "severe thunderstorms" in paragraph 5)
carbon cycle
carbon fixation
carbon flux
carbon monoxide (see under Atmospheric presence)
ceiling balloon ("to determine the height of the base of clouds above ground level")
ceilometer ("to determine the height of a cloud base")
celestial coordinate system
celestial equator
celestial horizon (rational horizon)
celestial navigation (astronavigation)
celestial pole
Celsius
Center for Analysis and Prediction of Storms (CAPS) (in Oklahoma in the US)
Center for the Study o
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
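For the record, the correct choice is "decreases", which a short numerical check confirms (illustrative values; gamma = 1.4, appropriate for a diatomic ideal gas, is assumed):

# Reversible adiabatic process in an ideal gas: T * V**(gamma - 1) is constant.
gamma = 1.4
T1, V1 = 300.0, 1.0  # initial temperature (K) and volume (arbitrary units)
V2 = 2.0             # the gas expands to twice its initial volume

T2 = T1 * (V1 / V2) ** (gamma - 1)
print(f"T2 = {T2:.1f} K")  # ~227.4 K < 300 K, so the temperature decreases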
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
The following outline is provided as an overview of and topical guide to the field of Meteorology.
Meteorology – the interdisciplinary, scientific study of the Earth's atmosphere with the primary focus being to understand, explain, and forecast weather events. Meteorology is applied to and employed by a wide variety of diverse fields, including the military, energy production, transport, agriculture, and construction.
Essence of meteorology
Meteorology
Climate – the average and variations of weather in a region over long periods of time.
Meteorology – the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting (in contrast with climatology).
Weather – the set of all the phenomena in a given atmosphere at a given time.
Branches of meteorology
Microscale meteorology – the study of atmospheric phenomena about 1 km or less, smaller than mesoscale, including small and generally fleeting cloud "puffs" and other small cloud features
Mesoscale meteorology – the study of weather systems about 5 kilometers to several hundred kilometers, smaller than synoptic scale systems but larger than microscale and storm-scale cumulus systems, such as sea breezes, squall lines, and mesoscale convective complexes
Synoptic scale meteorology – is a horizontal length scale of the order of 1000 kilometres (about 620 miles) or more
Methods in meteorology
Surface weather analysis – a special type of weather map that provides a view of weather elements over a geographical area at a specified time based on information from ground-based weather stations
Weather forecasting
Weather forecasting – the application of science and technology to predict the state of the atmosphere for a future time and a given location
Data collection
Pilot Reports
Weather maps
Weather map
Surface weather analysis
Forecasts and reporting of
Atmospheric pressure
Dew point
High-pressure area
Ice
Black ice
Frost
Low-pressure area
Precipitation
Document 4:::
FSU Young Scholars Program (YSP) is a six-week residential science and mathematics summer program for 40 high school students from Florida, USA, with significant potential for careers in the fields of science, technology, engineering and mathematics. The program was developed in 1983 and is currently administered by the Office of Science Teaching Activities in the College of Arts and Sciences at Florida State University (FSU).
Academic program
Each young scholar attends three courses in the fields of mathematics, science and computer programming. The courses are designed specifically for this program — they are neither high school nor college courses.
Research
Each student who attends YSP is assigned an independent research project (IRP) based on his or her interests. Students join the research teams of FSU professors, participating in scientific research for two days each week. The fields of study available include robotics, molecular biology, chemistry, geology, physics and zoology. At the conclusion of the program, students present their projects in an academic conference, documenting their findings and explaining their projects to both students and faculty.
Selection process
YSP admits students who have completed the eleventh grade in a Florida public or private high school. A few exceptionally qualified and mature tenth graders have been selected in past years, though this is quite rare.
All applicants must have completed pre-calculus and maintain at least a 3.0 unweighted GPA to be considered for acceptance. Additionally, students must have scored at the 90th percentile or better in science or mathematics on a nationally standardized exam, such as the SAT, PSAT, ACT or PLAN. Students are required to submit an application package, including high school transcripts and a letter of recommendation.
Selection is extremely competitive, as there are typically over 200 highly qualified applicants competing for only 40 positions. The majority of past participant
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
At what time of the year can tornadoes occur?
A. spring
B. summer
C. winter
D. any
Answer:
|
|
sciq-1966
|
multiple_choice
|
What type of structures evolved to do the same job by unrelated organisms?
|
[
"analogous structures",
"symmetrical structures",
"dioxide structures",
"primal structures"
] |
A
|
Relevant Documents:
Document 0:::
Origination of Organismal Form: Beyond the Gene in Developmental and Evolutionary Biology is an anthology published in 2003 edited by Gerd B. Müller and Stuart A. Newman. The book is the outcome of the 4th Altenberg Workshop in Theoretical Biology on "Origins of Organismal Form: Beyond the Gene Paradigm", hosted in 1999 at the Konrad Lorenz Institute for Evolution and Cognition Research. It has been cited over 200 times and has a major influence on extended evolutionary synthesis research.
Description of the book
The book explores the multiple factors that may have been responsible for the origination of biological form in multicellular life. These biological forms include limbs, segmented structures, and different body symmetries.
It explores why the basic body plans of nearly all multicellular life arose in the relatively short time span of the Cambrian Explosion. The authors focus on physical factors (structuralism) other than changes in an organism's genome that may have caused multicellular life to form new structures. These physical factors include differential adhesion of cells and feedback oscillations between cells.
The book also presents recent experimental results that examine how the same embryonic tissues or tumor cells can be coaxed into forming dramatically different structures under different environmental conditions.
One of the goals of the book is to stimulate research that may lead to a more comprehensive theory of evolution. It is frequently cited as foundational to the development of the extended evolutionary synthesis.
List of contributions
Origination of Organismal Form: The Forgotten Cause in Evolutionary Theory, Gerd B. Müller and Stuart A. Newman
The Cambrian "Explosion" of Metazoans, Simon Conway Morris
Convergence and Homoplasy in the Evolution of Organismal Form, Pat Willmer
Homology:The Evolution of Morphological Organization, Gerd B. Müller
Only Details Determine, Roy J. Britten
The Reactive Genome, Scott F. Gilbert
Tis
Document 1:::
Gerd B. Müller (born 1953) is an Austrian biologist who is emeritus professor at the University of Vienna where he was the head of the Department of Theoretical Biology in the Center for Organismal Systems Biology. His research interests focus on vertebrate limb development, evolutionary novelties, evo-devo theory, and the Extended Evolutionary Synthesis. He is also concerned with the development of 3D based imaging tools in developmental biology.
Biography
Müller received an M.D. in 1979 and a Ph.D. in zoology in 1985, both from the University of Vienna. He has been a sabbatical fellow at the Department of Developmental Biology, Dalhousie University, Canada, (1988) and a visiting scholar at the Museum of Comparative Zoology, Harvard University, and received his Habilitation in Anatomy and Embryology in 1989. He is a founding member of the Konrad Lorenz Institute for Evolution and Cognition Research, Klosterneuburg, Austria, of which he has been President since 1997. Müller is on the editorial boards of several scientific journals, including Biological Theory where he serves as an associate editor. He is editor-in-chief of the Vienna Series in Theoretical Biology, a book series devoted to theoretical developments in the biosciences, published by MIT Press.
Scientific contribution
Müller has published on developmental imaging, vertebrate limb development, the origins of phenotypic novelty, EvoDevo theory, and evolutionary theory.
With the cell and developmental biologist Stuart Newman, Müller co-edited the book Origination of Organismal Form (MIT Press, 2003). This book on evolutionary developmental biology is a collection of papers on generative mechanisms that were plausibly involved in the origination of disparate body forms during early periods of organismal life. Particular attention is given to epigenetic factors, such as physical determinants and environmental parameters, that may have led to the spontaneous emergence of bodyplans and organ forms during a
Document 2:::
In biology, homology is similarity due to shared ancestry between a pair of structures or genes in different taxa. A common example of homologous structures is the forelimbs of vertebrates, where the wings of bats and birds, the arms of primates, the front flippers of whales, and the forelegs of four-legged vertebrates like dogs and crocodiles are all derived from the same ancestral tetrapod structure. Evolutionary biology explains homologous structures adapted to different purposes as the result of descent with modification from a common ancestor. The term was first applied to biology in a non-evolutionary context by the anatomist Richard Owen in 1843. Homology was later explained by Charles Darwin's theory of evolution in 1859, but had been observed before this, from Aristotle onwards, and it was explicitly analysed by Pierre Belon in 1555.
In developmental biology, organs that developed in the embryo in the same manner and from similar origins, such as from matching primordia in successive segments of the same animal, are serially homologous. Examples include the legs of a centipede, the maxillary palp and labial palp of an insect, and the spinous processes of successive vertebrae in a vertebral column. Male and female reproductive organs are homologous if they develop from the same embryonic tissue, as do the ovaries and testicles of mammals including humans.
Sequence homology between protein or DNA sequences is similarly defined in terms of shared ancestry. Two segments of DNA can have shared ancestry because of either a speciation event (orthologs) or a duplication event (paralogs). Homology among proteins or DNA is inferred from their sequence similarity. Significant similarity is strong evidence that two sequences are related by divergent evolution from a common ancestor. Alignments of multiple sequences are used to discover the homologous regions.
Homology remains controversial in animal behaviour, but there is suggestive evidence that, for example, dom
Document 3:::
Symmetry breaking in biology is the process by which uniformity is broken, or the number of viewpoints under which a system appears invariant is reduced, to generate a more structured and improbable state. Symmetry breaking is the event where symmetry along a particular axis is lost to establish a polarity. Polarity is the ability of a biological system to distinguish poles along an axis; this ability is important because it is the first step to building complexity. For example, during organismal development, one of the first steps for the embryo is to distinguish its dorsal-ventral axis. The symmetry-breaking event that occurs here will determine which end of this axis will be the ventral side, and which end will be the dorsal side. Once this distinction is made, all the structures that are located along this axis can develop at the proper location. As an example, during human development, the embryo needs to establish which side is ‘back’ and which is ‘front’ before complex structures, such as the spine and lungs, can develop in the right location (where the lungs are placed ‘in front’ of the spine). This relationship between symmetry breaking and complexity was articulated by P.W. Anderson. He speculated that increasing levels of broken symmetry in many-body systems correlate with increasing complexity and functional specialization. From a biological perspective, the more complex an organism is, the more symmetry-breaking events can be found.
The importance of symmetry breaking in biology is also reflected in the fact that it's found at all scales. Symmetry breaking can be found at the macromolecular level, at the subcellular level and even at the tissues and organ level. It's also interesting to note that most asymmetry on a higher scale is a reflection of symmetry breaking on a lower scale. Cells first need to establish a polarity through a symmetry-breaking event before tissues and organs themselves can be polar. For example, one model proposes that left-right bo
Document 4:::
Segmentation in biology is the division of some animal and plant body plans into a linear series of repetitive segments that may or may not be interconnected to each other. This article focuses on the segmentation of animal body plans, specifically using the examples of the taxa Arthropoda, Chordata, and Annelida. These three groups form segments by using a "growth zone" to direct and define the segments. While all three have a generally segmented body plan and use a growth zone, they use different mechanisms for generating this patterning. Even within these groups, different organisms have different mechanisms for segmenting the body. Segmentation of the body plan is important for allowing free movement and development of certain body parts. It also allows for regeneration in specific individuals.
Definition
Segmentation is a difficult process to satisfactorily define. Many taxa (for example the molluscs) have some form of serial repetition in their units but are not conventionally thought of as segmented. Segmented animals are those considered to have organs that were repeated, or to have a body composed of self-similar units, but usually it is the parts of an organism that are referred to as being segmented.
Embryology
Segmentation in animals typically falls into three types, characteristic of different arthropods, vertebrates, and annelids. Arthropods such as the fruit fly form segments from a field of equivalent cells based on transcription factor gradients. Vertebrates like the zebrafish use oscillating gene expression to define segments known as somites. Annelids such as the leech use smaller blast cells budded off from large teloblast cells to define segments.
Arthropods
Although Drosophila segmentation is not representative of the arthropod phylum in general, it is the most highly studied. Early screens to identify genes involved in cuticle development led to the discovery of a class of genes that was necessary for proper segmentation of the Drosophila
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of structures evolved to do the same job by unrelated organisms?
A. analogous structures
B. symmetrical structures
C. dioxide structures
D. primal structures
Answer:
|
|
sciq-10943
|
multiple_choice
|
Shortening of muscle fibers is called what?
|
[
"contraction",
"diffusion",
"diffusion",
"shrinking"
] |
A
|
Relevant Documents:
Document 0:::
In an isotonic contraction, tension remains the same, whilst the muscle's length changes. Isotonic contractions differ from isokinetic contractions in that in isokinetic contractions the muscle speed remains constant. While superficially identical, as the muscle's force changes via the length-tension relationship during a contraction, an isotonic contraction will keep force constant while velocity changes, but an isokinetic contraction will keep velocity constant while force changes. A near-isotonic contraction is known as an auxotonic contraction.
There are two types of isotonic contractions: (1) concentric and (2) eccentric. In a concentric contraction, the muscle tension rises to meet the resistance, then remains the same as the muscle shortens. In an eccentric contraction, the muscle lengthens because the resistance is greater than the force the muscle is producing.
Concentric
This type is typical of most exercise. The external force on the muscle is less than the force the muscle is generating - a shortening contraction. The effect is not visible during the classic biceps curl, which is in fact auxotonic because the resistance (torque due to the weight being lifted) does not remain the same through the exercise. Tension is highest at a parallel to the floor level, and eases off above and below this point. Therefore, tension changes as well as muscle length.
Eccentric
There are two main features to note regarding eccentric contractions. First, the absolute tensions achieved can be very high relative to the muscle's maximum tetanic tension generating capacity (you can set down a much heavier object than you can lift). Second, the absolute tension is relatively independent of lengthening velocity.
Muscle injury and soreness are selectively associated with eccentric contraction. Muscle strengthening using exercises that involve eccentric contractions is lower than using concentric exercises. However because higher levels of tension are easier to attain during exercises th
Document 1:::
A stretch-shortening cycle (SSC) is an active stretch (eccentric contraction) of a muscle followed by an immediate shortening (concentric contraction) of that same muscle.
Research studies
The increased performance benefit associated with muscle contractions that take place during SSCs has been the focus of much research in order to determine the true nature of this enhancement. At present, there is some debate as to where and how this performance enhancement takes place. It has been postulated that elastic structures in series with the contractile component can store energy like a spring after being forcibly stretched. Since the length of the tendon increases due to the active stretch phase, if the series elastic component acts as a spring, it would therefore be storing more potential energy. This energy would be released as the tendon shortened. Thus, the recoil of the tendon during the shortening phase of the movement would result in a more efficient movement than one in which no energy had been stored. This research is further supported by Roberts et al.
However, other studies have found that removing portions of these series-elastic components (by way of tendon length reduction) had little effect on muscle performance.
Studies on turkeys have, nevertheless, shown that during SSC, a performance enhancement associated with elastic energy storage still takes place but it is thought that the aponeurosis could be a major source of energy storage (Roleveld et al., 1994).
The contractile component itself has also been associated with the ability to increase contractile performance through muscle potentiation
while other studies have found that this ability is quite limited and unable to account for such enhancements (Lensel and Goubel, 1987, Lensel-Corbeil and Goubel, 1990; Ettema and Huijing, 1989).
Community agreement
The results of these often contradictory studies have been associated with improved efficiencies for human or animal movements such as counter
Document 2:::
Myology is the study of the muscular system, including the study of the structure, function and diseases of muscle. The muscular system consists of skeletal muscle, which contracts to move or position parts of the body (e.g., the bones that articulate at joints), smooth and cardiac muscle that propels, expels or controls the flow of fluids and contained substance.
See also
Myotomy
Oral myology
Document 3:::
In biomechanics, Hill's muscle model refers to the 3-element model consisting of a contractile element (CE) in series with a lightly-damped elastic spring element (SE) and in parallel with lightly-damped elastic parallel element (PE). Within this model, the estimated force-velocity relation for the CE element is usually modeled by what is commonly called Hill's equation, which was based on careful experiments involving tetanized muscle contraction where various muscle loads and associated velocities were measured. They were derived by the famous physiologist Archibald Vivian Hill, who by 1938 when he introduced this model and equation had already won the Nobel Prize for Physiology. He continued to publish in this area through 1970. There are many forms of the basic "Hill-based" or "Hill-type" models, with hundreds of publications having used this model structure for experimental and simulation studies. Most major musculoskeletal simulation packages make use of this model.
AV Hill's force-velocity equation for tetanized muscle
This is a popular state equation applicable to skeletal muscle that has been stimulated to show tetanic contraction. It relates tension to velocity with regard to the internal thermodynamics. The equation is

(F + a)(v + b) = b(F_0 + a)

where
F is the tension (or load) in the muscle
v is the velocity of contraction
F_0 is the maximum isometric tension (or load) generated in the muscle
a is the coefficient of shortening heat
b = a·v_0/F_0
v_0 is the maximum velocity, when F = 0
Although Hill's equation looks very much like the van der Waals equation, the former has units of energy dissipation, while the latter has units of energy. Hill's equation demonstrates that the relationship between F and v is hyperbolic. Therefore, the higher the load applied to the muscle, the lower the contraction velocity. Similarly, the higher the contraction velocity, the lower the tension in the muscle. This hyperbolic form has been found to fit the empirical data only during isotonic contractions near resting
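A minimal Python sketch of the resulting hyperbolic force-velocity curve (the parameter values are illustrative assumptions, normalized so that F_0 = v_0 = 1):

F0 = 1.0       # maximum isometric tension (normalized)
a = 0.25 * F0  # coefficient of shortening heat (assumed ratio a/F0 = 0.25)
v0 = 1.0       # maximum shortening velocity (normalized)
b = a * v0 / F0

def tension(v):
    # Hill's relation (F + a)(v + b) = b(F0 + a), solved for F.
    return (b * (F0 + a) - a * (v + b)) / (v + b)

for v in (0.0, 0.25, 0.5, 1.0):
    print(f"v = {v:.2f}  ->  F = {tension(v):.3f}")
# Tension falls hyperbolically from F0 at v = 0 to zero at v = v0.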
Document 4:::
Muscle contraction is the activation of tension-generating sites within muscle cells. In physiology, muscle contraction does not necessarily mean muscle shortening because muscle tension can be produced without changes in muscle length, such as when holding something heavy in the same position. The termination of muscle contraction is followed by muscle relaxation, which is a return of the muscle fibers to their low tension-generating state.
For the contractions to happen, the muscle cells must rely on the interaction of two types of filaments: thin and thick filaments.
The major constituent of thin filaments is a chain formed by helical coiling of two strands of actin, and thick filaments dominantly consist of chains of the motor-protein myosin. Together, these two filaments form myofibrils - the basic functional organelles in the skeletal muscle system.
In vertebrates, skeletal muscle contractions are neurogenic as they require synaptic input from motor neurons. A single motor neuron is able to innervate multiple muscle fibers, thereby causing the fibers to contract at the same time. Once innervated, the protein filaments within each skeletal muscle fiber slide past each other to produce a contraction, which is explained by the sliding filament theory. The contraction produced can be described as a twitch, summation, or tetanus, depending on the frequency of action potentials. In skeletal muscles, muscle tension is at its greatest when the muscle is stretched to an intermediate length as described by the length-tension relationship.
Unlike skeletal muscle, the contractions of smooth and cardiac muscles are myogenic (meaning that they are initiated by the smooth or heart muscle cells themselves instead of being stimulated by an outside event such as nerve stimulation), although they can be modulated by stimuli from the autonomic nervous system. The mechanisms of contraction in these muscle tissues are similar to those in skeletal muscle tissues.
Muscle contra
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Shortening of muscle fibers is called what?
A. contraction
B. diffusion
C. diffusion
D. shrinking
Answer:
|
|
sciq-3182
|
multiple_choice
|
What type of analysis is performed to study gene expression patterns in cells?
|
[
"dna analysis",
"rna analysis",
"proteins analysis",
"residues analysis"
] |
B
|
Relevant Documents:
Document 0:::
In the field of molecular biology, gene expression profiling is the measurement of the activity (the expression) of thousands of genes at once, to create a global picture of cellular function. These profiles can, for example, distinguish between cells that are actively dividing, or show how the cells react to a particular treatment. Many experiments of this sort measure an entire genome simultaneously, that is, every gene present in a particular cell.
Several transcriptomics technologies can be used to generate the necessary data to analyse. DNA microarrays measure the relative activity of previously identified target genes. Sequence-based techniques, like RNA-Seq, provide information on the sequences of genes in addition to their expression level.
Background
Expression profiling is a logical next step after sequencing a genome: the sequence tells us what the cell could possibly do, while the expression profile tells us what it is actually doing at a point in time. Genes contain the instructions for making messenger RNA (mRNA), but at any moment each cell makes mRNA from only a fraction of the genes it carries. If a gene is used to produce mRNA, it is considered "on", otherwise "off". Many factors determine whether a gene is on or off, such as the time of day, whether or not the cell is actively dividing, its local environment, and chemical signals from other cells. For instance, skin cells, liver cells and nerve cells turn on (express) somewhat different genes and that is in large part what makes them different. Therefore, an expression profile allows one to deduce a cell's type, state, environment, and so forth.
Expression profiling experiments often involve measuring the relative amount of mRNA expressed in two or more experimental conditions. This is because altered levels of a specific sequence of mRNA suggest a changed need for the protein coded by the mRNA, perhaps indicating a homeostatic response or a pathological condition. For example, higher leve
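As a toy illustration of comparing relative mRNA levels between two conditions, a common summary statistic is the log2 fold change; the gene names and expression values below are made up for the example.

```python
import numpy as np

# Hypothetical normalized expression values in two conditions.
control = {"geneA": 100.0, "geneB": 250.0, "geneC": 40.0}
treated = {"geneA": 400.0, "geneB": 240.0, "geneC": 10.0}

# log2 fold change: +1 means doubled, -1 means halved, 0 means unchanged.
for gene in control:
    log2_fc = np.log2(treated[gene] / control[gene])
    print(f"{gene}: log2 fold change = {log2_fc:+.2f}")
# geneA: +2.00 (4x up), geneB: -0.06 (~unchanged), geneC: -2.00 (4x down)
```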
Document 1:::
Stem cell proteomics is an omics discipline that analyzes the proteomes of stem cells. Comparing different stem cell proteomes can reveal proteins that are important for stem cell differentiation.
See also
Stem cell genomics
Stem cells
Proteomics
Document 2:::
Cellular deconvolution (also referred to as cell type composition or cell proportion estimation) refers to computational techniques aiming at estimating the proportions of different cell types in samples collected from a tissue. For example, samples collected from the human brain are a mixture of various neuronal and glial cell types (e.g. microglia and astrocytes) in different proportions, where each cell type has a diverse gene expression profile. Since most high-throughput technologies use bulk samples and measure the aggregated levels of molecular information (e.g. expression levels of genes) for all cells in a sample, the measured values would be an aggregate of the values pertaining to the expression landscape of different cell types. Therefore, many downstream analyses such as differential gene expression might be confounded by the variations in cell type proportions when using the output of high-throughput technologies applied to bulk samples. The development of statistical methods to identify cell type proportions in large-scale bulk samples is an important step for better understanding of the relationship between cell type composition and diseases.
Cellular deconvolution algorithms have been applied to a variety of samples collected from saliva, buccal, cervical, PBMC, brain, kidney, and pancreatic cells, and many studies have shown that estimating and incorporating the proportions of cell types into various analyses improves the interpretability of high-throughput omics data and reduces the confounding effects of cellular heterogeneity, also known as tissue heterogeneity, in functional analysis of omics data.
Mathematical Formulation
Most cellular deconvolution algorithms take as input a matrix that represents some molecular information (e.g. gene expression data or DNA methylation data) measured over a group of samples and marks (e.g. genes or CpG sites). The goal of the algorithm is to use these data and return an output
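A minimal sketch of one common formulation, assuming the bulk profile is a non-negative mixture of known cell-type signatures; the toy matrix, gene counts, and use of non-negative least squares are illustrative choices, not the method of any specific tool.

```python
import numpy as np
from scipy.optimize import nnls

# Toy signature matrix S: rows are 5 genes, columns are 3 cell types.
S = np.array([
    [10.0, 1.0, 0.5],
    [ 2.0, 8.0, 1.0],
    [ 0.5, 1.5, 9.0],
    [ 4.0, 4.0, 0.5],
    [ 1.0, 0.5, 6.0],
])

# Simulate a bulk sample from known (normally unknown) proportions.
p_true = np.array([0.5, 0.3, 0.2])
bulk = S @ p_true

# Estimate proportions: minimize ||S p - bulk|| subject to p >= 0,
# then renormalize so the estimated proportions sum to 1.
p_hat, _ = nnls(S, bulk)
p_hat /= p_hat.sum()
print(p_hat)  # recovers approximately [0.5, 0.3, 0.2]
```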
Document 3:::
Microarray analysis techniques are used in interpreting the data generated from experiments on DNA (Gene chip analysis), RNA, and protein microarrays, which allow researchers to investigate the expression state of a large number of genes - in many cases, an organism's entire genome - in a single experiment. Such experiments can generate very large amounts of data, allowing researchers to assess the overall state of a cell or organism. Data in such large quantities is difficult - if not impossible - to analyze without the help of computer programs.
Introduction
Microarray data analysis is the final step in reading and processing data produced by a microarray chip. Samples undergo various processes including purification and scanning using the microchip, which then produces a large amount of data that requires processing via computer software. The analysis involves several distinct steps; changing any one of them will change the outcome, so the MAQC Project was created to identify a set of standard strategies. Companies exist that use the MAQC protocols to perform a complete analysis.
Techniques
Most microarray manufacturers, such as Affymetrix and Agilent, provide commercial data analysis software alongside their microarray products. There are also open source options that utilize a variety of methods for analyzing microarray data.
Aggregation and normalization
Comparing two different arrays or two different samples hybridized to the same array generally involves making adjustments for systematic errors introduced by differences in procedures and dye intensity effects. Dye normalization for two color arrays is often achieved by local regression. LIMMA provides a set of tools for background correction and scaling, as well as an option to average on-slide duplicate spots. A common method for evaluating how well normalized an array is, is to plot an MA plot of the data. MA plots can be produced using programs and language
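A minimal sketch of the MA transform mentioned above, using made-up two-color intensities; M is the log-ratio between channels and A is the average log-intensity.

```python
import numpy as np

# Hypothetical red (Cy5) and green (Cy3) intensities for four spots.
R = np.array([1200.0, 340.0, 5600.0, 980.0])
G = np.array([1100.0, 400.0, 5100.0, 260.0])

M = np.log2(R) - np.log2(G)          # log-ratio (differential signal)
A = 0.5 * (np.log2(R) + np.log2(G))  # average log-intensity

# On a well-normalized array, M should scatter symmetrically around 0
# across the full range of A.
for a, m in zip(A, M):
    print(f"A = {a:5.2f}   M = {m:+.2f}")
```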
Document 4:::
CellCognition is a free open-source computational framework for quantitative analysis of high-throughput fluorescence microscopy (time-lapse) images in the field of bioimage informatics and systems microscopy. The CellCognition framework uses image processing, computer vision and machine learning techniques for single-cell tracking and classification of cell morphologies. This enables measurements of temporal progression of cell phases, modeling of cellular dynamics and generation of phenotype map.
Features
CellCognition uses a computational pipeline which includes image segmentation, object detection, feature extraction, statistical classification, tracking of individual cells over time, detection of class-transition motifs (e.g. cells entering mitosis), and HMM correction of classification errors on class labels.
The software is written in Python 2.7 and binaries are available for Windows and Mac OS X.
History
CellCognition (Version 1.0.1) was first released in December 2009 by scientists from the Gerlich Lab and the Buhmann group at the Swiss Federal Institute of Technology Zürich and the Ellenberg Lab at the European Molecular Biology Laboratory Heidelberg. The latest release is 1.6.1 and the software is developed and maintained by the Gerlich Lab at the Institute of Molecular Biotechnology.
Application
CellCognition has been used in RNAi-based screening, applied in basic cell cycle study, and extended to unsupervised modeling.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of analysis is performed to study gene expression patterns in cells?
A. dna analysis
B. rna analysis
C. proteins analysis
D. residues analysis
Answer:
|
|
sciq-3136
|
multiple_choice
|
Abnormal electrical activity in the brain is the cause of what disease associated with seizures?
|
[
"Alzheimer's",
"malaria",
"anemia",
"epilepsy"
] |
D
|
Relevant Documents:
Document 0:::
Seizures can also be observed in patients who do not have epilepsy. There are many causes of seizures. Organ failure, medication and medication withdrawal, cancer, imbalance of electrolytes, and hypertensive encephalopathy are some potential causes. The factors that lead to a seizure are often complex, and it may not be possible to determine what causes a particular seizure, what causes it to happen at a particular time, or how often seizures occur.
Diet
Malnutrition and overnutrition may increase the risk of seizures. Examples include the following:
Vitamin B1 deficiency (thiamine deficiency) was reported to cause seizures, especially in alcoholics.
Vitamin B6 depletion (pyridoxine deficiency) was reported to be associated with pyridoxine-dependent seizures.
Vitamin B12 deficiency was reported to be the cause of seizures for adults and for infants.
Folic acid in large amounts was considered to potentially counteract the antiseizure effects of antiepileptic drugs and increase the seizure frequency in some children, although that concern is no longer held by epileptologists.
Medical conditions
Those with various medical conditions may experience seizures as one of their symptoms.
Other conditions have been associated with lower seizure thresholds and/or increased likelihood of seizure comorbidity (but not necessarily with seizure induction). Examples include depression, psychosis, obsessive-compulsive disorder (OCD), attention deficit hyperactivity disorder (ADHD), and autism, among many others.
Drugs
Adverse effect
Seizures may occur as an adverse effect of certain drugs.
Use of certain recreational drugs may lead to seizures in some, especially when used in high doses or for extended periods. These include amphetamines (such as amphetamine, methamphetamine, MDMA ("ecstasy"), and mephedrone), cocaine, methylphenidate, psilocybin, psilocin, and GHB.
If treated with the wrong kind of antiepileptic drugs (AED), seizures
Document 1:::
To classify postoperative outcomes for epilepsy surgery, Jerome Engel proposed the following scheme, the Engel Epilepsy Surgery Outcome Scale, which has become the de facto standard when reporting results in the medical literature:
Class I: Free of disabling seizures
Class II: Rare disabling seizures ("almost seizure-free")
Class III: Worthwhile improvement
Class IV: No worthwhile improvement
History
Surgery for epilepsy patients has been used for over a century, but due to technological restrictions and insufficient knowledge of brain surgery, this treatment approach was relatively rare until the 1980s and 90s. Prior to the 1980s, no classification system existed due to the small number of operations performed up until that time. As surgery as a treatment grew more prevalent, a classification system became a necessity. The appropriate evaluation of patients following epilepsy surgery is extremely important, as medical professionals must know the appropriate course of action to follow in order to achieve seizure freedom for patients. Accordingly, the Engel classification guidelines were devised by UCLA neurologist Jerome Engel Jr. in 1987 and made public at the 1992 Palm Desert Conference on Epilepsy Surgery. The Engel classification system has since become the standard in reporting postoperative outcomes of epilepsy surgery.
Overview
In Engel's 1993 summary of the 1992 Palm Desert Conference on Epilepsy Surgery, he annotated his classification system with more detail. The annotation was as follows:
Class I: Seizure free or no more than a few early, nondisabling seizures; or seizures upon drug withdrawal only
Class II: Disabling seizures occur rarely during a period of at least 2 years; disabling seizures may have been more frequent soon after surgery; nocturnal seizures
Class III: Worthwhile improvement; seizure reduction for prolonged periods but less than 2 years
Class IV: No worthwhile improvement; some reduction, no reduction, or worsening are possible
A
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
A convulsion is a medical condition where the body muscles contract and relax rapidly and repeatedly, resulting in uncontrolled shaking. Because epileptic seizures typically include convulsions, the term convulsion is often used as a synonym for seizure. However, not all epileptic seizures result in convulsions, and not all convulsions are caused by epileptic seizures. Non-epileptic convulsions have no relation with epilepsy, and are caused by non-epileptic seizures.
Convulsions can be caused by epilepsy, infections (including a severe form of listeriosis which is caused by eating food contaminated by Listeria monocytogenes), brain trauma, or other medical conditions. They can also occur from an electric shock or improperly enriched air for scuba diving.
The word fit is sometimes used to mean a convulsion or epileptic seizure.
Signs and symptoms
A person having a convulsion may experience several different symptoms, such as a brief blackout, confusion, drooling, loss of bowel or bladder control, sudden shaking of the entire body, uncontrollable muscle spasms, or temporary cessation of breathing. Symptoms usually last from a few seconds to several minutes, although they can last longer.
Convulsions in children are not necessarily benign, and may lead to brain damage if prolonged. In these patients, the frequency of occurrence should not downplay their significance, as a worsening seizure state may reflect the damage caused by successive attacks. Symptoms may include:
Lack of awareness
Loss of consciousness
Eyes rolling back
Changes to breathing
Stiffening of the arms, legs, or whole body
Jerky movements of the arms, legs, body, or head
Lack of control over movements
Inability to respond
Causes
Most convulsions are the result of abnormal electrical activity in the brain. Often, a specific cause is not clear. Numerous conditions can cause a convulsion.
Convulsions can be caused by specific chemicals in the blood, as well as infections like meningitis or encepha
Document 4:::
Drug-resistant epilepsy (DRE), also known as refractory epilepsy, intractable epilepsy, or pharmacoresistant epilepsy, is diagnosed following a failure of adequate trials of two tolerated and appropriately chosen and used antiepileptic drugs (AEDs) (whether as monotherapies or in combination) to achieve sustained seizure freedom. The probability that the next medication will achieve seizure freedom drops with every failed AED. For example, after two failed AEDs, the probability that the third will achieve seizure freedom is around 4%. Drug-resistant epilepsy is commonly diagnosed after several years of uncontrolled seizures, however, in most cases, it is evident much earlier. Approximately 30% of people with epilepsy have a drug-resistant form.
When 2 AED regimens have failed to produce sustained seizure-freedom, it is important to initiate other treatments to control seizures. Next to indirect consequences like injuries from falls, accidents, drowning and impairment in daily life, seizure control is critical because uncontrolled seizures -specifically generalized tonic clonic seizures- can damage the brain and increase the risk for sudden unexpected death in epilepsy called SUDEP. The first step is for physicians to refer their DRE patients to an epilepsy center.
Diagnostic evaluation
Prolonged EEG/Continuous video EEG/ Epilepsy Monitoring Unit monitoring
One of the first steps in the management of drug-resistant epilepsy is confirming the diagnosis by EEG. Typically patients are admitted to hospital for prolonged EEG monitoring and taken off their antiseizure medications so that the evolution of seizure symptoms and their relation with changes in the electrical activity of the brain can be determined, while minimizing the adverse consequences of seizures as far as possible. Additional maneuvers to provoke seizures are also frequently performed, like sleep deprivation, photic stimulation, and hyperventilation. This study can take 3–14 days. Length of study dep
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Abnormal electrical activity in the brain is the cause of what disease associated with seizures?
A. Alzheimer's
B. malaria
C. anemia
D. epilepsy
Answer:
|
|
sciq-274
|
multiple_choice
|
Electrons in covalent compounds are shared between the two atoms, unlike the case in what type of bonds?
|
[
"weak bonds",
"horizontal bonds",
"ionic bonds",
"soluble bonds"
] |
C
|
Relevant Documents:
Document 0:::
An intramolecular force (or primary forces) is any force that binds together the atoms making up a molecule or compound, not to be confused with intermolecular forces, which are the forces present between molecules. The subtle difference in the name comes from the Latin roots of English with inter meaning between or among and intra meaning inside. Chemical bonds are considered to be intramolecular forces which are often stronger than intermolecular forces present between non-bonding atoms or molecules.
Types
The classical model identifies three main types of chemical bonds — ionic, covalent, and metallic — distinguished by the degree of charge separation between participating atoms. The characteristics of the bond formed can be predicted by the properties of constituent atoms, namely electronegativity. They differ in the magnitude of their bond enthalpies, a measure of bond strength, and thus affect the physical and chemical properties of compounds in different ways. The percentage of ionic character is directly proportional to the difference in electronegativity of the bonded atoms.
Ionic bond
An ionic bond can be approximated as complete transfer of one or more valence electrons of atoms participating in bond formation, resulting in a positive ion and a negative ion bound together by electrostatic forces. Electrons in an ionic bond tend to be mostly found around one of the two constituent atoms due to the large electronegativity difference between the two atoms, generally more than 1.9 (a greater difference in electronegativity results in a stronger bond); this is often described as one atom giving electrons to the other. This type of bond is generally formed between a metal and nonmetal, such as sodium and chlorine in NaCl. Sodium would give an electron to chlorine, forming a positively charged sodium ion and a negatively charged chloride ion.
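As a worked illustration of the electronegativity-difference idea, the sketch below uses Pauling's empirical percent-ionic-character relation (not quoted in the excerpt above) with standard Pauling electronegativities for sodium and chlorine.

```python
import math

# Pauling's empirical estimate:
#   % ionic character ≈ (1 - exp(-(Δχ / 2)^2)) * 100,
# where Δχ is the electronegativity difference of the bonded atoms.
def percent_ionic_character(chi_a: float, chi_b: float) -> float:
    delta = abs(chi_a - chi_b)
    return (1.0 - math.exp(-((delta / 2.0) ** 2))) * 100.0

# Pauling electronegativities: Na ≈ 0.93, Cl ≈ 3.16 (Δχ ≈ 2.23 > 1.9).
print(f"NaCl: {percent_ionic_character(0.93, 3.16):.0f}% ionic")  # ≈ 71%
```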
Covalent bond
In a true covalent bond, the electrons are shared evenly between the two atoms of the bond; there is little or no charge separa
Document 1:::
A bonding electron is an electron involved in chemical bonding. This can refer to:
Chemical bond, a lasting attraction between atoms, ions or molecules
Covalent bond or molecular bond, a sharing of electron pairs between atoms
Bonding molecular orbital, an attraction between the atomic orbitals of atoms in a molecule
Chemical bonding
Document 2:::
Steudel R 2020, Chemistry of the Non-metals: Syntheses - Structures - Bonding - Applications, in collaboration with D Scheschkewitz, Berlin, Walter de Gruyter, . ▲
An updated translation of the 5th German edition of 2013, incorporating the literature up to Spring 2019. Twenty-three nonmetals, including B, Si, Ge, As, Se, Te, and At but not Sb (nor Po). The nonmetals are identified on the basis of their electrical conductivity at absolute zero putatively being close to zero, rather than finite as in the case of metals. That does not work for As however, which has the electronic structure of a semimetal (like Sb).
Halka M & Nordstrom B 2010, "Nonmetals", Facts on File, New York,
A reading level 9+ book covering H, C, N, O, P, S, Se. Complementary books by the same authors examine (a) the post-transition metals (Al, Ga, In, Tl, Sn, Pb and Bi) and metalloids (B, Si, Ge, As, Sb, Te and Po); and (b) the halogens and noble gases.
Woolins JD 1988, Non-Metal Rings, Cages and Clusters, John Wiley & Sons, Chichester, .
A more advanced text that covers H; B; C, Si, Ge; N, P, As, Sb; O, S, Se and Te.
Steudel R 1977, Chemistry of the Non-metals: With an Introduction to Atomic Structure and Chemical Bonding, English edition by FC Nachod & JJ Zuckerman, Berlin, Walter de Gruyter, . ▲
Twenty-four nonmetals, including B, Si, Ge, As, Se, Te, Po and At.
Powell P & Timms PL 1974, The Chemistry of the Non-metals, Chapman & Hall, London, . ▲
Twenty-two nonmetals including B, Si, Ge, As and Te. Tin and antimony are shown as being intermediate between metals and nonmetals; they are later shown as either metals or nonmetals. Astatine is counted as a metal.
Document 3:::
In chemistry, a single bond is a chemical bond between two atoms involving two valence electrons. That is, the atoms share one pair of electrons where the bond forms. Therefore, a single bond is a type of covalent bond. When shared, each of the two electrons involved is no longer in the sole possession of the orbital in which it originated. Rather, both of the two electrons spend time in either of the orbitals which overlap in the bonding process. As a Lewis structure, a single bond is denoted as AːA or A-A, for which A represents an element. In the first rendition, each dot represents a shared electron, and in the second rendition, the bar represents both of the electrons shared in the single bond.
A covalent bond can also be a double bond or a triple bond. A single bond is weaker than either a double bond or a triple bond. This difference in strength can be explained by examining the component bonds of which each of these types of covalent bonds consists (Moore, Stanitski, and Jurs 393).
Usually, a single bond is a sigma bond. An exception is the bond in diboron, which is a pi bond. In contrast, the double bond consists of one sigma bond and one pi bond, and a triple bond consists of one sigma bond and two pi bonds (Moore, Stanitski, and Jurs 396). The number of component bonds is what determines the strength disparity. It stands to reason that the single bond is the weakest of the three because it consists of only a sigma bond, and the double bond or triple bond consist not only of this type of component bond but also at least one additional bond.
The single bond has the capacity for rotation, a property not possessed by the double bond or the triple bond. The structure of pi bonds does not allow for rotation (at least not at 298 K), so the double bond and the triple bond which contain pi bonds are held due to this property. The sigma bond is not so restrictive, and the single bond is able to rotate using the sigma bond as the axis of rotation (Moore, Stanits
Document 4:::
Molecular binding is an attractive interaction between two molecules that results in a stable association in which the molecules are in close proximity to each other. It is formed when atoms or molecules bind together by sharing of electrons. It often, but not always, involves some chemical bonding.
In some cases, the associations can be quite strong—for example, the protein streptavidin and the vitamin biotin have a dissociation constant (reflecting the ratio between bound and free biotin) on the order of 10⁻¹⁴—and so the reactions are effectively irreversible. The result of molecular binding is sometimes the formation of a molecular complex in which the attractive forces holding the components together are generally non-covalent, and thus are normally energetically weaker than covalent bonds.
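To see why such a small dissociation constant makes binding effectively irreversible, here is a one-line equilibrium calculation; the free-biotin concentration is a hypothetical value chosen for illustration.

```python
# At equilibrium, the fraction of sites occupied by a ligand at free
# concentration L with dissociation constant Kd is L / (Kd + L).
def fraction_bound(ligand_conc: float, kd: float) -> float:
    return ligand_conc / (kd + ligand_conc)

# Streptavidin-biotin: Kd on the order of 1e-14 M; assume 1 nM free biotin.
print(fraction_bound(1e-9, 1e-14))  # ≈ 0.99999 -- essentially all bound
```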
Molecular binding occurs in biological complexes (e.g., between pairs or sets of proteins, or between a protein and a small molecule ligand it binds) and also in abiologic chemical systems, e.g. as in cases of coordination polymers and coordination networks such as metal-organic frameworks.
Types
Molecular binding can be classified into the following types:
Non-covalent – no chemical bonds are formed between the two interacting molecules hence the association is fully reversible
Reversible covalent – a chemical bond is formed, however the free energy difference separating the noncovalently-bonded reactants from bonded product is near equilibrium and the activation barrier is relatively low such that the reverse reaction which cleaves the chemical bond easily occurs
Irreversible covalent – a chemical bond is formed in which the product is thermodynamically much more stable than the reactants such that the reverse reaction does not take place.
Bound molecules are sometimes called a "molecular complex"—the term generally refers to non-covalent associations. Non-covalent interactions can effectively become irreversible; for example, tight binding inhibitors of enzymes
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Electrons in covalent compounds are shared between the two atoms, unlike the case in what type of bonds?
A. weak bonds
B. horizontal bonds
C. ionic bonds
D. soluble bonds
Answer:
|
|
sciq-7518
|
multiple_choice
|
When it was found that, without the force of gravity exerting pressure on the bones, bone mass was lost in astronauts, what kind of exercise provided an antidote?
|
[
"sedentary",
"anaerobic",
"aerobic",
"resistive"
] |
D
|
Relevant Documents:
Document 0:::
The following outline is provided as an overview of and topical guide to physiology:
Physiology – scientific study of the normal function in living systems. A branch of biology, its focus is in how organisms, organ systems, organs, cells, and biomolecules carry out the chemical or physical functions that exist in a living system.
What type of thing is physiology?
Physiology can be described as all of the following:
An academic discipline
A branch of science
A branch of biology
Branches of physiology
By approach
Applied physiology
Clinical physiology
Exercise physiology
Nutrition physiology
Comparative physiology
Mathematical physiology
Yoga physiology
By organism
Animal physiology
Mammal physiology
Human physiology
Fish physiology
Insect physiology
Plant physiology
By process
Developmental physiology
Ecophysiology
Evolutionary physiology
By subsystem
Cardiovascular physiology
Renal physiology
Defense physiology
Gastrointestinal physiology
Musculoskeletal physiology
Neurophysiology
Respiratory physiology
History of physiology
History of physiology
General physiology concepts
Physiology organizations
American Physiological Society
International Union of Physiological Sciences
Physiology publications
American Journal of Physiology
Experimental Physiology
Journal of Applied Physiology
Persons influential in physiology
List of Nobel laureates in Physiology or Medicine
List of physiologists
See also
Outline of biology
Document 1:::
Kinesiogenomics refers to the study of genetics in the various disciplines of the field of kinesiology, the study of human movement. The field has also been referred to as "exercise genomics" or "exercisenomics." Areas of study within kinesiogenomics include the role of gene sequence variation (i.e., alleles) in sport performance, identification of genes (and their different alleles) that contribute to the response and adaptation of the body's tissue systems (e.g., muscles, heart, metabolism, etc.) to various exercise-related stimuli, the use of genetic testing to predict sport performance or individualize exercise prescription, and gene doping, the potential for genetic therapy to be used to enhance sport performance.
The field of kinesiogenomics is relatively new, though two books have outlined basic concepts. A regularly published review article entitled, "The human gene map for performance and health-related fitness phenotypes," describes the genes that have been studied in relation to specific exercise- and fitness-related traits. The most recent (seventh) update was published in 2009.
Research
Within the field of kinesiogenomics, several research studies have been conducted in recent years. This increase in research has advanced knowledge of how genes and gene sequencing affect a person's exercise habits and health. One study focusing on twins examined the effect of genes on exercise ability, the effects of exercise on mood, and the ability to lose weight. The research concluded that genetics had a significant impact on the likelihood that an individual would participate in exercise. An increase in participation can be linked to personality factors such as self-motivation and self-discipline, while lower participation in exercise can be influenced by factors such as anxiety and depression. These personality traits, both positive and negative, can be associated with one's genetic makeup.
Document 2:::
Asker Jeukendrup is a sports nutrition scientist and an Ironman triathlete.
Academic career
Following an MSc in Human Movement Sciences at Maastricht University in the Netherlands he completed his PhD in 1997 at the same university studying aspects of carbohydrate and fat metabolism during exercise. After postdoctoral research at the University of Texas in Austin, Jeukendrup became the youngest professor at the University of Birmingham at the age of 35. He was a Professor of Exercise Metabolism at the University of Birmingham for over 12 years. He also served as a Director of the Human Performance Lab at the same university. Jeukendrup has authored several books on sports nutrition and over 200 peer reviewed journal articles on exercise and sports nutrition.
His research interests include metabolic responses to exercise, regulation of carbohydrate and fat metabolism, sports nutrition, gastrointestinal complaints during exercise, training and over-training. He is a Fellow of the American College of Sports Medicine, a member of the New York Academy of Sciences, the Nutrition Society, the Physiological Society, the Biochemical Society, the American Diabetes Association and the European College of Sport Sciences.
Post-academic Career
In June 2011, Jeukendrup was named Global Senior Director of the Gatorade Sports Science Institute (GSSI) at PepsiCo. In addition to leading GSSI, he remained an adjunct professor at the University of Birmingham. In 2014 Asker started his own consulting company, mysportscience ltd, and now advises mostly teams and organisations. Asker is currently Head of Nutrition for the Dutch Olympic Committee (Performance Manager Nutrition TeamNL), JumboVisma Pro cycling, PSV Eindhoven, Red Bull Salzburg and the Red Bull Athlete Performance Center. Through blogs on Mysportscience.com he tries to bust myths in sports nutrition and provide evidence-based and balanced information about nutrition. Jeukendrup is also co-founder of CORE Nutrition planning, a
Document 3:::
The neurobiological effects of physical exercise are numerous and involve a wide range of interrelated effects on brain structure, brain function, and cognition. A large body of research in humans has demonstrated that consistent aerobic exercise (e.g., 30 minutes every day) induces persistent improvements in certain cognitive functions, healthy alterations in gene expression in the brain, and beneficial forms of neuroplasticity and behavioral plasticity; some of these long-term effects include: increased neuron growth, increased neurological activity (e.g., and BDNF signaling), improved stress coping, enhanced cognitive control of behavior, improved declarative, spatial, and working memory, and structural and functional improvements in brain structures and pathways associated with cognitive control and memory. The effects of exercise on cognition have important implications for improving academic performance in children and college students, improving adult productivity, preserving cognitive function in old age, preventing or treating certain neurological disorders, and improving overall quality of life.
In healthy adults, aerobic exercise has been shown to induce transient effects on cognition after a single exercise session and persistent effects on cognition following regular exercise over the course of several months. People who regularly perform an aerobic exercise (e.g., running, jogging, brisk walking, swimming, and cycling) have greater scores on neuropsychological function and performance tests that measure certain cognitive functions, such as attentional control, inhibitory control, cognitive flexibility, working memory updating and capacity, declarative memory, spatial memory, and information processing speed. The transient effects of exercise on cognition include improvements in most executive functions (e.g., attention, working memory, cognitive flexibility, inhibitory control, problem solving, and decision making) and information processing speed fo
Document 4:::
Exercise physiology is the physiology of physical exercise. It is one of the allied health professions, and involves the study of the acute responses and chronic adaptations to exercise. Exercise physiologists are the highest qualified exercise professionals and utilise education, lifestyle intervention and specific forms of exercise to rehabilitate and manage acute and chronic injuries and conditions.
Understanding the effect of exercise involves studying specific changes in muscular, cardiovascular, and neurohumoral systems that lead to changes in functional capacity and strength due to endurance training or strength training. The effect of training on the body has been defined as the reaction to the adaptive responses of the body arising from exercise or as "an elevation of metabolism produced by exercise".
Exercise physiologists study the effect of exercise on pathology, and the mechanisms by which exercise can reduce or reverse disease progression.
History
British physiologist Archibald Hill introduced the concepts of maximal oxygen uptake and oxygen debt in 1922. Hill and German physician Otto Meyerhof shared the 1922 Nobel Prize in Physiology or Medicine for their independent work related to muscle energy metabolism. Building on this work, scientists began measuring oxygen consumption during exercise. Notable contributions were made by Henry Taylor at the University of Minnesota, Scandinavian scientists Per-Olof Åstrand and Bengt Saltin in the 1950s and 60s, the Harvard Fatigue Laboratory, German universities, and the Copenhagen Muscle Research Centre among others.
In some countries, exercise physiology is recognized as a primary health care profession. Accredited Exercise Physiologists (AEPs) are university-trained professionals who prescribe exercise-based interventions to treat various conditions using dose-response prescriptions specific to each individual.
Energy expenditure
Humans have a high capacity to expend energy for many hours during sustained exertion. For example, one i
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When it was found that, without the force of gravity exerting pressure on the bones, bone mass was lost in astronauts, what kind of exercise provided an antidote?
A. sedentary
B. anaerobic
C. aerobic
D. resistive
Answer:
|
|
sciq-5510
|
multiple_choice
|
What term describes the sequential appearance and disappearance of species in a community over time after a severe disturbance?
|
[
"succession",
"generational replacement",
"pattern",
"supression"
] |
A
|
Relevant Documents:
Document 0:::
A natural phenomenon is an observable event which is not man-made. Examples include: sunrise, weather, fog, thunder, tornadoes; biological processes, decomposition, germination; physical processes, wave propagation, erosion; tidal flow, and natural disasters such as electromagnetic pulses, volcanic eruptions, hurricanes and earthquakes.
History
Throughout history, natural phenomena have been observed as a series of countless events and features created by nature.
Physical phenomena
The act of:
Freezing
Boiling
Gravity
Magnetism
Gallery
Chemical phenomena
Oxidation
Fire
Rusting
Biological phenomena
Metabolism
Catabolism
Anabolism
Decomposition – by which organic substances are broken down into a much simpler form of matter
Fermentation – converts sugar to acids, gases and/or alcohol.
Growth
Birth
Death
Population decrease
Gallery
Astronomical phenomena
Supernova
Gamma ray bursts
Quasars
Blazars
Pulsars
Cosmic microwave background radiation.
Geological phenomena
Mineralogic phenomena
Lithologic phenomena
Rock types
Igneous rock
Igneous formation processes
Sedimentary rock
Sedimentary formation processes (sedimentation)
Quicksand
Metamorphic rock
Endogenic phenomena
Plate tectonics
Continental drift
Earthquake
Oceanic trench
Phenomena associated with igneous activity
Geysers and hot springs
Bradyseism
Volcanic eruption
Earth's magnetic field
Exogenic phenomena
Slope phenomena
Slump
Landslide
Weathering phenomena
Erosion
Glacial and peri-glacial phenomena
Glaciation
Moraines
Hanging valleys
Atmospheric phenomena
Impact phenomena
Impact crater
Coupled endogenic-exogenic phenomena
Orogeny
Drainage development
Stream capture
Gallery
Meteorological phenomena
Violent meteorological phenomena are called storms. Regular, cyclical phenomena include seasons and atmospheric circulation. Climate change is often semi-regular.
Atmospheric optical phenomena
Oceanographic
Oceanographic phenomena inc
Document 1:::
Background extinction rate, also known as the normal extinction rate, refers to the standard rate of extinction in Earth's geological and biological history before humans became a primary contributor to extinctions. This is primarily the pre-human extinction rate during periods in between major extinction events. To date there have been five mass extinctions, each resulting from a variety of causes.
Overview
Extinctions are a normal part of the evolutionary process, and the background extinction rate is a measurement of "how often" they naturally occur. Normal extinction rates are often used as a comparison to present-day extinction rates, to illustrate that extinctions occur more frequently today than during past periods between major extinction events.
Background extinction rates have not remained constant, although changes are measured over geological time, covering millions of years.
Measurement
Background extinction rates are typically measured in order to give a specific classification to a species, and this is obtained over a certain period of time. There are three different ways to calculate background extinction rate. The first is simply the number of species that normally go extinct over a given period of time. For example, at the background rate one species of bird will go extinct every estimated 400 years. Another way the extinction rate can be given is in million species years (MSY). For example, there is approximately one extinction estimated per million species years. From a purely mathematical standpoint this means that if there are a million species on the planet earth, one would go extinct every year, while if there was only one species it would go extinct in one million years, etc. The third way is in giving species survival rates over time. For example, given normal extinction rates species typically exist for 5–10 million years before going extinct.
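A quick arithmetic sketch of the MSY unit described above; the species count reuses the ~8.7 million eukaryote estimate quoted later in this document, and the rate of one extinction per MSY is the background figure given here.

```python
# "Million species years" (MSY) arithmetic: at a background rate of
# r extinctions per MSY, expected extinctions per year = n_species * r / 1e6.
def expected_extinctions_per_year(n_species: float, rate_per_msy: float = 1.0) -> float:
    return n_species * rate_per_msy / 1_000_000

# ~8.7 million eukaryote species at 1 extinction per MSY:
print(expected_extinctions_per_year(8_700_000))  # ≈ 8.7 per year
```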
Lifespan estimates
Some species lifespan es
Document 2:::
Plant evolution is the subset of evolutionary phenomena that concern plants. Evolutionary phenomena are characteristics of populations that are described by averages, medians, distributions, and other statistical methods. This distinguishes plant evolution from plant development, a branch of developmental biology which concerns the changes that individuals go through in their lives. The study of plant evolution attempts to explain how the present diversity of plants arose over geologic time. It includes the study of genetic change and the consequent variation that often results in speciation, one of the most important types of radiation into taxonomic groups called clades. A description of radiation is called a phylogeny and is often represented by type of diagram called a phylogenetic tree.
Evolutionary trends
Differences between plant and animal physiology and reproduction cause minor differences in how they evolve.
One major difference is the totipotent nature of plant cells, allowing them to reproduce asexually much more easily than most animals. They are also capable of polyploidy – where more than two chromosome sets are inherited from the parents. This allows relatively fast bursts of evolution to occur, for example by the effect of gene duplication. The long periods of dormancy that seed plants can employ also makes them less vulnerable to extinction, as they can "sit out" the tough periods and wait until more clement times to leap back to life.
The effect of these differences is most profoundly seen during extinction events. These events, which wiped out between 6 and 62% of terrestrial animal families, had "negligible" effect on plant families. However, the ecosystem structure is significantly rearranged, with the abundances and distributions of different groups of plants changing profoundly. These effects are perhaps due to the higher diversity within families, as extinction – which was common at the species level – was very selective. For example, win
Document 3:::
This glossary of ecology is a list of definitions of terms and concepts in ecology and related fields. For more specific definitions from other glossaries related to ecology, see Glossary of biology, Glossary of evolutionary biology, and Glossary of environmental science.
See also
Outline of ecology
History of ecology
Document 4:::
Extinction is the termination of a taxon by the death of its last member. A taxon may become functionally extinct before the death of its last member if it loses the capacity to reproduce and recover. Because a species' potential range may be very large, determining this moment is difficult, and is usually done retrospectively. This difficulty leads to phenomena such as Lazarus taxa, where a species presumed extinct abruptly "reappears" (typically in the fossil record) after a period of apparent absence.
More than 99% of all species that ever lived on Earth, amounting to over five billion species, are estimated to have died out. It is estimated that there are currently around 8.7 million species of eukaryote globally, and possibly many times more if microorganisms, like bacteria, are included. Notable extinct animal species include non-avian dinosaurs, saber-toothed cats, dodos, mammoths, ground sloths, thylacines, trilobites, and golden toads.
Through evolution, species arise through the process of speciation—where new varieties of organisms arise and thrive when they are able to find and exploit an ecological niche—and species become extinct when they are no longer able to survive in changing conditions or against superior competition. The relationship between animals and their ecological niches has been firmly established. A typical species becomes extinct within 10 million years of its first appearance, although some species, called living fossils, survive with little to no morphological change for hundreds of millions of years.
Mass extinctions are relatively rare events; however, isolated extinctions of species and clades are quite common, and are a natural part of the evolutionary process. Only recently have extinctions been recorded and scientists have become alarmed at the current high rate of extinctions. Most species that become extinct are never scientifically documented. Some scientists estimate that up to half of presently existing plant and animal
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What term describes the sequential appearance and disappearance of species in a community over time after a severe disturbance?
A. succession
B. generational replacement
C. pattern
D. suppression
Answer:
|
|
sciq-4056
|
multiple_choice
|
What do scientists use to help explain objects or systems in simpler ways?
|
[
"models",
"theories",
"measurements",
"plants"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
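A minimal sketch of the feasibility structure described above, assuming the standard axioms that a knowledge space contains the empty state and the full domain and is closed under union; the toy domain and states are invented for illustration.

```python
from itertools import combinations

# K is a family of feasible knowledge states over the domain Q.
# Check the knowledge-space axioms: empty state, full domain, union-closure.
def is_knowledge_space(Q, K):
    if frozenset() not in K or frozenset(Q) not in K:
        return False
    return all((a | b) in K for a, b in combinations(K, 2))

Q = frozenset({"counting", "addition", "multiplication"})
K = {
    frozenset(),
    frozenset({"counting"}),
    frozenset({"counting", "addition"}),
    Q,
}
print(is_knowledge_space(Q, K))  # True: every union of feasible states is feasible
```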
Document 2:::
Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary educ
Document 3:::
The mathematical sciences are a group of areas of study that includes, in addition to mathematics, those academic disciplines that are primarily mathematical in nature but may not be universally considered subfields of mathematics proper.
Statistics, for example, is mathematical in its methods but grew out of bureaucratic and scientific observations, which merged with inverse probability and then grew through applications in some areas of physics, biometrics, and the social sciences to become its own separate, though closely allied, field. Theoretical astronomy, theoretical physics, theoretical and applied mechanics, continuum mechanics, mathematical chemistry, actuarial science, computer science, computational science, data science, operations research, quantitative biology, control theory, econometrics, geophysics and mathematical geosciences are likewise other fields often considered part of the mathematical sciences.
Some institutions offer degrees in mathematical sciences (e.g. the United States Military Academy, Stanford University, and University of Khartoum) or applied mathematical sciences (for example, the University of Rhode Island).
See also
Document 4:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind – neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases, which has aided the overall understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of the combination of living organisms and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually with a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do scientists use to help explain objects or systems in simpler ways?
A. models
B. theories
C. measurements
D. plants
Answer:
|
|
sciq-6969
|
multiple_choice
|
What causes light to refract?
|
[
"bending",
"blinking",
"speeding up",
"slowing down"
] |
D
|
Relevant Documents:
Document 0:::
Treatise on Light: In Which Are Explained the Causes of That Which Occurs in Reflection & Refraction (: Où Sont Expliquées les Causes de ce qui Luy Arrive Dans la Reflexion & Dans la Refraction) is a book written by Dutch polymath Christiaan Huygens that was published in French in 1690. The book describes Huygens's conception of the nature of light propagation which makes it possible to explain the laws of geometrical optics shown in Descartes's Dioptrique, which Huygens aimed to replace.
Unlike Newton's corpuscular theory, which was presented in the Opticks, Huygens conceived of light as an irregular series of shock waves which proceeds with very great, but finite, velocity through the aether, similar to sound waves. Moreover, he proposed that each point of a wavefront is itself the origin of a secondary spherical wave, a principle known today as the Huygens–Fresnel principle. The book is considered a pioneering work of theoretical and mathematical physics and the first mechanistic account of an unobservable physical phenomenon.
Overview
Huygens worked on the mathematics of light rays and the properties of refraction in his work Dioptrica, which began in 1652 but remained unpublished, and which predated his lens grinding work. In 1672, the problem of the strange refraction of the Iceland crystal created a puzzle regarding the physics of refraction that Huygens wanted to solve. Huygens eventually was able to solve this problem by means of elliptical waves in 1677 and confirmed his theory by experiments mostly after critical reactions in 1679.
His explanation of birefringence was based on three hypotheses: (1) There are inside the crystal two media in which light waves proceed, (2) one medium behaves as ordinary ether and carries the normally refracted ray, and (3) the velocity of the waves in the other medium is dependent on direction, so that the waves do not expand in spherical form, but rather as ellipsoids of revolution; this second medium carries the abnorm
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
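For reference, a short derivation (assuming an ideal gas and no heat exchange, which is what "adiabatic" means here) confirms the expected answer "decreases":

\[
\delta Q = 0 \;\Rightarrow\; n C_V\, dT = -p\, dV,
\]

so an expansion (\(dV > 0\)) forces \(dT < 0\); equivalently, \(T V^{\gamma - 1} = \text{const}\) with \(\gamma > 1\).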
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
Rectilinear propagation describes the tendency of electromagnetic waves (light) to travel in a straight line. Light does not deviate when travelling through a homogeneous medium, which has the same refractive index throughout; otherwise, light suffers refraction.
Even though a wave front may be bent (e.g. the waves created by a rock hitting a pond), the individual rays are moving in straight lines. Rectilinear propagation was discovered by Pierre de Fermat.
Proof
Take three pieces of cardboard, A, B and C, of the same size. Make a pinhole at the centre of each. Place the pieces upright so that the holes in A, B and C lie in the same straight line, in that order. Place a luminous source such as a candle near cardboard A and look through the hole in cardboard C: the candle flame is visible. This implies that the light rays travel along the straight line ABC, which is why the flame can be seen. When one of the pieces is slightly displaced, the candle light is no longer visible; the light emitted by the candle cannot bend to reach the observer's eye. This demonstrates that light travels along a straight path, i.e. the rectilinear propagation of light.
See also
Diffraction
Plane wave
Document 4:::
Optics is the branch of physics which involves the behavior and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics usually describes the behavior of visible, ultraviolet, and infrared light. Because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What causes light to refract?
A. bending
B. blinking
C. speeding up
D. slowing down
Answer:
|
|
sciq-6984
|
multiple_choice
|
What is the wheeled robot developed by NASA to explore the surface of Mars?
|
[
"mars robot",
"mars rover",
"mars driver",
"mars SUV"
] |
B
|
Relevant Documents:
Document 0:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 2:::
Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers.
There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM.
Terminology
History
Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu
Document 3:::
Vandana "Vandi" Verma is a space roboticist and chief engineer at NASA's Jet Propulsion Laboratory, known for driving the Mars rovers, notably Curiosity and Perseverance, using software including PLEXIL programming technology that she co-wrote and developed.
Biography
Verma was born and grew up partly in Halwara, India; her father was a pilot in the Indian Air Force. She gained her first qualification, a bachelor's degree in electrical engineering, at Punjab Engineering College in Chandigarh, India. She went on to gain a master's in robotics from Carnegie Mellon University (CMU), followed by a PhD in robotics from Carnegie Mellon in 2005, with a thesis entitled Tractable Particle Filters for Robot Fault Diagnosis.
At CMU, she developed an interest in robotics in unknown environments. She was involved in a 3-year astrobiology experimental station in the Atacama desert. The desert was chosen because of the similarities between its hostile environment and the surface of Mars. She won a competition to create a robot to navigate a maze and collect balloons. She tested robotic technologies in the Arctic and Antarctic.
Between studies, she gained her pilot's license.
Her first post-graduate job was at Ames Research Center as a research scientist.
In 2006, Verma co-wrote PLEXIL, an open source programming language now used in automation technologies such as the NASA K10 rover, Mars Curiosity rover's percussion drill, the International Space Station, the Deep Space Habitat and Habitat Demonstration Unit, the Edison Demonstration of Smallsat Networks, LADEE, and Autonomy Operating System (AOS).
In 2007 Verma joined NASA's Jet Propulsion Laboratory (JPL) with a special interest in robotics and flight software and became part of the Mars rover team in 2008. As of 2019, she leads JPL's Autonomous Systems, Mobility and Robotic Systems group.
Verma has written academic papers in her field on subjects such as the AEGIS (Autonomous Exploration for Gathering Increased Science)
Document 4:::
Mission specialist (MS) was a specific position held by certain NASA astronauts who were tasked with conducting a range of scientific, medical, or engineering experiments during a spaceflight mission. These specialists were usually assigned to a specific field of expertise that was related to the goals of the particular mission they were assigned to.
Mission specialists were highly trained individuals who underwent extensive training in preparation for their missions. They were required to have a broad range of skills, including knowledge of science and engineering, as well as experience in operating complex equipment in a zero-gravity environment.
During a mission, mission specialists were responsible for conducting experiments, operating equipment, and performing spacewalks to repair or maintain equipment outside the spacecraft. They also played a critical role in ensuring the safety of the crew by monitoring the spacecraft's systems and responding to emergencies as needed.
The role of mission specialist was an important one in the Space Shuttle program, as they were instrumental in the success of the program's many scientific and engineering missions. Many of the advances in science and technology that were made during this period were made possible by the hard work and dedication of the mission specialists who worked tirelessly to push the boundaries of what was possible in space.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the wheeled robot developed by NASA to explore the surface of Mars?
A. mars robot
B. mars rover
C. mars driver
D. mars SUV
Answer:
|
|
sciq-594
|
multiple_choice
|
The complete hydrolysis of starch yields what?
|
[
"glutamate",
"insulin",
"glucose",
"sucrose"
] |
C
|
Relevant Documents:
Document 0:::
Modified starches, also called starch derivatives, are prepared by physically, enzymatically, or chemically treating native starch to change its properties. Modified starches are used in practically all starch applications, such as in food products as a thickening agent, stabilizer or emulsifier; in pharmaceuticals as a disintegrant; or as a binder in coated paper. They are also used in many other applications.
Starches are modified to enhance their performance in different applications. Starches may be modified to increase their stability against excessive heat, acid, shear, time, cooling, or freezing; to change their texture; to decrease or increase their viscosity; to lengthen or shorten gelatinization time; or to increase their visco-stability.
Modification methods
Acid-treated starch (INS 1401), also called thin boiling starch, is prepared by treating starch or starch granules with inorganic acids, e.g. hydrochloric acid breaking down the starch molecule and thus reducing the viscosity.
Other treatments producing modified starch (with different INS and E-numbers) are:
dextrin (INS 1400), roasted starch with hydrochloric acid
alkaline-modified starch (INS 1402) with sodium hydroxide or potassium hydroxide
bleached starch (INS 1403) with hydrogen peroxide
oxidized starch (INS 1404, E1404) with sodium hypochlorite, breaking down viscosity
enzyme-treated starch (INS 1405), maltodextrin, cyclodextrin
monostarch phosphate (INS 1410, E1410) with phosphorous acid or the salts sodium phosphate, potassium phosphate, or sodium triphosphate to reduce retrogradation
distarch phosphate (INS 1412, E1412) by esterification with for example sodium trimetaphosphate, crosslinked starch modifying the rheology, the texture
acetylated starch (INS 1420, E1420) esterification with acetic anhydride
hydroxypropylated starch (INS 1440, E1440), starch ether, with propylene oxide, increasing viscosity stability
hydroxyethyl starch, with ethylene oxide
starch sodium octenyl su
Document 1:::
Starch production is the isolation of starch from plant sources. It takes place in starch plants. The starch industry is a part of food processing which uses starch as a starting material for the production of starch derivatives, hydrolysates, and dextrins.
At first, the raw material for the preparation of starch was wheat. Currently, the main starch sources are:
maize (in America, China and Europe) – 70%,
potatoes (in Europe) – 12%,
wheat - 8% (in Europe and Australia),
tapioca - 9% (South East Asia and South America),
rice, sorghum and other - 1%.
Potato starch production
The production of potato starch comprises steps such as the delivery and unloading of potatoes, cleaning, rasping of tubers, potato juice separation, starch extraction, starch milk refining, dewatering of the refined starch milk, and starch drying.
The potato starch production supply chain varies significantly by region. For example, potato starch in Europe is produced from potatoes grown specifically for this purpose. However, in the US, potatoes are not grown for starch production and manufacturers must source raw material from food processor waste streams. The characteristics of these waste streams can vary significantly and require further processing by the US potato starch manufacturer to ensure the end-product functionality and specifications are acceptable.
Delivery and unloading potatoes
Potatoes are delivered to the starch plants via road or rail transport. Unloading of the potatoes can be done in two ways:
dry - using elevators and tippers,
wet - using strong jet of water.
Cleaning
Coarsely cleaning of potatoes takes place during the transport of potatoes to the scrubber by channel. In addition, before the scrubber, straw and stones separators are installed. The main cleaning is conducted in scrubber (different kinds of high specialized machines are used). The remaining stones, sludge and light wastes are removed at this step. Water used for washing is then purified and recycled back into th
Document 2:::
Amylopectin is a water-insoluble polysaccharide and highly branched polymer of α-glucose units found in plants. It is one of the two components of starch, the other being amylose.
Plants store starch within specialized organelles called amyloplasts. To generate energy, the plant hydrolyzes the starch, releasing the glucose subunits. Humans and other animals that eat plant foods also use amylase, an enzyme that assists in breaking down amylopectin, to initiate the hydrolysis of starch.
Starch is made of about 70–80% amylopectin by weight, though it varies depending on the source. For example, it ranges from lower percent content in long-grain rice, amylomaize, and russet potatoes to 100% in glutinous rice, waxy potato starch, and waxy corn. Amylopectin is highly branched, being formed of 2,000 to 200,000 glucose units. Its inner chains are formed of 20–24 glucose subunits.
Dissolved amylopectin starch has a lower tendency of retrogradation (a partial recrystallization after cooking—a part of the staling process) during storage and cooling. For this main reason, the waxy starches are used in different applications mainly as a thickening agent or stabilizer.
Structure
Amylopectin is a key component in the crystallization of starch’s final configuration, accounting for 70-80% of the final mass. Composed of α-glucose, it is formed in plants as a primary means of energy storage.
Amylopectin bears a straight/linear chain along with a number of side chains which may be branched further. Glucose units are linked in a linear way with α(1→4) Glycosidic bonds. Branching usually occurs at intervals of 25 residues. At the places of origin of a side chain, the branching that takes place bears an α(1→6) glycosidic bond, resulting in a soluble molecule that can be quickly degraded as it has many end points onto which enzymes can attach. Wolform and Thompson (1956) have also reported α(1→3)linkages in case of Amylopectin. Amylopectin
Document 3:::
In enzymology, a starch synthase (EC 2.4.1.21) is an enzyme that catalyzes the chemical reaction
ADP-glucose + (1,4-alpha-D-glucosyl)n ⇌ ADP + (1,4-alpha-D-glucosyl)n+1
Thus, the two substrates of this enzyme are ADP-glucose and a chain of D-glucose residues joined by 1,4-alpha-glycosidic bonds, whereas its two products are ADP and an elongated chain of glucose residues. Plants use these enzymes in the biosynthesis of starch.
This enzyme belongs to the family of hexosyltransferases, specifically the glycosyltransferases. The systematic name of this enzyme class is ADP-glucose:1,4-alpha-D-glucan 4-alpha-D-glucosyltransferase. Other names in common use include ADP-glucose-starch glucosyltransferase, adenosine diphosphate glucose-starch glucosyltransferase, adenosine diphosphoglucose-starch glucosyltransferase, ADP-glucose starch synthase, ADP-glucose synthase, ADP-glucose transglucosylase, ADP-glucose-starch glucosyltransferase, ADPG starch synthetase, and ADPG-starch glucosyltransferase
Five isoforms seem to be present: GBSS, which is linked to amylose synthesis, and SS1, SS2, SS3 and SS4, which have different roles in amylopectin synthesis. New work implies that SS4 is important for granule initiation (Szydlowski et al., 2011).
Structural studies
As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes , , , and .
Document 4:::
Resistant starch (RS) is starch, including its degradation products, that escapes from digestion in the small intestine of healthy individuals. Resistant starch occurs naturally in foods, but it can also be added as part of dried raw foods, or used as an additive in manufactured foods.
Some types of resistant starch (RS1, RS2 and RS3) are fermented by the large intestinal microbiota, conferring benefits to human health through the production of short-chain fatty acids, increased bacterial mass, and promotion of butyrate-producing bacteria.
Resistant starch has similar physiological effects as dietary fiber, behaving as a mild laxative and possibly causing flatulence.
Origin and history
The concept of resistant starch arose from research in the 1970s and is currently considered to be one of three starch types: rapidly digested starch, slowly digested starch and resistant starch, each of which may affect levels of blood glucose.
The European Commission-supported research eventually led to a definition of resistant starch.
Health effects
Resistant starch does not release glucose within the small intestine, but rather reaches the large intestine where it is consumed or fermented by colonic bacteria (gut microbiota). On a daily basis, human intestinal microbiota encounter more carbohydrates than any other dietary component. This includes resistant starch, non-starch polysaccharide fibers, oligosaccharides, and simple sugars which have significance in colon health.
The fermentation of resistant starch produces short-chain fatty acids, including acetate, propionate, and butyrate and increased bacterial cell mass. The short-chain fatty acids are produced in the large intestine where they are rapidly absorbed from the colon, then are metabolized in colonic epithelial cells, liver or other tissues. The fermentation of resistant starch produces more butyrate than other types of dietary fibers.
Studies have shown that resistant starch supplementation was well tolerated
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The complete hydrolysis of starch yields what?
A. glutamate
B. insulin
C. glucose
D. sucrose
Answer:
|
|
sciq-6655
|
multiple_choice
|
Greater reflection off what atmospheric layer allows AM radio waves to travel even farther at night than they can during the day?
|
[
"stratosphere",
"exosphere",
"troposphere",
"ionosphere"
] |
D
|
Relevant Documents:
Document 0:::
Ionospheric absorption (ISAB) is the scientific name for absorption occurring as a result of the interaction between various types of electromagnetic waves and the free electrons in the ionosphere, which can interfere with radio transmissions.
Description
Ionosphere absorption is of critical importance when radio networks, telecommunication systems or interlinked radio systems are being planned, particularly when trying to determine propagation conditions.
The ionosphere can be described as an area of the atmosphere in which radio waves on shortwave bands are refracted or reflected back to Earth. As a result of this reflection, which is often key in the long-distance propagation of radio waves, some of the shortwave signal strength is decreased. In this regard, ISAB is the primary limiting factor in radio propagation.
Attenuation mechanics
ISAB is only a factor during the part of the day when radio signals travel through the portion of the ionosphere facing the Sun. The solar wind and radiation cause the ionosphere to become charged with electrons in the first place. At night, the atmosphere becomes drained of its charge, and radio signals can go much farther with less loss of signal. In particular, low frequency signals that would be attenuated to nothing during the day will be received much farther away at night.
The specific amount of attenuation can be derived as a function of the inverse-square law. The lower the frequency, the greater the attenuation.
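As a rough illustration of that frequency dependence, here is a toy sketch in Python, assuming the commonly cited 1/f^2 scaling for non-deviative ionospheric absorption (the constant k is arbitrary, not a physical value):

def relative_absorption(freq_hz, k=1.0):
    # Toy model: non-deviative ionospheric absorption scales roughly as 1/f^2.
    return k / freq_hz ** 2

# Halving the frequency quadruples the relative loss:
print(relative_absorption(5e6) / relative_absorption(10e6))  # 4.0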
Relative ionospheric absorption can be measured using a riometer.
See also
Radio horizon
Sudden Ionospheric Disturbance
Document 1:::
The radio window is a range of frequencies of electromagnetic radiation that penetrate the Earth's atmosphere. Typically, the lower limit of the radio window's range has a value of about 10 MHz (λ ≈ 30 m); the best upper limit achievable from optimal terrestrial observation sites is equal to approximately 1 THz (λ ≈ 0.3 mm).
It plays an important role in astronomy; up until the 1940s, astronomers could only use the visible and near infrared spectra for their measurements and observations. With the development of radio telescopes, the radio window became more and more utilizable, leading to the development of radio astronomy that provided astrophysicists with valuable observational data.
Factors affecting lower and upper limits
The lower and upper limits of the radio window's range of frequencies are not fixed; they depend on a variety of factors.
Absorption of mid-IR
The upper limit is affected by the vibrational transitions of atmospheric molecules such as oxygen (O2), carbon dioxide (CO2), and water (H2O), whose energies are comparable to the energies of mid-infrared photons: these molecules largely absorb the mid-infrared radiation that heads towards Earth.
Ionosphere
The radio window's lower frequency limit is greatly affected by the ionospheric refraction of the radio waves whose frequencies are approximately below 30 MHz (λ > 10 m); radio waves with frequencies below the limit of 10 MHz (λ > 30 m) are reflected back into space by the ionosphere. The lower limit is proportional to the density of the ionosphere's free electrons and coincides with the plasma frequency:
ν_p ≈ 8.98 √(n_e)

where ν_p is the plasma frequency in Hz and n_e the electron density in electrons per cubic meter. Since it is highly dependent on sunlight, the value of n_e changes significantly from daytime to nighttime, usually being lower during the night, leading to a decrease of the radio window's lower limit, and higher during the day, causing an increase of the radio window's lower frequency end. However, thi
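A minimal numeric sketch of that relation, assuming the approximate coefficient 8.98 given above; the electron densities are illustrative order-of-magnitude values, not measurements:

import math

def plasma_frequency_hz(n_e_per_m3):
    # f_p ≈ 8.98 * sqrt(n_e), with n_e in electrons per cubic meter.
    return 8.98 * math.sqrt(n_e_per_m3)

print(plasma_frequency_hz(1e12) / 1e6)  # daytime F-layer ~1e12 m^-3 -> ≈ 9 MHz
print(plasma_frequency_hz(1e11) / 1e6)  # nighttime ~1e11 m^-3 -> ≈ 2.8 MHz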
Document 2:::
In infrared astronomy, the M band is an atmospheric transmission window centred on 4.7 micrometres (in the mid-infrared).
Document 3:::
Atmospheric temperature is a measure of temperature at different levels of the Earth's atmosphere. It is governed by many factors, including incoming solar radiation, humidity and altitude. When discussing surface air temperature, the annual atmospheric temperature range at any geographical location depends largely upon the type of biome, as measured by the Köppen climate classification
Temperature versus altitude
Temperature varies greatly at different heights relative to Earth's surface and this variation in temperature characterizes the four layers that exist in the atmosphere. These layers include the troposphere, stratosphere, mesosphere, and thermosphere.
The troposphere is the lowest of the four layers, extending from the surface of the Earth to about 12 km into the atmosphere, where the tropopause (the boundary between the troposphere and stratosphere) is located. The width of the troposphere can vary depending on latitude; for example, the troposphere is thicker in the tropics (about 18 km) because the tropics are generally warmer, and thinner at the poles (about 6 km) because the poles are colder. Temperatures in the atmosphere decrease with height at an average rate of 6.5°C (11.7°F) per kilometer. Because the troposphere experiences its warmest temperatures closer to Earth's surface, there is great vertical movement of heat and water vapour, causing turbulence. This turbulence, in conjunction with the presence of water vapour, is the reason that weather occurs within the troposphere.
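A small Python sketch of that average lapse rate (a linear approximation valid only within the troposphere; the 15 °C surface value is an assumption taken from the standard atmosphere, not from this article):

def tropospheric_temperature_c(surface_temp_c, altitude_km, lapse_rate_c_per_km=6.5):
    # Linear decrease with height, valid only up to the tropopause.
    return surface_temp_c - lapse_rate_c_per_km * altitude_km

print(tropospheric_temperature_c(15.0, 11.0))  # ≈ -56.5 °C near the tropopause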
Following the tropopause is the stratosphere. This layer extends from the tropopause to the stratopause, which is located at an altitude of about 50 km. Temperatures remain constant with height from the tropopause to an altitude of about 20 km, after which they start to increase with height. This is referred to as an inversion, and it is because of this inversion that the stratosphere is not characterised as turbulent. The stratosphere receives its warmth from the sun and the ozone layer which ab
Document 4:::
In infrared astronomy, the L band is an atmospheric transmission window centred on 3.5 micrometres (in the mid-infrared).
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Greater reflection off what atmospheric layer allows AM radio waves to travel even farther at night than they can during the day?
A. stratosphere
B. exosphere
C. troposphere
D. ionosphere
Answer:
|
|
ai2_arc-195
|
multiple_choice
|
Why can steam be used to cook food?
|
[
"Steam does work on objects.",
"Steam is a form of water.",
"Steam can transfer heat to cooler objects.",
"Steam is able to move through small spaces."
] |
C
|
Relevant Documents:
Document 0:::
Steam Infusion is a direct-contact heating process in which steam condenses on the surface of a pumpable food product. Its primary use is for the gentle and rapid heating of a variety of food ingredients and products including milk, cream, soymilk, ketchup, soups and sauces.
Unlike steam injection and traditional vesselled steam heating; the steam infusion process surrounds the liquid food product with steam as opposed to passing steam through the liquid.
Steam Infusion allows a food product to be cooked, mixed and pumped within a single unit, often removing the need for multiple stages of processing.
History
Steam infusion was first used in pasteurization and has since been developed for further liquid heating applications.
First generation
In the 1960s APV PLC launched the first steam infusion system under the Palarisator brand name. This involves a 2-stage process for steam infusion whereby the liquid is cascaded into a large pressurized steam chamber and is sterilized when falling as film or droplets through the chamber. The liquid is then condensed at the chilled bottom of the chamber.
Second generation
The Steam Infusion process was first developed in 2000 by Pursuit Dynamics PLC as a method for marine propulsion. The process has since been developed to be used for applications in brewing, food and beverages, public health and safety, bioenergy, industrial licensing, and waste treatment worldwide. The process creates an environment of vaporised product surrounded by high-energy steam. The supersonic steam flow entrains and vaporises the process flow to form a multiphase flow, which heats the suspended particles by surface conduction and condensation. The condensation of the steam causes the process flow to return to a liquid state. This causes rapid and uniform heating over the unit, making it applicable to industrial cooking processes. This process has been use
Document 1:::
A steam mill is a type of grinding mill using a stationary steam engine to power its mechanism.
Albion Flour Mills, the first steam mill in London, from around 1790 (referenced in the hymn "And did those feet in ancient time")
Aurora Steam Grist Mill, a historic grist mill located in Aurora, Cayuga County, New York, United States
Cincinnati Steam Paper Mill, the first steam-powered mill in Cincinnati, Ohio, United States
Sutherland Steam Mill Museum, a restored steam woodworking mill from the 1890s located in Denmark, Nova Scotia, Canada
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
Steam is a substance containing water in the gas phase, and sometimes also an aerosol of liquid water droplets, or air. This may occur due to evaporation or due to boiling, where heat is applied until water reaches the enthalpy of vaporization. Steam that is saturated or superheated (water vapor) is invisible; however, wet steam, a visible mist or aerosol of water droplets, is often referred to as "steam".
Water increases in volume by 1,700 times at standard temperature and pressure; this change in volume can be converted into mechanical work by steam engines such as reciprocating piston type engines and steam turbines, which are a sub-group of steam engines. Piston type steam engines played a central role in the Industrial Revolution and modern steam turbines are used to generate more than 80% of the world's electricity. If liquid water comes in contact with a very hot surface or depressurizes quickly below its vapor pressure, it can create a steam explosion.
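As a rough cross-check of the quoted ~1,700× expansion, one can take the ratio of textbook densities of liquid water and saturated steam at 100 °C and 1 atm (both density values are assumptions, not taken from this article):

rho_liquid_water = 958.4  # kg/m^3, liquid water at 100 °C (assumed textbook value)
rho_sat_steam = 0.598     # kg/m^3, saturated steam at 100 °C, 1 atm (assumed)

print(rho_liquid_water / rho_sat_steam)  # ≈ 1600x, the same order as the ~1,700x above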
Types of steam and conversions
Steam is traditionally created by heating a boiler via burning coal and other fuels, but it is also possible to create steam with solar energy. Water vapor that includes water droplets is described as wet steam. As wet steam is heated further, the droplets evaporate, and at a high enough temperature (which depends on the pressure) all of the water evaporates and the system is in vapor–liquid equilibrium. When steam has reached this equilibrium point, it is referred to as saturated steam.
Superheated steam or live steam is steam at a temperature higher than its boiling point for the pressure, which only occurs when all liquid water has evaporated or has been removed from the system.
Steam tables contain thermodynamic data for water/saturated steam and are often used by engineers and scientists in design and operation of equipment where thermodynamic cycles involving steam are used. Additionally, thermodynamic phase diagrams for water/steam, such as a temperature-entropy dia
Document 4:::
The term boiler may refer to an appliance for heating water. Applications include water heating and central heating.
Operation
The boiler heats water to a temperature controlled by a thermostat. The water then flows (either by natural circulation or by a pump) to radiators in the rooms which are to be heated. Water also flows through a coil in the hot water tank to heat a separate mass of water for bathing, etc.
Condensing boiler
Back boiler
A back boiler is a device which is fitted to a residential heating stove or open fireplace to enable it to provide both room heat and domestic hot water or central heating.
See also
Electric water boiler
Heat-only boiler station
Multi-fuel stove
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Why can steam be used to cook food?
A. Steam does work on objects.
B. Steam is a form of water.
C. Steam can transfer heat to cooler objects.
D. Steam is able to move through small spaces.
Answer:
|
|
ai2_arc-1096
|
multiple_choice
|
Why are coal, oil, and natural gas called fossil fuels?
|
[
"They were once fossils.",
"They were formed in prehistoric times.",
"They are used to heat our homes and businesses.",
"They formed from the remains of prehistoric plants and animals."
] |
D
|
Relevant Documents:
Document 0:::
Phyllocladane is a tricyclic diterpane which is generally found in gymnosperm resins. It has a formula of C20H34 and a molecular weight of 274.4840. As a biomarker, it can be used to learn about the gymnosperm input into a hydrocarbon deposit, and about the age of the deposit in general. It indicates a terrigenous origin of the source rock. Diterpanes such as phyllocladane are found in source rocks as early as the middle and late Devonian periods, which indicates that any rock containing them can be no older than approximately 360 Ma. Phyllocladane is commonly found in lignite, and like other resinites derived from gymnosperms, is naturally enriched in 13C. This enrichment is a result of the enzymatic pathways used to synthesize the compound.
The compound can be identified by GC-MS. A peak of m/z 123 is indicative of tricyclic diterpenoids in general, and phyllocladane in particular is further characterized by strong peaks at m/z 231 and m/z 189. Presence of phyllocladane and its relative abundance to other tricyclic diterpanes can be used to differentiate between various oil fields.
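A minimal sketch of that screening logic in Python, using a hypothetical helper and a made-up peak list; real workflows would also compare retention times and relative intensities:

def looks_like_phyllocladane(peaks, tol=0.5):
    # peaks: iterable of (m/z, intensity) pairs from a GC-MS spectrum.
    # Requires the general tricyclic-diterpenoid ion at m/z 123 plus the
    # strong phyllocladane fragments at m/z 231 and 189 noted above.
    def has(target):
        return any(abs(mz - target) <= tol for mz, _ in peaks)
    return has(123) and has(231) and has(189)

spectrum = [(123.1, 8.0e4), (189.2, 5.5e4), (231.2, 6.0e4), (274.3, 1.2e4)]
print(looks_like_phyllocladane(spectrum))  # True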
Document 1:::
Biotic material or biological derived material is any material that originates from living organisms. Most such materials contain carbon and are capable of decay.
The earliest life on Earth arose at least 3.5 billion years ago. Earlier physical evidence of life includes graphite, a biogenic substance, in 3.7 billion-year-old metasedimentary rocks discovered in southwestern Greenland, as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. Earth's biodiversity has expanded continually except when interrupted by mass extinctions. Although scholars estimate that over 99 percent of all species of life (over five billion) that ever lived on Earth are extinct, there are still an estimated 10–14 million extant species, of which about 1.2 million have been documented and over 86% have not yet been described.
Examples of biotic materials are wood, straw, humus, manure, bark, crude oil, cotton, spider silk, chitin, fibrin, and bone.
The use of biotic materials, and processed biotic materials (bio-based material) as alternative natural materials, over synthetics is popular with those who are environmentally conscious because such materials are usually biodegradable, renewable, and the processing is commonly understood and has minimal environmental impact. However, not all biotic materials are used in an environmentally friendly way, such as those that require high levels of processing, are harvested unsustainably, or are used to produce carbon emissions.
When the source of the recently living material has little importance to the product produced, such as in the production of biofuels, biotic material is simply called biomass. Many fuel sources may have biological origins, and may be divided roughly into fossil fuels and biofuels.
In soil science, biotic material is often referred to as organic matter. Biotic materials in soil include glomalin, Dopplerite and humic acid. Some biotic material may not be considered to be organic matte
Document 2:::
Butane () or n-butane is an alkane with the formula C4H10. Butane is a highly flammable, colorless, easily liquefied gas that quickly vaporizes at room temperature and pressure. The name butane comes from the root but- (from butyric acid, named after the Greek word for butter) and the suffix -ane. It was discovered in crude petroleum in 1864 by Edmund Ronalds, who was the first to describe its properties, and commercialized by Walter O. Snelling in early 1910s.
Butane is one of a group of liquefied petroleum gases (LP gases). The others include propane, propylene, butadiene, butylene, isobutylene, and mixtures thereof. Butane burns more cleanly than both gasoline and coal.
History
The first synthesis of butane was accidentally achieved by British chemist Edward Frankland in 1849 from ethyl iodide and zinc, but he had not realized that the ethyl radical dimerized and misidentified the substance.
The proper discoverer of butane called it "hydride of butyl", but already in the 1860s more names were in use: "butyl hydride", "hydride of tetryl" and "tetryl hydride", "diethyl" or "ethyl ethylide", and others. August Wilhelm von Hofmann in his 1866 systemic nomenclature proposed the name "quartane", and the modern name was introduced to English from German around 1874.
Butane did not have much practical use until the 1910s, when W. Snelling identified butane and propane as components in gasoline and found that, if they were cooled, they could be stored in a volume-reduced liquified state in pressurized containers.
Density
The density of butane is highly dependent on temperature and pressure in the reservoir. For example, the density of liquid butane is 571.8±1 kg/m3 (for pressures up to 2 MPa and temperature 27±0.2 °C), while at −13±0.2 °C it increases to 625.5±0.7 kg/m3.
Isomers
Rotation about the central C−C bond produces two different conformations (trans and gauche) for n-butane.
Reactions
When oxyg
Document 3:::
A biogenic substance is a product made by or of life forms. While the term originally was specific to metabolite compounds that had toxic effects on other organisms, it has developed to encompass any constituents, secretions, and metabolites of plants or animals. In the context of molecular biology, biogenic substances are referred to as biomolecules. They are generally isolated and measured through the use of chromatography and mass spectrometry techniques. Additionally, the transformation and exchange of biogenic substances can be modelled in the environment, particularly their transport in waterways.
The observation and measurement of biogenic substances is notably important in the fields of geology and biochemistry. A large proportion of isoprenoids and fatty acids in geological sediments are derived from plants and chlorophyll, and can be found in samples extending back to the Precambrian. These biogenic substances are capable of withstanding the diagenesis process in sediment, but may also be transformed into other materials. This makes them useful as biomarkers for geologists to verify the age, origin and degradation processes of different rocks.
Biogenic substances have been studied as part of marine biochemistry since the 1960s, which has involved investigating their production, transport, and transformation in the water, and how they may be used in industrial applications. A large fraction of biogenic compounds in the marine environment are produced by micro and macro algae, including cyanobacteria. Due to their antimicrobial properties they are currently the subject of research in both industrial projects, such as for anti-fouling paints, or in medicine.
History of discovery and classification
During a meeting of the New York Academy of Sciences' Section of Geology and Mineralogy in 1903, geologist Amadeus William Grabau proposed a new rock classification system in his paper 'Discussion of and Suggestions Regarding a New Classification of Rocks'. Within
Document 4:::
Glycerol dialkyl glycerol tetraether lipids (GDGTs) are a class of membrane lipids synthesized by archaea and some bacteria, making them useful biomarkers for these organisms in the geological record. Their presence, structure, and relative abundances in natural materials can be useful as proxies for temperature, terrestrial organic matter input, and soil pH for past periods in Earth history. Some structural forms of GDGT form the basis for the TEX86 paleothermometer. Isoprenoid GDGTs, now known to be synthesized by many archaeal classes, were first discovered in extremophilic archaea cultures. Branched GDGTs, likely synthesized by acidobacteriota, were first discovered in a natural Dutch peat sample in 2000.
Chemical structure
The two primary structural classes of GDGTs are isoprenoid (isoGDGT) and branched (brGDGT), which refer to differences in the carbon skeleton structures. Isoprenoid compounds are numbered -0 through -8, with the numeral representing the number of cyclopentane rings present within the carbon skeleton structure. The exception is crenarchaeol, a Nitrososphaerota product with one cyclohexane ring moiety in addition to four cyclopentane rings. Branched GDGTs have zero, one, or two cyclopentane moieties and are further classified based the positioning of their branches. They are numbered with roman numerals and letters, with -I indicating structures with four modifications (i.e. either a branch or a cyclopentane moiety), -II indicating structures with five modifications, and -III indicating structures with six modifications. The suffix a after the roman numeral means one of its modifications is a cyclopentane moiety; b means two modifications are cyclopentane moieties. For example, GDGT-IIb is a compound with three branches and two cyclopentane moieties (a total of five modifications). GDGTs form as monolayers and with ether bonds to glycerol, as opposed to as bilayers and with ester bonds as is the case in eukaryotes and most bacteria.
Biologi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Why are coal, oil, and natural gas called fossil fuels?
A. They were once fossils.
B. They were formed in prehistoric times.
C. They are used to heat our homes and businesses.
D. They formed from the remains of prehistoric plants and animals.
Answer:
|
|
sciq-7164
|
multiple_choice
|
The molecular formula is still C4H10, which is the same formula as?
|
[
"carbon hydroxide",
"propane",
"chlorine",
"butane"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In chemistry, the carbon-hydrogen bond ( bond) is a chemical bond between carbon and hydrogen atoms that can be found in many organic compounds. This bond is a covalent, single bond, meaning that carbon shares its outer valence electrons with up to four hydrogens. This completes both of their outer shells, making them stable.
Carbon–hydrogen bonds have a bond length of about 1.09 Å (1.09 × 10⁻¹⁰ m) and a bond energy of about 413 kJ/mol (see table below). Using Pauling's scale—C (2.55) and H (2.2)—the electronegativity difference between these two atoms is 0.35. Because of this small difference in electronegativities, the bond is generally regarded as being non-polar. In structural formulas of molecules, the hydrogen atoms are often omitted. Compound classes consisting solely of C–H and C–C bonds are alkanes, alkenes, alkynes, and aromatic hydrocarbons. Collectively they are known as hydrocarbons.
In October 2016, astronomers reported that the very basic chemical ingredients of life—the carbon-hydrogen molecule (CH, or methylidyne radical), the carbon-hydrogen positive ion (CH+) and the carbon ion (C+)—are the result, in large part, of ultraviolet light from stars, rather than in other ways, such as the result of turbulent events related to supernovae and young stars, as thought earlier.
Bond length
The length of the carbon-hydrogen bond varies slightly with the hybridisation of the carbon atom. A bond between a hydrogen atom and an sp2 hybridised carbon atom is about 0.6% shorter than between hydrogen and sp3 hybridised carbon. A bond between hydrogen and sp hybridised carbon is shorter still, about 3% shorter than sp3 C-H. This trend is illustrated by the molecular geometry of ethane, ethylene and acetylene.
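Taking the ~1.09 Å sp3 value as a baseline, the percentages above translate to roughly the following lengths (a quick arithmetic sketch, not measured values):

sp3_length = 1.09                # Å, C-H bond at an sp3 carbon (baseline from the text)
print(sp3_length * (1 - 0.006))  # sp2: ~0.6% shorter -> ≈ 1.083 Å
print(sp3_length * (1 - 0.03))   # sp:  ~3% shorter  -> ≈ 1.057 Å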
Reactions
The C−H bond in general is very strong, so it is relatively unreactive. In several compound classes, collectively called carbon acids, the C−H bond can be sufficiently acidic for proton removal. Unactivated C−H bonds are found in alkanes and are no
Document 2:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
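Knowledge spaces are standardly defined as families of states that contain the empty state and the full domain and are closed under union (with further accessibility assumptions yielding an antimatroid, as noted above). The minimal Python sketch below checks these properties on a hypothetical three-skill domain; the particular family of feasible states is an invented toy example, not data from any real tutoring system.

```python
from itertools import combinations

# Hypothetical toy domain of three skills, where skill "b" presupposes "a".
domain = {"a", "b", "c"}
states = {frozenset(), frozenset("a"), frozenset("c"),
          frozenset("ac"), frozenset("ab"), frozenset("abc")}

def is_knowledge_space(states, domain):
    """Check that the family contains the empty state and the full domain
    and is closed under union -- the defining properties of a knowledge space."""
    if frozenset() not in states or frozenset(domain) not in states:
        return False
    return all(s | t in states for s, t in combinations(states, 2))

print(is_knowledge_space(states, domain))  # True for this toy family
```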
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
Document 3:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam, with no computer-based version. ETS administered the exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required the score as part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions on the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99th percentile) and 320 (1st percentile) respectively. The mean score for all test takers from July 2009 to July 2012 was 526, with a standard deviation of 95.
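Assuming, purely for illustration, that scaled scores are roughly normally distributed with the mean and standard deviation quoted above, a score can be converted to an approximate percentile as in the Python sketch below; ETS's actual percentile tables are empirical and will differ.

```python
from statistics import NormalDist

mean, sd = 526, 95  # July 2009 - July 2012 figures quoted above

def approx_percentile(score):
    """Approximate percentile under a normality assumption (illustrative only)."""
    return 100 * NormalDist(mu=mean, sigma=sd).cdf(score)

print(round(approx_percentile(760)))  # ~99, matching the reported 99th percentile
print(round(approx_percentile(320)))  # ~2, near the reported 1st percentile
```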
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 4:::
The SYBYL line notation or SLN is a specification for unambiguously describing the structure of chemical molecules using short ASCII strings. SLN differs from SMILES in several significant ways. SLN can specify molecules, molecular queries, and reactions in a single line notation, whereas SMILES handles these through language extensions. SLN supports relative stereochemistry: it can distinguish mixtures of enantiomers from pure molecules with pure but unresolved stereochemistry. In SMILES, aromaticity is considered to be a property of both atoms and bonds, whereas in SLN it is a property of bonds.
Description
Like SMILES, SLN is a linear language that describes molecules, which gives the two notations much in common despite their many differences; as a result, this description frequently compares SLN to SMILES and its extensions.
Attributes
Attributes, bracketed strings with additional data like [key1=value1, key2...], are a core feature of SLN. Attributes can be applied to atoms and bonds. Attributes not defined officially are available to users for private extensions.
When searching for molecules, comparison operators such as fcharge>-0.125 can be used in place of the usual equal sign. A ! preceding a key/value group inverts the result of the comparison.
Entire molecules or reactions can also have attributes; in that case the square brackets are changed to a pair of <> signs.
Atoms
Anything that starts with an uppercase letter identifies an atom in SLN. Hydrogens are not automatically added, but the single bonds with hydrogen can be abbreviated for organic compounds, resulting in CH4 instead of C(H)(H)(H)H for methane. The author argues that explicit hydrogens allow for more robust parsing.
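The snippet below simply collects a few illustrative SLN strings built from the constructs described in this section (the isotope attribute I= is defined in the next paragraph); the placement of the bracketed attribute directly after the atom symbol is our assumption, not verified against the full SLN specification.

```python
# Hypothetical SLN strings assembled from the constructs described in the text.
examples = {
    "methane, abbreviated hydrogens": "CH4",
    "methane, fully explicit hydrogens": "C(H)(H)(H)H",
    "carbon-13 methane (assumed attribute placement)": "C[I=13]H4",
}
for name, sln in examples.items():
    print(f"{name}: {sln}")
```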
Attributes defined for atoms include I= for isotope mass number, charge= for formal charge, fcharge for partial charge, s= for stereochemistry, and spin= for radicals (s, d, t respectively for singlet, doublet, triplet). A formal charge of charge=2 can be abbrevi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The molecular formula is still C4H10, which is the same formula as?
A. carbon hydroxide
B. propane
C. chlorine
D. butane
Answer:
|
|
sciq-11308
|
multiple_choice
|
Satellite and Schwann cells are the two types of what kind of cell found in the PNS?
|
[
"dendritic",
"epidermal",
"osteoclast",
"glial"
] |
D
|
Relevant Documents:
Document 0:::
Nervous tissue, also called neural tissue, is the main tissue component of the nervous system. The nervous system regulates and controls body functions and activity. It consists of two parts: the central nervous system (CNS) comprising the brain and spinal cord, and the peripheral nervous system (PNS) comprising the branching peripheral nerves. It is composed of neurons, also known as nerve cells, which receive and transmit impulses, and neuroglia, also known as glial cells or glia, which assist the propagation of the nerve impulse as well as provide nutrients to the neurons.
Nervous tissue is made up of different types of neurons, all of which have an axon. An axon is the long stem-like part of the cell that sends action potentials to the next cell. Bundles of axons make up the nerves in the PNS and tracts in the CNS.
Functions of the nervous system are sensory input, integration, control of muscles and glands, homeostasis, and mental activity.
Structure
Nervous tissue is composed of neurons, also called nerve cells, and neuroglial cells. Four types of neuroglia found in the CNS are astrocytes, microglial cells, ependymal cells, and oligodendrocytes. Two types of neuroglia found in the PNS are satellite glial cells and Schwann cells. In the central nervous system (CNS), the tissue types found are grey matter and white matter. The tissue is categorized by its neuronal and neuroglial components.
Components
Neurons are cells with specialized features that allow them to receive and facilitate nerve impulses, or action potentials, across their membrane to the next neuron. They possess a large cell body (soma), with cell projections called dendrites and an axon. Dendrites are thin, branching projections that receive electrochemical signaling (neurotransmitters) to create a change in voltage in the cell. Axons are long projections that carry the action potential away from the cell body toward the next neuron. The bulb-like end of the axon, called the axon terminal, i
Document 1:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
Document 2:::
These are timelines of brain development events in different animal species.
Mouse brain development timeline
Macaque brain development timeline
Human brain development timeline
See also
Encephalization quotient
Evolution of the brain
Neural development
External links
Translating Neurodevelopmental Time Across Mammalian Species
Vertebrate developmental biology
Embryology of nervous system
Developmental neuroscience
Document 3:::
An injury-induced stem-cell niche is a cellular microenvironment generated during tissue injury. These environments are triggered by injury and the local responses of support cells, and enable the possibility of repair by endogenous or transplanted neural stem cells. Such environments have been demonstrated in several injury models, most notably in the CNS. The term was coined by Jaime Imitola and Evan Y. Snyder when they demonstrated that astrocytes and endothelial cells during stroke are able to create a permissive environment for neural regeneration, one that is most striking for exogenous transplanted neural stem cells. Previous work by the Snyder laboratory has shown that the interactions between NSCs and local cells are reciprocal, underlying a beneficial bystander effect of neural stem cells without neural differentiation, once thought to be the only mechanism for the therapeutic benefit of stem cells in CNS injury.
More recently these findings have been reproduced and extended by others to different models of CNS injury, such as experimental autoimmune encephalomyelitis (EAE), a model of Multiple sclerosis, where transplanted neural stem cells persisted undifferentiated in perivascular areas, also called atypical stem cell niches, work that was done by Gianvito Martino and Stefano Pluchino.
Document 4:::
Endogenous regeneration in the brain is the ability of cells to engage in the repair and regeneration process. While the brain has a limited capacity for regeneration, endogenous neural stem cells, as well as numerous pro-regenerative molecules, can participate in replacing and repairing damaged or diseased neurons and glial cells. A further benefit of endogenous regeneration is that it avoids provoking an immune response from the host.
Neural stem cells in the adult brain
During the early development of a human, neural stem cells lie in the germinal layers of the developing brain, the ventricular and subventricular zones. In brain development, multipotent stem cells (those that can generate different types of cells) are present in these regions, and all of these cells differentiate into neural cell forms, such as neurons, oligodendrocytes and astrocytes. A long-held belief was that the multipotency of neural stem cells is lost in the adult human brain. However, it is only in vitro, using neurosphere and adherent monolayer cultures, that stem cells from the adult mammalian brain have shown multipotent capacity, while the in vivo evidence remains unconvincing. Therefore, the term "neural progenitor" is used instead of "stem cell" to describe the limited regenerative ability of adult brain stem cells.
Neural stem cells (NSCs) reside in the subventricular zone (SVZ) of the adult human brain and the dentate gyrus of the adult mammalian hippocampus. Newly formed neurons from these regions participate in learning, memory, olfaction and mood modulation. It has not been definitively determined whether or not these stem cells are multipotent. NSCs from the hippocampus of rodents, which can differentiate into dentate granule cells, have developed into many cell types when studied in culture. However, another in vivo study, using NSCs in the postnatal SVZ, showed that the stem cell is restricted to developing into different neuronal sub-type cells in the olfactory
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Satellite and Schwann cells are the two types of what kind of cell found in the PNS?
A. dendritic
B. epidermal
C. osteoclast
D. glial
Answer:
|
|
sciq-568
|
multiple_choice
|
Gas particles can move randomly in what directions?
|
[
"few directions",
"all directions",
"one direction",
"some directions"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing their chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 3:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 4:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591.
On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Gas particles can move randomly in what directions?
A. few directions
B. all directions
C. one direction
D. some directions
Answer:
|
|
scienceQA-8444
|
multiple_choice
|
What do these two changes have in common?
melting wax
baking an apple pie
|
[
"Both are only physical changes.",
"Both are chemical changes.",
"Both are caused by heating.",
"Both are caused by cooling."
] |
C
|
Step 1: Think about each change.
Melting wax is a change of state. So, it is a physical change. The wax changes from solid to liquid. But it is still made of the same type of matter.
Baking an apple pie is a chemical change. The type of matter in the pie changes when it is baked. The crust turns brown, and the apples become soft.
Step 2: Look at each answer choice.
Both are only physical changes.
Melting wax is a physical change. But baking a pie is not.
Both are chemical changes.
Baking a pie is a chemical change. But melting wax is not.
Both are caused by heating.
Both changes are caused by heating.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relevant Documents:
Document 0:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include:
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering: predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. It can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 3:::
Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory.
In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results.
Purpose
Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible.
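One simple scheme consistent with the temperature-scale analogy above is linear (mean-sigma) equating, sketched below in Python with made-up form statistics; this illustrates the general idea rather than the procedure of any particular testing program.

```python
def linear_equate(x, mean_x, sd_x, mean_y, sd_y):
    """Map a form-X score onto the form-Y scale so the two forms
    agree in mean and standard deviation (mean-sigma linear equating)."""
    return mean_y + sd_y * (x - mean_x) / sd_x

# Hypothetical statistics: form A is harder (lower mean) than form B.
mean_a, sd_a = 55.0, 12.0
mean_b, sd_b = 65.0, 10.0

# Dick's 60% on the harder form A, expressed on form B's scale:
print(linear_equate(60.0, mean_a, sd_a, mean_b, sd_b))  # 69.17, comparable to Jane's 70
```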
Equating in item response theory
In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri
Document 4:::
In physics, a dynamical system is said to be mixing if the phase space of the system becomes strongly intertwined, according to at least one of several mathematical definitions. For example, a measure-preserving transformation T is said to be strong mixing if

$$\lim_{n \to \infty} \mu\left(T^{-n}A \cap B\right) = \mu(A)\,\mu(B)$$

whenever A and B are any measurable sets and μ is the associated measure. Other definitions are possible, including weak mixing and topological mixing.
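As a numerical illustration, the Monte Carlo sketch below estimates μ(T⁻ⁿA ∩ B) for the doubling map T(x) = 2x mod 1 on [0, 1) with Lebesgue measure, a standard example of a strong-mixing transformation (asserted here without proof), and compares the estimate with μ(A)μ(B).

```python
import random

def T_n(x, n):
    """Apply the doubling map T(x) = 2x mod 1 a total of n times."""
    for _ in range(n):
        x = (2.0 * x) % 1.0
    return x

random.seed(0)
N, n = 200_000, 20
hits = 0
for _ in range(N):
    x = random.random()
    # Count x in B = [0, 1/3) with T^n(x) in A = [0, 1/2),
    # i.e. x in T^-n(A) intersect B.
    if x < 1/3 and T_n(x, n) < 1/2:
        hits += 1

print(hits / N)        # approx 0.167
print((1/2) * (1/3))   # mu(A) * mu(B) = 0.1666...
```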
The mathematical definition of mixing is meant to capture the notion of physical mixing. A canonical example is the Cuba libre: suppose one is adding rum (the set A) to a glass of cola. After stirring the glass, the bottom half of the glass (the set B) will contain rum in the same proportion as anywhere else in the glass. The mixing is uniform: no matter which region B one looks at, some of A will be in that region. A far more detailed, but still informal description of mixing can be found in the article on mixing (mathematics).
Every mixing transformation is ergodic, but there are ergodic transformations which are not mixing.
Physical mixing
The mixing of gases or liquids is a complex physical process, governed by a convective diffusion equation that may involve non-Fickian diffusion as in spinodal decomposition. The convective portion of the governing equation contains fluid motion terms that are governed by the Navier–Stokes equations. When fluid properties such as viscosity depend on composition, the governing equations may be coupled. There may also be temperature effects. It is not clear that fluid mixing processes are mixing in the mathematical sense.
Small rigid objects (such as rocks) are sometimes mixed in a rotating drum or tumbler. The 1969 Selective Service draft lottery was carried out by mixing plastic capsules which contained a slip of paper (marked with a day of the year).
See also
Miscibility
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
melting wax
baking an apple pie
A. Both are only physical changes.
B. Both are chemical changes.
C. Both are caused by heating.
D. Both are caused by cooling.
Answer:
|
sciq-3739
|
multiple_choice
|
What type of chemistry is the study of chemicals containing carbon called?
|
[
"bioanalytical chemistry",
"organic chemistry",
"inorganic chemistry",
"biochemistry"
] |
B
|
Relevant Documents:
Document 0:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just as in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 1:::
Biochemical engineering, also known as bioprocess engineering, is a field of study with roots stemming from chemical engineering and biological engineering. It mainly deals with the design, construction, and advancement of unit processes that involve biological organisms (such as fermentation) or organic molecules (often enzymes) and has various applications in areas of interest such as biofuels, food, pharmaceuticals, biotechnology, and water treatment processes. The role of a biochemical engineer is to take findings developed by biologists and chemists in a laboratory and translate that to a large-scale manufacturing process.
History
For hundreds of years, humans have made use of the chemical reactions of biological organisms in order to create goods. In the mid-1800s, Louis Pasteur was one of the first people to look into the role of these organisms when he researched fermentation. His work also contributed to the use of pasteurization, which is still used to this day. By the early 1900s, the use of microorganisms had expanded, and was used to make industrial products. Up to this point, biochemical engineering hadn't developed as a field yet. It wasn't until 1928 when Alexander Fleming discovered penicillin that the field of biochemical engineering was established. After this discovery, samples were gathered from around the world in order to continue research into the characteristics of microbes from places such as soils, gardens, forests, rivers, and streams. Today, biochemical engineers can be found working in a variety of industries, from food to pharmaceuticals. This is due to the increasing need for efficiency and production which requires knowledge of how biological systems and chemical reactions interact with each other and how they can be used to meet these needs.
Education
Biochemical engineering is not a major offered by most universities and is instead an area of interest under the chemical engineering major in most cases. The following universiti
Document 2:::
Biochemistry is the study of the chemical processes in living organisms. It deals with the structure and function of cellular components such as proteins, carbohydrates, lipids, nucleic acids and other biomolecules.
Articles related to biochemistry include:
0–9
2-amino-5-phosphonovalerate - 3' end - 5' end
Document 3:::
The following outline is provided as an overview of and topical guide to biochemistry:
Biochemistry – study of chemical processes in living organisms, including living matter. Biochemistry governs all living organisms and living processes.
Applications of biochemistry
Testing
Ames test – Salmonella bacteria are exposed to a chemical in question (a food additive, for example), and changes in the way the bacteria grow are measured. This test is useful for screening chemicals to see if they mutate the structure of DNA and, by extension, for identifying their potential to cause cancer in humans.
Pregnancy test – there are two types: one uses a urine sample and the other a blood sample. Both detect the presence of the hormone human chorionic gonadotropin (hCG). This hormone is produced by the placenta shortly after implantation of the embryo into the uterine wall and accumulates.
Breast cancer screening – identification of risk by testing for mutations in two genes, the Breast Cancer-1 gene (BRCA1) and the Breast Cancer-2 gene (BRCA2), allows a woman to schedule increased screening tests at a more frequent rate than the general population.
Prenatal genetic testing – testing the fetus for potential genetic defects, to detect chromosomal abnormalities such as Down syndrome or birth defects such as spina bifida.
PKU test – Phenylketonuria (PKU) is a metabolic disorder in which the individual is missing an enzyme called phenylalanine hydroxylase. Absence of this enzyme allows the buildup of phenylalanine, which can lead to mental retardation.
Genetic engineering – taking a gene from one organism and placing it into another. Biochemists inserted the gene for human insulin into bacteria. The bacteria, through the process of translation, create human insulin.
Cloning – Dolly the sheep was the first mammal ever cloned from adult animal cells. The cloned sheep was, of course, genetically identical to the original adult sheep. This clone was created by taking cells from the udder of a six-year-old
Document 4:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing their chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of chemistry is the study of chemicals containing carbon called?
A. bioanalytical chemistry
B. organic chemistry
C. inorganic chemistry
D. biochemistry
Answer:
|
|
sciq-2187
|
multiple_choice
|
Which organ secretes estrogen?
|
[
"the testes",
"the ovaries",
"the thyroid",
"the kidney"
] |
B
|
Relevant Documents:
Document 0:::
The following is a list of hormones found in Homo sapiens. Spelling is not uniform for many hormones. For example, current North American and international usage uses estrogen and gonadotropin, while British usage retains the Greek digraph in oestrogen and favours the earlier spelling gonadotrophin.
Hormone listing
Steroid
Document 1:::
Uterine glands or endometrial glands are tubular glands, lined by a simple columnar epithelium, found in the functional layer of the endometrium that lines the uterus. Their appearance varies during the menstrual cycle. During the proliferative phase, uterine glands appear long due to estrogen secretion by the ovaries. During the secretory phase, the uterine glands become very coiled with wide lumens and produce a glycogen-rich secretion known as histotroph or uterine milk. This change corresponds with an increase in blood flow to spiral arteries due to increased progesterone secretion from the corpus luteum. During the pre-menstrual phase, progesterone secretion decreases as the corpus luteum degenerates, which results in decreased blood flow to the spiral arteries. The functional layer of the uterus containing the glands becomes necrotic, and eventually sloughs off during the menstrual phase of the cycle.
They are of small size in the unimpregnated uterus, but shortly after impregnation become enlarged and elongated, presenting a contorted or waved appearance.
Function
Hormones produced in early pregnancy stimulate the uterine glands to secrete a number of substances to give nutrition and protection to the embryo and fetus, and the fetal membranes. These secretions are known as histiotroph, alternatively histotroph, and also as uterine milk. Important uterine milk proteins are glycodelin-A, and osteopontin.
Some secretory components from the uterine glands are taken up by the secondary yolk sac lining the exocoelomic cavity during pregnancy, and may thereby assist in providing fetal nutrition.
Additional images
Document 2:::
Reproductive biology includes both sexual and asexual reproduction.
Reproductive biology includes a wide number of fields:
Reproductive systems
Endocrinology
Sexual development (Puberty)
Sexual maturity
Reproduction
Fertility
Human reproductive biology
Endocrinology
Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands.
Reproductive systems
The reproductive system includes both internal and external organs. There are two reproductive systems, the male and the female, which contain different organs from one another. These systems work together in order to produce offspring.
Female reproductive system
The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth.
These structures include:
Ovaries
Oviducts
Uterus
Vagina
Mammary Glands
Estrogen is one of the sexual reproductive hormones that aid the female reproductive system.
Male reproductive system
The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia.
Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system. However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract.
Animal Reproductive Biology
Animal reproduction oc
Document 3:::
Membrane estrogen receptors (mERs) are a group of receptors which bind estrogen. Unlike the estrogen receptor (ER), a nuclear receptor which mediates its effects via genomic mechanisms, mERs are cell surface receptors which rapidly alter cell signaling via modulation of intracellular signaling cascades. Putative mERs include membrane-associated ERα (mERα) and ERβ (mERβ), GPER (GPR30), GPRC6A, ER-X, ERx and Gq-mER.
The mERs have been reviewed.
See also
Membrane steroid receptor
Document 4:::
ERx is a putative membrane estrogen receptor (mER) of which little is currently known. It was discovered as a gene transcription signature induced by estradiol that is independent of the ERα/ERβ and GPER and was identified using the membrane-impermeable estradiol conjugate E2-BSA in the absence or presence of the ERα/ERβ antagonist fulvestrant (ICI-182,780) and the GPER antagonist G-15.
See also
ER-X
GPER (GPR30)
Gq-mER
Estrogen receptor
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which organ secretes estrogen?
A. the testes
B. the ovaries
C. the thyroid
D. the kidney
Answer:
|
|
sciq-5102
|
multiple_choice
|
What branch of science is the study of matter and energy?
|
[
"Thermodynamics",
"physical science",
"environmental science",
"Chemistry"
] |
B
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing their chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 1:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just as in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 2:::
Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary educ
Document 3:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 4:::
Applied physics is the application of physics to solve scientific or engineering problems. It is usually considered a bridge or a connection between physics and engineering.
"Applied" is distinguished from "pure" by a subtle combination of factors, such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. Applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of scientific principles in practical devices and systems and with the application of physics in other areas of science and high technology.
Examples of research and development areas
Accelerator physics
Acoustics
Atmospheric physics
Biophysics
Brain–computer interfacing
Chemistry
Chemical physics
Differentiable programming
Artificial intelligence
Scientific computing
Engineering physics
Chemical engineering
Electrical engineering
Electronics
Sensors
Transistors
Materials science and engineering
Metamaterials
Nanotechnology
Semiconductors
Thin films
Mechanical engineering
Aerospace engineering
Astrodynamics
Electromagnetic propulsion
Fluid mechanics
Military engineering
Lidar
Radar
Sonar
Stealth technology
Nuclear engineering
Fission reactors
Fusion reactors
Optical engineering
Photonics
Cavity optomechanics
Lasers
Photonic crystals
Geophysics
Materials physics
Medical physics
Health physics
Radiation dosimetry
Medical imaging
Magnetic resonance imaging
Radiation therapy
Microscopy
Scanning probe microscopy
Atomic force microscopy
Scanning tunneling microscopy
Scanning electron microscopy
Transmission electron microscopy
Nuclear physics
Fission
Fusion
Optical physics
Nonlinear optics
Quantum optics
Plasma physics
Quantum technology
Quantum computing
Quantum cryptography
Renewable energy
Space physics
Spectroscopy
See also
Applied science
Applied mathematics
Engineering
Engineering Physics
High Technology
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What branch of science is the study of matter and energy?
A. Thermodynamics
B. physical science
C. environmental science
D. Chemistry
Answer:
|
|
sciq-8027
|
multiple_choice
|
Scientists around the world study speciation, documenting observations both of living organisms and those found in the fossil record. as their ideas take shape and as research reveals new details about how life evolves, they develop models to help explain what?
|
[
"spontaneous mutation",
"food chains",
"creation theory",
"rates of speciation"
] |
D
|
Relevant Documents:
Document 0:::
The scientific study of speciation — how species evolve to become new species — began around the time of Charles Darwin in the middle of the 19th century. Many naturalists at the time recognized the relationship between biogeography (the way species are distributed) and the evolution of species. The 20th century saw the growth of the field of speciation, with major contributors such as Ernst Mayr researching and documenting species' geographic patterns and relationships. The field grew in prominence with the modern evolutionary synthesis in the early part of that century. Since then, research on speciation has expanded immensely.
The language of speciation has grown more complex. Debate over classification schemes on the mechanisms of speciation and reproductive isolation continue. The 21st century has seen a resurgence in the study of speciation, with new techniques such as molecular phylogenetics and systematics. Speciation has largely been divided into discrete modes that correspond to rates of gene flow between two incipient populations. Current research has driven the development of alternative schemes and the discovery of new processes of speciation.
Early history
Charles Darwin introduced the idea that species could evolve and split into separate lineages, referring to it as specification in his 1859 book On the Origin of Species. It was not until 1906 that the modern term speciation was coined by the biologist Orator F. Cook. Darwin, in his 1859 publication, focused primarily on the changes that can occur within a species, and less on how species may divide into two. It is almost universally accepted that Darwin's book did not directly address its title. Darwin instead saw speciation as occurring by species entering new ecological niches.
Darwin's views
Controversy exists as to whether Charles Darwin recognized a true geographical-based model of speciation in his publication On the Origin of Species. In chapter 11, "Geographical Distribution", Darwin d
Document 1:::
The history of life on Earth seems to show a clear trend; for example, it seems intuitive that there is a trend towards increasing complexity in living organisms. More recently evolved organisms, such as mammals, appear to be much more complex than organisms, such as bacteria, which have existed for a much longer period of time. However, there are theoretical and empirical problems with this claim. From a theoretical perspective, it appears that there is no reason to expect evolution to result in any largest-scale trends, although small-scale trends, limited in time and space, are expected (Gould, 1997). From an empirical perspective, it is difficult to measure complexity and, when it has been measured, the evidence does not support a largest-scale trend (McShea, 1996).
History
Many of the founding figures of evolution supported the idea of evolutionary progress, which has fallen from favour, but the work of Francisco J. Ayala and Michael Ruse suggests it is still influential.
Hypothetical largest-scale trends
McShea (1998) discusses eight features of organisms that might indicate largest-scale trends in evolution: entropy, energy intensiveness, evolutionary versatility, developmental depth, structural depth, adaptedness, size, complexity. He calls these "live hypotheses", meaning that trends in these features are currently being considered by evolutionary biologists. McShea observes that the most popular hypothesis, among scientists, is that there is a largest-scale trend towards increasing complexity.
Evolutionary theorists agree that there are local trends in evolution, such as increasing brain size in hominids, but these directional changes do not persist indefinitely, and trends in opposite directions also occur (Gould, 1997). Evolution causes organisms to adapt to their local environment; when the environment changes, the direction of the trend may change. The question of whether there is evolutionary progress is better formulated as the question of whether
Document 2:::
Tinbergen's four questions, named after the 20th-century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour. These are also commonly referred to as levels of analysis. The framework suggests that an integrative understanding of behaviour must include ultimate (evolutionary) explanations, in particular:
behavioural adaptive functions
phylogenetic history;
and the proximate explanations:
underlying physiological mechanisms
ontogenetic/developmental history.
Four categories of questions and explanations
When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny).
This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem.
Evolutionary (ultimate) explanations
First question: Function (adaptation)
Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive.
The literature conceptualizes the relationship between function and evolution in two ways. On the one hand, function
Document 3:::
Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology.
The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis.
Subfields
Evolution is the central unifying concept in biology. Biology can be divided up in various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution
Document 4:::
Many scientists and philosophers of science have described evolution as fact and theory, a phrase which was used as the title of an article by paleontologist Stephen Jay Gould in 1981. He describes fact in science as meaning data, not known with absolute certainty but "confirmed to such a degree that it would be perverse to withhold provisional assent". A scientific theory is a well-substantiated explanation of such facts. The facts of evolution come from observational evidence of current processes, from imperfections in organisms recording historical common descent, and from transitions in the fossil record. Theories of evolution provide a provisional explanation for these facts.
Each of the words evolution, fact and theory has several meanings in different contexts. In biology, evolution refers to observed changes in organisms over successive generations, to their descent from a common ancestor, and at a technical level to a change in gene frequency over time; it can also refer to explanatory theories (such as Charles Darwin's theory of natural selection) which explain the mechanisms of evolution. To a scientist, fact can describe a repeatable observation capable of great consensus; it can refer to something that is so well established that nobody in a community disagrees with it; and it can also refer to the truth or falsity of a proposition. To the public, theory can mean an opinion or conjecture (e.g., "it's only a theory"), but among scientists it has a much stronger connotation of "well-substantiated explanation". With this number of choices, people can often talk past each other, and meanings become the subject of linguistic analysis.
Evidence for evolution continues to be accumulated and tested. The scientific literature includes statements by evolutionary biologists and philosophers of science demonstrating some of the different perspectives on evolution as fact and theory.
Evolution, fact and theory
Evolution has been described as "fact and theory"; "
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Scientists around the world study speciation, documenting observations both of living organisms and those found in the fossil record. as their ideas take shape and as research reveals new details about how life evolves, they develop models to help explain what?
A. spontaneous mutation
B. food chains
C. creation theory
D. rates of speciation
Answer:
|
|
sciq-1429
|
multiple_choice
|
What is an attitude of doubt about the truthfulness of claims that lack empirical evidence?
|
[
"conspiracy",
"independent variable",
"skepticism",
"speculation"
] |
C
|
Relevant Documents:
Document 0:::
Discovery is the act of detecting something new, or something previously unrecognized as meaningful. Concerning sciences and academic disciplines, discovery is the observation of new phenomena, new actions, or new events and providing new reasoning to explain the knowledge gathered through such observations with previously acquired knowledge from abstract thought and everyday experiences. A discovery may sometimes be based on earlier discoveries, collaborations, or ideas. Some discoveries represent a radical breakthrough in knowledge or technology.
New discoveries are acquired through various senses and are usually assimilated, merging with pre-existing knowledge and actions. Questioning is a major form of human thought and interpersonal communication, and plays a key role in discovery. Discoveries are often made due to questions. Some discoveries lead to the invention of objects, processes, or techniques. A discovery may sometimes be based on earlier discoveries, collaborations or ideas, and the process of discovery requires at least the awareness that an existing concept or method can be modified or transformed. However, some discoveries also represent a radical breakthrough in knowledge.
Science
Within scientific disciplines, discovery is the observation of new phenomena, actions, or events which help explain the knowledge gathered through previously acquired scientific evidence. In science, exploration is one of three purposes of research, the other two being description and explanation. Discovery is made by providing observational evidence and attempts to develop an initial, rough understanding of some phenomenon.
Discovery within the field of particle physics has an accepted definition for what constitutes a discovery: a five-sigma level of certainty. Such a level defines statistically how unlikely it is that an experimental result is due to chance. The combination of a five-sigma level of certainty, and independent confirmation by other experiments, turn f
Document 1:::
Naïve empiricism is a term used in several ways in different fields.
In the philosophy of science, it is used by opponents to describe the position, associated with some logical positivists, that "knowledge can be clearly learnt through evaluation of the natural world and its substances, and, through empirical means, learn truths".
The term also is used to describe a particular methodology for literary analysis.
See also:
Empiricism
Falsifiability (especially, "Naïve falsification")
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
Statistical literacy is the ability to understand and reason with statistics and data. The abilities to understand and reason with data, or arguments that use data, are necessary for citizens to understand material presented in publications such as newspapers, television, and the Internet. However, scientists also need to develop statistical literacy so that they can both produce rigorous and reproducible research and consume it. Numeracy is an element of being statistically literate and in some models of statistical literacy, or for some populations (e.g., students in kindergarten through 12th grade/end of secondary school), it is a prerequisite skill. Being statistically literate is sometimes taken to include having the abilities to both critically evaluate statistical material and appreciate the relevance of statistically-based approaches to all aspects of life in general or to the evaluating, design, and/or production of scientific work.
Promoting statistical literacy
Each day people are inundated with statistical information from advertisements ("4 out of 5 dentists recommend"), news reports ("opinion polls show the incumbent leading by four points"), and even general conversation ("half the time I don't know what you're talking about"). Experts and advocates often use numerical claims to bolster their arguments, and statistical literacy is a necessary skill to help one decide what experts mean and which advocates to believe. This is important because statistics can be made to produce misrepresentations of data that may seem valid. The aim of statistical literacy proponents is to improve the public understanding of numbers and figures.
Health decisions are often manifest as statistical decision problems but few doctors or patients are well equipped to engage with these data.
Results of opinion polling are often cited by news organizations, but the quality of such polls varies considerably. Some understanding of the statistical technique of sampling is nec
Document 4:::
Analytical skill is the ability to deconstruct information into smaller categories in order to draw conclusions. Analytical skill consists of categories that include logical reasoning, critical thinking, communication, research, data analysis and creativity. Analytical skill is taught in contemporary education with the intention of fostering the appropriate practices for future professions. The professions that adopt analytical skill include educational institutions, public institutions, community organisations and industry.
Richard J. Heuer Jr. explained the importance of these skills. In the article by Freed, the need for programs within the educational system to help students develop these skills is demonstrated. Workers "will need more than elementary basic skills to maintain the standard of living of their parents. They will have to think for a living, analyse problems and solutions, and work cooperatively in teams".
Logical Reasoning
Logical reasoning is a process consisting of inferences, where premises and hypotheses are formulated to arrive at a probable conclusion. It is a broad term covering three sub-classifications: deductive reasoning, inductive reasoning and abductive reasoning.
Deductive Reasoning
‘Deductive reasoning is a basic form of valid reasoning, commencing with a general statement or hypothesis, then examines the possibilities to reach a specific, logical conclusion’. This scientific method utilises deductions, to test hypotheses and theories, to predict if possible observations were correct.
A logical deductive reasoning sequence can be executed by establishing: an assumption, followed by another assumption and finally, conducting an inference. For example, ‘All men are mortal. Harold is a man. Therefore, Harold is mortal.’
For deductive reasoning to be upheld, the hypothesis must be correct, therefore, reinforcing the notion that the conclusion is logical and true. It is possible for deductive reasoning conclusions to be inaccurate or incorrect entirely, bu
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is an attitude of doubt about the truthfulness of claims that lack empirical evidence?
A. conspiracy
B. independent variable
C. skepticism
D. speculation
Answer:
|
|
sciq-11140
|
multiple_choice
|
What is the name of the process in which both the system and its environment can return to exactly the states they were in by following the reverse path?
|
[
"conductive process",
"reversible process",
"Remote process",
"Different Process"
] |
B
|
Relevant Documents:
Document 0:::
A mathematical or physical process is time-reversible if the dynamics of the process remain well-defined when the sequence of time-states is reversed.
A deterministic process is time-reversible if the time-reversed process satisfies the same dynamic equations as the original process; in other words, the equations are invariant or symmetrical under a change in the sign of time. A stochastic process is reversible if the statistical properties of the process are the same as the statistical properties for time-reversed data from the same process.
Mathematics
In mathematics, a dynamical system is time-reversible if the forward evolution is one-to-one, so that for every state there exists a transformation (an involution) π which gives a one-to-one mapping between the time-reversed evolution of any one state and the forward-time evolution of another corresponding state, given by the operator equation:
U_{-t} = π ∘ U_t ∘ π
Any time-independent structures (e.g. critical points or attractors) which the dynamics give rise to must therefore either be self-symmetrical or have symmetrical images under the involution π.
Physics
In physics, the laws of motion of classical mechanics exhibit time reversibility, as long as the operator π reverses the conjugate momenta of all the particles of the system, i.e. p → −p (T-symmetry).
In quantum mechanical systems, however, the weak nuclear force is not invariant under T-symmetry alone; if weak interactions are present, reversible dynamics are still possible, but only if the operator π also reverses the signs of all the charges and the parity of the spatial co-ordinates (C-symmetry and P-symmetry). This reversibility of several linked properties is known as CPT symmetry.
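To make the classical-mechanics statement concrete, here is a minimal numerical sketch (not from the source): a unit-mass harmonic oscillator integrated with the velocity-Verlet (leapfrog) scheme, which is time-reversible. Running forward, applying the involution π by flipping the conjugate momentum, and running forward again recovers the initial state; the step size and step count are arbitrary illustrative choices.

def leapfrog(x, p, steps, dt=0.01):
    """Velocity-Verlet integration of x'' = -x (unit-mass harmonic oscillator)."""
    for _ in range(steps):
        p += 0.5 * dt * (-x)  # half kick
        x += dt * p           # drift
        p += 0.5 * dt * (-x)  # half kick
    return x, p

x, p = leapfrog(1.0, 0.0, 1000)
x, p = leapfrog(x, -p, 1000)  # the involution pi: negate momenta, then evolve forward again
print(x, -p)                  # ~ (1.0, 0.0): the initial state, up to floating-point round-off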
Thermodynamic processes can be reversible or irreversible, depending on the change in entropy during the process. Note, however, that the fundamental laws that underlie the thermodynamic processes are all time-reversible (classical laws of motion and laws of electrodynamics), which means that
Document 1:::
A reversible reaction is a reaction in which the conversion of reactants to products and the conversion of products to reactants occur simultaneously.
aA + bB ⇌ cC + dD
A and B can react to form C and D or, in the reverse reaction, C and D can react to form A and B. This is distinct from a reversible process in thermodynamics.
Weak acids and bases undergo reversible reactions. For example, carbonic acid:
H2CO3 (l) + H2O(l) ⇌ HCO3−(aq) + H3O+(aq).
The concentrations of reactants and products in an equilibrium mixture are determined by the analytical concentrations of the reagents (A and B or C and D) and the equilibrium constant, K. The magnitude of the equilibrium constant depends on the Gibbs free energy change for the reaction. So, when the free energy change is large (more than about 30 kJ mol−1), the equilibrium constant is large (log K > 3) and the concentrations of the reactants at equilibrium are very small. Such a reaction is sometimes considered to be an irreversible reaction, although small amounts of the reactants are still expected to be present in the reacting system. A truly irreversible chemical reaction is usually achieved when one of the products exits the reacting system, for example, as does carbon dioxide (volatile) in the reaction
CaCO3 + 2HCl → CaCl2 + H2O + CO2↑
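As a numeric illustration of the link between the free energy change and the equilibrium constant described above (a minimal sketch, assuming the standard relation ΔG° = −RT ln K at T = 298.15 K; the −30 kJ mol−1 input echoes the threshold quoted in the text):

import math

R = 8.314   # gas constant, J/(mol*K)
T = 298.15  # temperature, K

def equilibrium_constant(delta_g):
    """K from the standard free energy change (J/mol), via delta_G = -R*T*ln(K)."""
    return math.exp(-delta_g / (R * T))

K = equilibrium_constant(-30e3)  # delta_G = -30 kJ/mol
print(f"K = {K:.3g}, log10 K = {math.log10(K):.2f}")  # log10 K ~ 5.3, comfortably above 3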
History
The concept of a reversible reaction was introduced by Claude Louis Berthollet in 1803, after he had observed the formation of sodium carbonate crystals at the edge of a salt lake (one of the natron lakes in Egypt, in limestone):
2NaCl + CaCO3 → Na2CO3 + CaCl2
He recognized this as the reverse of the familiar reaction
Na2CO3 + CaCl2→ 2NaCl + CaCO3
Until then, chemical reactions were thought to always proceed in one direction. Berthollet reasoned that the excess of salt in the lake helped push the "reverse" reaction towards the formation of sodium carbonate.
In 1864, Peter Waage and Cato Maximilian Guldberg formulated their
Document 2:::
Energy flow is the flow of energy through living things within an ecosystem. All living organisms can be organized into producers and consumers, and those producers and consumers can further be organized into a food chain. Each of the levels within the food chain is a trophic level. In order to more efficiently show the quantity of organisms at each trophic level, these food chains are then organized into trophic pyramids. The arrows in the food chain show that the energy flow is unidirectional, with the head of an arrow indicating the direction of energy flow; energy is lost as heat at each step along the way.
The unidirectional flow of energy and the successive loss of energy as it travels up the food web are patterns in energy flow that are governed by thermodynamics, which is the theory of energy exchange between systems. Trophic dynamics relates to thermodynamics because it deals with the transfer and transformation of energy (originating externally from the sun via solar radiation) to and among organisms.
Energetics and the carbon cycle
The first step in energetics is photosynthesis, wherein water and carbon dioxide from the air are taken in with energy from the sun, and are converted into oxygen and glucose. Cellular respiration is the reverse reaction, wherein oxygen and sugar are taken in and release energy as they are converted back into carbon dioxide and water. The carbon dioxide and water produced by respiration can be recycled back into plants.
Energy loss can be measured either by efficiency (how much energy makes it to the next level), or by biomass (how much living material exists at those levels at one point in time, measured by standing crop). Of all the net primary productivity at the producer trophic level, in general only 10% goes to the next level, the primary consumers, then only 10% of that 10% goes on to the next trophic level, and so on up the food pyramid. Ecological efficiency may be anywhere from 5% to 20% depending on how efficient
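A toy calculation of the successive energy loss described above (a sketch assuming the rough 10% transfer efficiency quoted in the text; the starting figure is arbitrary):

def trophic_energy(primary_production, efficiency=0.10, levels=4):
    """Print the energy reaching each trophic level for a fixed transfer efficiency."""
    energy = primary_production
    for level in range(1, levels + 1):
        print(f"trophic level {level}: {energy:,.0f} kJ")
        energy *= efficiency  # ~90% is lost (mostly as heat) at each step

trophic_energy(1_000_000)  # producers, then primary/secondary/tertiary consumers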
Document 3:::
A glossary of terms relating to systems theory.
A
Adaptive capacity: An important part of the resilience of systems in the face of a perturbation, helping to minimise loss of function in individual human, and collective social and biological systems.
Allopoiesis: The process whereby a system produces something other than the system itself.
Allostasis: The process of achieving stability, or homeostasis, through physiological or behavioral change.
Autopoiesis: The process by which a system regenerates itself through the self-reproduction of its own elements and of the network of interactions that characterize them. An autopoietic system renews, repairs, and replicates or reproduces itself in a flow of matter and energy. Note: from a strictly Maturanian point of view, autopoiesis is an essential property of biological/living systems.
B
Black box: A technical term for a device or system or object when it is viewed primarily in terms of its input and output characteristics, without observing or describing its internal structure or behaviour.
Boundaries: The parametric conditions, often vague, always subjectively stipulated, that delimit and define a system and set it apart from its environment.
C
Cascading failure: Failure in a system of interconnected parts, where the service provided depends on the operation of a preceding part, and the failure of a preceding part can trigger the failure of successive parts.
Closed system: A system which can exchange energy (as heat or work), but not matter, with its surroundings.
Complexity: A complex system is characterised by components that interact in multiple ways and follow local rules. A complicated system is characterised by its layers.
Culture: The result of individual learning processes that distinguish one social group of higher animals from another. In humans culture is the set of interrelated concepts, products and activities through which humans group themselves, interact with each other, and become aware o
Document 4:::
In thermodynamics, a reversible process is a process, involving a system and its surroundings, whose direction can be reversed by infinitesimal changes in some properties of the surroundings, such as pressure or temperature.
Throughout an entire reversible process, the system is in thermodynamic equilibrium, both physical and chemical, and nearly in pressure and temperature equilibrium with its surroundings. This prevents unbalanced forces and acceleration of moving system boundaries, which in turn avoids friction and other dissipation.
To maintain equilibrium, reversible processes are extremely slow (quasistatic). The process must occur slowly enough that after some small change in a thermodynamic parameter, the physical processes in the system have enough time for the other parameters to self-adjust to match the new, changed parameter value. For example, if a container of water has sat in a room long enough to match the steady temperature of the surrounding air, for a small change in the air temperature to be reversible, the whole system of air, water, and container must wait long enough for the container and air to settle into a new, matching temperature before the next small change can occur.
While processes in isolated systems are never reversible, cyclical processes can be reversible or irreversible. Reversible processes are hypothetical or idealized but central to the second law of thermodynamics. Melting or freezing of ice in water is an example of a realistic process that is nearly reversible.
Additionally, the system must be in (quasistatic) equilibrium with the surroundings at all time, and there must be no dissipative effects, such as friction, for a process to be considered reversible.
Reversible processes are useful in thermodynamics because they are so idealized that the equations for heat and expansion/compression work are simple. This enables the analysis of model processes, which usually define the maximum efficiency attainable in correspondin
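As one example of the simple expressions the passage alludes to, the work done by an ideal gas during a reversible isothermal expansion is W = nRT ln(V2/V1); this is a standard textbook result rather than something specific to this passage. A minimal sketch:

import math

R = 8.314  # gas constant, J/(mol*K)

def reversible_isothermal_work(n, temperature, v1, v2):
    """Work done BY n moles of ideal gas expanding reversibly from volume v1 to v2."""
    return n * R * temperature * math.log(v2 / v1)

print(reversible_isothermal_work(1.0, 300.0, 1.0, 2.0))  # ~1729 J for a volume doubling at 300 K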
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the name of the process in which both the system and its environment can return to exactly the states they were in by following the reverse path?
A. conductive process
B. reversible process
C. Remote process
D. Different Process
Answer:
|
|
sciq-4453
|
multiple_choice
|
The turbine of a windmill spins and creates what?
|
[
"electricity",
"pollution",
"grains",
"lightning"
] |
A
|
Relevant Documents:
Document 0:::
A wind turbine is a device that converts the kinetic energy of wind into electrical energy. Hundreds of thousands of large turbines, in installations known as wind farms, were generating over 650 gigawatts of power, with 60 GW added each year. Wind turbines are an increasingly important source of intermittent renewable energy, and are used in many countries to lower energy costs and reduce reliance on fossil fuels. One study claimed that wind had the "lowest relative greenhouse gas emissions, the least water consumption demands and the most favorable social impacts" compared to photovoltaic, hydro, geothermal, coal and gas energy sources.
Smaller wind turbines are used for applications such as battery charging and remote devices such as traffic warning signs. Larger turbines can contribute to a domestic power supply while selling unused power back to the utility supplier via the electrical grid.
Wind turbines are manufactured in a wide range of sizes, with either horizontal or vertical axes, though horizontal is most common.
History
The windwheel of Hero of Alexandria (10–70 CE) marks one of the first recorded instances of wind powering a machine. However, the first known practical wind power plants were built in Sistan, an Eastern province of Persia (now Iran), from the 7th century. These "Panemone" were vertical axle windmills, which had long vertical drive shafts with rectangular blades. Made of six to twelve sails covered in reed matting or cloth material, these windmills were used to grind grain or draw up water, and were used in the gristmilling and sugarcane industries.
Wind power first appeared in Europe during the Middle Ages. The first historical records of their use in England date to the 11th and 12th centuries; there are reports of German crusaders taking their windmill-making skills to Syria around 1190. By the 14th century, Dutch windmills were in use to drain areas of the Rhine delta. Advanced wind turbines were described by Croatian invent
Document 1:::
The tip-speed ratio, λ, or TSR for wind turbines is the ratio between the tangential speed of the tip of a blade and the actual speed of the wind, v. The tip-speed ratio is related to efficiency, with the optimum varying with blade design. Higher tip speeds result in higher noise levels and require stronger blades due to larger centrifugal forces.
The tip speed of the blade can be calculated as ω times R, where ω is the rotational speed of the rotor in radians/second, and R is the rotor radius in metres. Therefore, we can also write:
λ = ωR / v
where v is the wind speed in metres/second at the height of the blade hub.
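For concreteness, a minimal sketch of the tip-speed-ratio calculation just defined (the rotor radius, rotational speed, and wind speed below are illustrative values, not figures from the text):

import math

def tip_speed_ratio(omega, radius, wind_speed):
    """lambda = omega * R / v: blade-tip tangential speed over wind speed."""
    return omega * radius / wind_speed

omega = 15 * 2 * math.pi / 60  # a rotor turning at 15 rpm, converted to rad/s
print(tip_speed_ratio(omega, radius=40.0, wind_speed=10.0))  # ~6.3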
Cp–λ curves
The power coefficient, Cp, is a quantity that expresses what fraction of the power in the wind is being extracted by the wind turbine. It is generally assumed to be a function of both tip-speed ratio and pitch angle. Plotted against the tip-speed ratio with the pitch held constant, the power coefficient shows a single well-defined maximum.
The case for variable speed wind turbines
Originally, wind turbines were fixed speed. This has the benefit that the rotor speed in the generator is constant, thus the frequency of the AC voltage is fixed. This allows the wind turbine to be directly connected to a transmission system. However, from the figure above, we can see that the power coefficient is a function of the tip-speed ratio. By extension, the efficiency of the wind turbine is a function of the tip-speed ratio.
Ideally, one would like to have a turbine operating at the maximum value of Cp at all wind speeds. This means that as the wind speed changes, the rotor speed must change such that λ stays at the value that maximizes Cp. A wind turbine with a variable rotor speed is called a variable speed wind turbine. Whilst this does mean that the wind turbine operates at or close to the maximum of Cp for a range of wind speeds, the frequency of the AC voltage generator will not be constant. This can be seen in the following equation:
f = pω / (2π)
where ω is the rotor angular speed, p is the number of generator pole pairs, and f is the frequency of the AC voltage generated.
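A quick numeric check of the frequency relation above (a sketch under the pole-pair form given; the two-pole-pair generator and rotor speeds are illustrative assumptions):

import math

def generator_frequency(omega, pole_pairs=2):
    """Electrical frequency f = p * omega / (2*pi) for rotor angular speed omega in rad/s."""
    return pole_pairs * omega / (2 * math.pi)

for rpm in (1200, 1500, 1800):  # rotor speed varies with wind speed
    omega = rpm * 2 * math.pi / 60
    print(f"{rpm} rpm -> {generator_frequency(omega):.0f} Hz")  # 40, 50, 60 Hz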
Document 2:::
The Wells turbine is a low-pressure air turbine that rotates continuously in one direction independent of the direction of the air flow. Its blades feature a symmetrical airfoil with its plane of symmetry in the plane of rotation and perpendicular to the air stream.
It was developed for use in Oscillating Water Column wave power plants, in which a rising and falling water surface moving in an air compression chamber produces an oscillating air current. The use of this bidirectional turbine avoids the need to rectify the air stream by delicate and expensive check valve systems.
Its efficiency is lower than that of a turbine with constant air stream direction and asymmetric airfoil. One reason for the lower efficiency is that symmetric airfoils have a higher drag coefficient than asymmetric ones, even under optimal conditions. Also, in the Wells turbine, the symmetric airfoil runs partly under high angle of attack (i.e., low blade speed / air speed ratio), which occurs during the air velocity maxima of the oscillating flow. A high angle of attack causes a condition known as "stall" in which the airfoil loses lift. The efficiency of the Wells turbine in oscillating flow reaches values between 0.4 and 0.7.
The Wells turbine was developed by Prof. Alan Arthur Wells of Queen's University Belfast in the late 1970s.
Annotation
Another solution of the problem of stream direction independent turbine is the Darrieus wind turbine (Darrieus rotor).
See also
Siadar Wave Energy Project
Yoshio Masuda
Hanna Wave Energy Turbine
Document 3:::
An airborne wind turbine is a design concept for a wind turbine with a rotor supported in the air without a tower, thus benefiting from the higher velocity and persistence of wind at high altitudes, while avoiding the expense of tower construction, or the need for slip rings or yaw mechanism. An electrical generator may be on the ground or airborne. Challenges include safely suspending and maintaining turbines hundreds of meters off the ground in high winds and storms, transferring the harvested and/or generated power back to earth, and interference with aviation.
Airborne wind turbines may operate in low or high altitudes; they are part of a wider class of Airborne Wind Energy Systems (AWES) addressed by high-altitude wind power and crosswind kite power. When the generator is on the ground, then the tethered aircraft need not carry the generator mass or have a conductive tether. When the generator is aloft, then a conductive tether would be used to transmit energy to the ground or used aloft or beamed to receivers using microwave or laser. Kites and helicopters come down when there is insufficient wind; kytoons and blimps may resolve the matter with other disadvantages. Also, bad weather such as lightning or thunderstorms, could temporarily suspend use of the machines, probably requiring them to be brought back down to the ground and covered. Some schemes require a long power cable and, if the turbine is high enough, a prohibited airspace zone. As of 2022, few commercial airborne wind turbines are in regular operation.
Aerodynamic variety
An aerodynamic airborne wind power system relies on the wind for support.
In one class, the generator is aloft; an aerodynamic structure resembling a kite, tethered to the ground, extracts wind energy by supporting a wind turbine. In another class of devices, such as crosswind kite power, generators are on the ground; one or more airfoils or kites exert force on a tether, which is converted to electrical energy. An airborne tur
Document 4:::
Micropower describes the use of very small electric generators and prime movers or devices to convert heat or motion to electricity, for use close to the generator. The generator is typically integrated with microelectronic devices and produces "several watts of power or less." These devices offer the promise of a power source for portable electronic devices which is lighter weight and has a longer operating time than batteries.
Microturbine technology
The components of any turbine engine — the gas compressor, the combustion chamber, and the turbine rotor — are fabricated from etched silicon, much like integrated circuits. The technology holds the promise of ten times the operating time of a battery of the same weight as the micropower unit, and similar efficiency to large utility gas turbines. Researchers at Massachusetts Institute of Technology have thus far succeeded in fabricating the parts for such a micro turbine out of six etched and stacked silicon wafers, and are working toward combining them into a functioning engine about the size of a U.S. quarter coin.
Researchers at Georgia Tech have built a micro generator 10 mm wide, which spins a magnet above an array of coils fabricated on a silicon chip. The device spins at 100,000 revolutions per minute, and produces 1.1 watts of electrical power, sufficient to operate a cell phone. Their goal is to produce 20 to 50 watts, sufficient to power a laptop computer.
Scientists at Lehigh University are developing a hydrogen generator on a silicon chip that can convert methanol, diesel, or gasoline into fuel for a microengine or a miniature fuel cell.
Professor Sanjeev Mukerjee of Northeastern University's chemistry department is developing fuel cells for the military that will burn hydrogen to power portable electronic equipment, such as night vision goggles, computers, and communication equipment. In his system, a cartridge of methanol would be used to produce hydrogen to run a small fuel cell for up to 5,000 ho
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The turbine of a windmill spins and creates what?
A. electricity
B. pollution
C. grains
D. lightning
Answer:
|
|
sciq-7450
|
multiple_choice
|
What are the cone-like structures that contain sporangia called?
|
[
"gametes",
"medulla",
"strobili",
"contrail"
] |
C
|
Relevant Documents:
Document 0:::
The ovarian cortex is the outer portion of the ovary. The ovarian follicles are located within the ovarian cortex. The ovarian cortex is made up of connective tissue. Ovarian cortex tissue transplant has been performed to treat infertility.
Document 1:::
The posterior surfaces of the ciliary processes are covered by a bilaminar layer of black pigment cells, which is continued forward from the retina, and is named the pars ciliaris retinae.
Document 2:::
In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from cells of the same type that act together in a function. Tissues of different types combine to form an organ which has a specific function. The intestinal wall, for example, is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system.
An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs.
The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body.
Animals
Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam
Document 3:::
The fibroelastic coat of the spleen invests the organ, and at the hilum is reflected inward upon the vessels in the form of sheaths. From these sheaths, as well as from the inner surface of the fibroelastic coat, numerous small fibrous bands, the trabeculae of the spleen (or splenic trabeculae), emerge from all directions; these uniting, constitute the frame-work of the spleen.
The spleen therefore consists of a number of small spaces or areolae, formed by the trabeculae; in these areolae is contained the splenic pulp.
See also
Spleen
Trabecula
Document 4:::
The corpuscles of Herbst or Herbst corpuscles are nerve-endings similar to the Pacinian corpuscle, found in the mucous membrane of the tongue, in pits on the beak and in other parts of the bodies of birds. They differ from Pacinian corpuscles in being smaller and more elongated, in having thinner and more closely placed capsules, and in that the axis-cylinder in the central clear space is encircled by a continuous row of nuclei. They are named after the German embryologist Curt Alfred Herbst.
In many wading birds, a large number of Herbst corpuscles are found embedded in pits on the mandible that are believed to enable birds to sense prey under wet sand or soil.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are the cone-like structures that contain sporangia called?
A. gametes
B. medulla
C. strobili
D. contrail
Answer:
|
|
sciq-1890
|
multiple_choice
|
Which carrier molecule becomes less effective at binding oxygen as temperature increases?
|
[
"water",
"helium",
"hemoglobin",
"hydrogen"
] |
C
|
Relevant Documents:
Document 0:::
Liquid oxygen is the liquid form of molecular oxygen. It is abbreviated as LOX or less frequently LOXygen in the aerospace, submarine and gas industries. It was used as the oxidizer in the first liquid-fueled rocket invented in 1926 by Robert H. Goddard, an application which has continued to the present.
Physical properties
Liquid oxygen has a light or pale cyan color and is strongly paramagnetic: it can be suspended between the poles of a powerful horseshoe magnet. Liquid oxygen has a density of 1.141 g/cm³, slightly denser than liquid water, and is cryogenic with a freezing point of −218.79 °C (54.36 K) and a boiling point of −182.96 °C (90.19 K) at 101.325 kPa (1 atm). Liquid oxygen has an expansion ratio of 1:861 and because of this, it is used in some commercial and military aircraft as a transportable source of breathing oxygen.
Because of its cryogenic nature, liquid oxygen can cause the materials it touches to become extremely brittle. Liquid oxygen is also a very powerful oxidizing agent: organic materials will burn rapidly and energetically in liquid oxygen. Further, if soaked in liquid oxygen, some materials such as coal briquettes, carbon black, etc., can detonate unpredictably from sources of ignition such as flames, sparks or impact from light blows. Petrochemicals, including asphalt, often exhibit this behavior.
The tetraoxygen molecule (O4) was first predicted in 1924 by Gilbert N. Lewis, who proposed it to explain why liquid oxygen defied Curie's law. Modern computer simulations indicate that, although there are no stable O4 molecules in liquid oxygen, O2 molecules do tend to associate in pairs with antiparallel spins, forming transient O4 units.
Liquid nitrogen has a lower boiling point at −196 °C (77 K) than oxygen's −183 °C (90 K), and vessels containing liquid nitrogen can condense oxygen from air: when most of the nitrogen has evaporated from such a vessel, there is a risk that liquid oxygen remaining can react violently with organic material. Conversely, liquid nitrogen or liquid air can be oxygen-enriched b
Document 1:::
Water () is a polar inorganic compound that is at room temperature a tasteless and odorless liquid, which is nearly colorless apart from an inherent hint of blue. It is by far the most studied chemical compound and is described as the "universal solvent" and the "solvent of life". It is the most abundant substance on the surface of Earth and the only common substance to exist as a solid, liquid, and gas on Earth's surface. It is also the third most abundant molecule in the universe (behind molecular hydrogen and carbon monoxide).
Water molecules form hydrogen bonds with each other and are strongly polar. This polarity allows it to dissociate ions in salts and bond to other polar substances such as alcohols and acids, thus dissolving them. Its hydrogen bonding causes its many unique properties, such as having a solid form less dense than its liquid form, a relatively high boiling point of 100 °C for its molar mass, and a high heat capacity.
Water is amphoteric, meaning that it can exhibit properties of an acid or a base, depending on the pH of the solution that it is in; it readily produces both and ions. Related to its amphoteric character, it undergoes self-ionization. The product of the activities, or approximately, the concentrations of and is a constant, so their respective concentrations are inversely proportional to each other.
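A small worked example of the inverse proportionality just described (a sketch assuming the familiar value Kw ≈ 1.0×10−14 at 25 °C):

import math

KW = 1.0e-14  # ion product of water at 25 C, (mol/L)^2

def hydroxide_from_hydronium(h3o):
    """[OH-] = Kw / [H3O+]: the two concentrations are inversely proportional."""
    return KW / h3o

for h3o in (1e-3, 1e-7, 1e-11):  # acidic, neutral, basic solutions
    print(f"pH = {-math.log10(h3o):.0f}, [OH-] = {hydroxide_from_hydronium(h3o):.1e} mol/L")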
Physical properties
Water is the chemical substance with chemical formula H2O; one molecule of water has two hydrogen atoms covalently bonded to a single oxygen atom. Water is a tasteless, odorless liquid at ambient temperature and pressure. Liquid water has weak absorption bands at wavelengths of around 750 nm which cause it to appear to have a blue color. This can easily be observed in a water-filled bath or wash-basin whose lining is white. Large ice crystals, as in glaciers, also appear blue.
Under standard conditions, water is primarily a liquid, unlike other analogous hydrides of the oxygen family, which are generally gaseou
Document 2:::
Cooperative binding occurs in molecular binding systems containing more than one type, or species, of molecule and in which one of the partners is not mono-valent and can bind more than one molecule of the other species. In general, molecular binding is an interaction between molecules that results in a stable physical association between those molecules.
Cooperative binding occurs in a molecular binding system where two or more ligand molecules can bind to a receptor molecule. Binding can be considered "cooperative" if the actual binding of the first molecule of the ligand to the receptor changes the binding affinity of the second ligand molecule. The binding of ligand molecules to the different sites on the receptor molecule do not constitute mutually independent events. Cooperativity can be positive or negative, meaning that it becomes more or less likely that successive ligand molecules will bind to the receptor molecule.
Cooperative binding is observed in many biopolymers, including proteins and nucleic acids. Cooperative binding has been shown to be the mechanism underlying a large range of biochemical and physiological processes.
History and mathematical formalisms
Christian Bohr and the concept of cooperative binding
In 1904, Christian Bohr studied hemoglobin binding to oxygen under different conditions. When plotting hemoglobin saturation with oxygen as a function of the partial pressure of oxygen, he obtained a sigmoidal (or "S-shaped") curve. This indicates that the more oxygen is bound to hemoglobin, the easier it is for more oxygen to bind - until all binding sites are saturated. In addition, Bohr noticed that increasing CO2 pressure shifted this curve to the right - i.e. higher concentrations of CO2 make it more difficult for hemoglobin to bind oxygen. This latter phenomenon, together with the observation that hemoglobin's affinity for oxygen increases with increasing pH, is known as the Bohr effect.
A receptor molecule is said to exhibit cooper
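One standard formalism for the sigmoidal ("S-shaped") saturation curve Bohr observed is the Hill equation, Y = [L]^n / (K^n + [L]^n). The sketch below uses a Hill coefficient n = 2.8 and a half-saturation pressure of 26, both commonly cited ballpark figures for hemoglobin, used here purely as illustrative inputs:

def hill_saturation(ligand, k_half, n):
    """Fractional saturation Y = L^n / (K^n + L^n); n > 1 gives a sigmoidal curve."""
    return ligand**n / (k_half**n + ligand**n)

for p_o2 in (5, 15, 26, 40, 80):  # oxygen partial pressures around the half-saturation point
    print(f"pO2 = {p_o2:>3}: saturation = {hill_saturation(p_o2, k_half=26.0, n=2.8):.2f}")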
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
Superheated water is liquid water under pressure at temperatures between the usual boiling point, 100 °C (212 °F), and the critical temperature, 374 °C (705 °F). It is also known as "subcritical water" or "pressurized hot water". Superheated water is stable because of overpressure that raises the boiling point, or by heating it in a sealed vessel with a headspace, where the liquid water is in equilibrium with vapour at the saturated vapor pressure. This is distinct from the use of the term superheating to refer to water at atmospheric pressure above its normal boiling point, which has not boiled due to a lack of nucleation sites (sometimes experienced by heating liquids in a microwave).
Many of water's anomalous properties are due to very strong hydrogen bonding. Over the superheated temperature range the hydrogen bonds break, changing the properties more than usually expected by increasing temperature alone. Water becomes less polar and behaves more like an organic solvent such as methanol or ethanol. Solubility of organic materials and gases increases by several orders of magnitude and the water itself can act as a solvent, reagent, and catalyst in industrial and analytical applications, including extraction, chemical reactions and cleaning.
Change of properties with temperature
All materials change with temperature, but superheated water exhibits greater changes than would be expected from temperature considerations alone. The viscosity and surface tension of water drop, and its diffusivity increases, with increasing temperature.
Self-ionization of water increases with temperature, and the pKw of water at 250 °C is closer to 11 than the more familiar 14 at 25 °C. This means the concentration of hydronium ion (H3O+) and the concentration of hydroxide (OH−) are increased while the pH remains neutral. Specific heat capacity at constant pressure also increases with temperature, from 4.187 kJ/(kg·K) at 25 °C to 8.138 kJ/(kg·K) at 350 °C. A significant effect on the behaviour of water at high temperatures is its decreased dielectric constant.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which carrier molecule becomes less effective at binding oxygen as temperature increases?
A. water
B. helium
C. hemoglobin
D. hydrogen
Answer:
|
|
sciq-3369
|
multiple_choice
|
Over time, what changes solid rock into pieces?
|
[
"weathering",
"leaching",
"creep",
"metamorphosis"
] |
A
|
Relevant Documents:
Document 0:::
In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects.
Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting.
Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete.
Study
Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the
Document 1:::
Materials science has shaped the development of civilizations since the dawn of mankind. Better materials for tools and weapons have allowed mankind to spread and conquer, and advancements in material processing like steel and aluminum production continue to impact society today. Historians have regarded materials as such an important aspect of civilizations that entire periods of time have been defined by the predominant material used (Stone Age, Bronze Age, Iron Age). For most of recorded history, control of materials had been through alchemy or empirical means at best. The study and development of chemistry and physics assisted the study of materials, and eventually the interdisciplinary study of materials science emerged from the fusion of these studies. The history of materials science is the study of how different materials were used and developed through the history of Earth and how those materials affected the culture of the peoples of the Earth. The term "Silicon Age" is sometimes used to refer to the modern period of history during the late 20th to early 21st centuries.
Prehistory
In many cases, different cultures leave their materials as the only records; which anthropologists can use to define the existence of such cultures. The progressive use of more sophisticated materials allows archeologists to characterize and distinguish between peoples. This is partially due to the major material of use in a culture and to its associated benefits and drawbacks. Stone-Age cultures were limited by which rocks they could find locally and by which they could acquire by trading. The use of flint around 300,000 BCE is sometimes considered the beginning of the use of ceramics. The use of polished stone axes marks a significant advance, because a much wider variety of rocks could serve as tools.
The innovation of smelting and casting metals in the Bronze Age started to change the way that cultures developed and interacted with each other. Starting around 5,500 BCE,
Document 2:::
The rock cycle is a basic concept in geology that describes transitions through geologic time among the three main rock types: sedimentary, metamorphic, and igneous. Each rock type is altered when it is forced out of its equilibrium conditions. For example, an igneous rock such as basalt may break down and dissolve when exposed to the atmosphere, or melt as it is subducted under a continent. Due to the driving forces of the rock cycle, plate tectonics and the water cycle, rocks do not remain in equilibrium and change as they encounter new environments. The rock cycle explains how the three rock types are related to each other, and how processes change from one type to another over time. This cyclical aspect makes rock change a geologic cycle and, on planets containing life, a biogeochemical cycle.
Transition to igneous rock
When rocks are pushed deep under the Earth's surface, they may melt into magma. If the conditions no longer exist for the magma to stay in its liquid state, it cools and solidifies into an igneous rock. A rock that cools within the Earth is called intrusive or plutonic and cools very slowly, producing a coarse-grained texture such as the rock granite. As a result of volcanic activity, magma (which is called lava when it reaches Earth's surface) may cool very rapidly on the Earth's surface, exposed to the atmosphere; such rocks are called extrusive or volcanic. These rocks are fine-grained and sometimes cool so rapidly that no crystals can form, resulting in a natural glass such as obsidian; however, the most common fine-grained volcanic rock is basalt. Any of the three main types of rocks (igneous, sedimentary, and metamorphic rocks) can melt into magma and cool into igneous rocks.
Secondary changes
Epigenetic change (secondary processes occurring at low temperatures and low pressures) may be arranged under a number of headings, each of which is typical of a group of rocks or rock-forming minerals, though usually more than one of these alt
Document 3:::
In geology, petrifaction or petrification () is the process by which organic material becomes a fossil through the replacement of the original material and the filling of the original pore spaces with minerals. Petrified wood typifies this process, but all organisms, from bacteria to vertebrates, can become petrified (although harder, more durable matter such as bone, beaks, and shells survive the process better than softer remains such as muscle tissue, feathers, or skin). Petrifaction takes place through a combination of two similar processes: permineralization and replacement. These processes create replicas of the original specimen that are similar down to the microscopic level.
Processes
Permineralization
One of the processes involved in petrifaction is permineralization. The fossils created through this process tend to contain a large amount of the original material of the specimen. This process occurs when groundwater containing dissolved minerals (most commonly quartz, calcite, apatite (calcium phosphate), siderite (iron carbonate), and pyrite), fills pore spaces and cavities of specimens, particularly bone, shell or wood. The pores of the organisms' tissues are filled when these minerals precipitate out of the water. Two common types of permineralization are silicification and pyritization.
Silicification
Silicification is the process in which organic matter becomes saturated with silica. A common source of silica is volcanic material. Studies have shown that in this process, most of the original organic matter is destroyed. Silicification most often occurs in two environments—either the specimen is buried in sediments of deltas and floodplains or organisms are buried in volcanic ash. Water must be present for silicification to occur because it reduces the amount of oxygen present and therefore reduces the deterioration of the organism by fungi, maintains organism shape, and allows for the transportation and deposition of silica. The process begins
Document 4:::
The Géotechnique lecture is a biennial lecture on the topic of soil mechanics, organised by the British Geotechnical Association and named after its major scientific journal, Géotechnique.
This should not be confused with the annual BGA Rankine Lecture.
List of Géotechnique Lecturers
See also
Named lectures
Rankine Lecture
Terzaghi Lecture
External links
ICE Géotechnique journal
British Geotechnical Association
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Over time, what changes solid rock into pieces?
A. weathering
B. leaching
C. creep
D. metamorphosis
Answer:
|
|
sciq-5188
|
multiple_choice
|
What are the messenger molecules of the endocrine system?
|
[
"enzymes",
"acids",
"hormones",
"neurons"
] |
C
|
Relevant Documents:
Document 0:::
The following is a list of hormones found in Homo sapiens. Spelling is not uniform for many hormones. For example, current North American and international usage uses estrogen and gonadotropin, while British usage retains the Greek digraph in oestrogen and favours the earlier spelling gonadotrophin.
Hormone listing
Steroid
Document 1:::
Peptide hormones are hormones whose molecules are peptides. Peptide hormones have shorter amino acid chain lengths than protein hormones. These hormones have an effect on the endocrine system of animals, including humans. Most hormones can be classified as either amino acid–based hormones (amine, peptide, or protein) or steroid hormones. The former are water-soluble and act on the surface of target cells via second messengers; the latter, being lipid-soluble, move through the plasma membranes of target cells (both cytoplasmic and nuclear) to act within their nuclei.
Like all peptides, peptide hormones are synthesized in cells from amino acids according to mRNA transcripts, which are synthesized from DNA templates inside the cell nucleus. Preprohormones, peptide hormone precursors, are then processed in several stages, typically in the endoplasmic reticulum, including removal of the N-terminal signal sequence and sometimes glycosylation, resulting in prohormones. The prohormones are then packaged into membrane-bound secretory vesicles, which can be secreted from the cell by exocytosis in response to specific stimuli (e.g. an increase in Ca2+ and cAMP concentration in cytoplasm).
These prohormones often contain superfluous amino acid residues that were needed to direct folding of the hormone molecule into its active configuration but have no function once the hormone folds. Specific endopeptidases in the cell cleave the prohormone just before it is released into the bloodstream, generating the mature hormone form of the molecule. Mature peptide hormones then travel through the blood to all of the cells of the body, where they interact with specific receptors on the surfaces of their target cells.
Some neurotransmitters are secreted and released in a similar fashion to peptide hormones, and some "neuropeptides" may be used as neurotransmitters in the nervous system in addition to acting as hormones when released into the blood.
When a peptide hormone binds to a rec
Document 2:::
In molecular biology, the crustacean neurohormone family of proteins is a family of neuropeptides expressed by arthropods. The family includes the following types of neurohormones:
Crustacean hyperglycaemic hormone (CHH). CHH is primarily involved in blood sugar regulation, but also plays a role in the control of moulting and reproduction.
Moult-inhibiting hormone (MIH). MIH inhibits Y-organs where moulting hormone (ecdysteroid) is secreted. A moulting cycle is initiated when MIH secretion diminishes or stops.
Gonad-inhibiting hormone (GIH), also known as vitellogenesis-inhibiting hormone (VIH) because of its role in inhibiting vitellogenesis in female animals.
Mandibular organ-inhibiting hormone (MOIH). MOIH represses the synthesis of methyl farnesoate, the precursor of insect juvenile hormone III in the mandibular organ.
Ion transport peptide (ITP) from locust. ITP stimulates salt and water reabsorption and inhibits acid secretion in the ileum of the locust.
Caenorhabditis elegans uncharacterised protein ZC168.2.
These neurohormones are peptides of 70 to 80 amino acid residues which are processed from larger precursors. They contain six conserved cysteines that are involved in disulfide bonds.
Document 3:::
A neurochemical is a small organic molecule or peptide that participates in neural activity. The science of neurochemistry studies the functions of neurochemicals.
Prominent neurochemicals
Neurotransmitters and neuromodulators
Glutamate is the most common neurotransmitter. Most neurons secrete glutamate or GABA. Glutamate is excitatory, meaning that the release of glutamate by one cell usually causes adjacent cells to fire an action potential. (Note: Glutamate is chemically identical to the MSG commonly used to flavor food.)
GABA is an example of an inhibitory neurotransmitter.
Monoamine neurotransmitters:
Dopamine is a monoamine neurotransmitter. It plays a key role in the functioning of the limbic system, which is involved in emotional function and control. It also is involved in cognitive processes associated with movement, arousal, executive function, body temperature regulation, and pleasure and reward, and other processes.
Norepinephrine, also known as noradrenaline, is a monoamine neurotransmitter that is involved in arousal, pain perception, executive function, body temperature regulation, and other processes.
Epinephrine, also known as adrenaline, is a monoamine neurotransmitter that plays a role in the fight-or-flight response, increasing blood flow to muscles, the output of the heart, pupil dilation, and blood glucose.
Serotonin is a monoamine neurotransmitter that plays a regulatory role in mood, sleep, appetite, body temperature regulation, and other processes.
Histamine is a monoamine neurotransmitter that is involved in arousal, pain, body temperature regulation, and appetite.
Trace amines act as neuromodulators in monoamine neurons via binding to TAAR1.
Acetylcholine assists motor function and is involved in memory.
Nitric oxide functions as a neurotransmitter, despite being a gas. It is not grouped with the other neurotransmitters because it is not released in the same way.
Endocannabinoids act in the endocannabinoid system to control neurotransmitter release
Document 4:::
In the human endocrine system, a spongiocyte is a cell in the zona fasciculata of the adrenal cortex containing lipid droplets that show pronounced vacuolization, due to the way the cells are prepared for microscopic examination.
The lipid droplets contain neutral fats, fatty acids, cholesterol, and phospholipids; all of which are precursors to the steroid hormones secreted by the adrenal glands. The principal hormone secreted from the cells of the zona fasciculata are glucocorticoids, but some androgens are produced as well.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are the messenger molecules of the endocrine system?
A. enzymes
B. acids
C. hormones
D. neurons
Answer:
|
|
sciq-763
|
multiple_choice
|
What pathway in a plant do water and nutrients travel through from the roots to the leaves?
|
[
"skin",
"stem",
"bark",
"flowers"
] |
B
|
Relevant Documents:
Document 0:::
The soil-plant-atmosphere continuum (SPAC) is the pathway for water moving from soil through plants to the atmosphere. Continuum in the description highlights the continuous nature of water connection through the pathway. The low water potential of the atmosphere, and relatively higher (i.e. less negative) water potential inside leaves, leads to a diffusion gradient across the stomatal pores of leaves, drawing water out of the leaves as vapour. As water vapour transpires out of the leaf, further water molecules evaporate off the surface of mesophyll cells to replace the lost molecules since water in the air inside leaves is maintained at saturation vapour pressure. Water lost at the surface of cells is replaced by water from the xylem, which due to the cohesion-tension properties of water in the xylem of plants pulls additional water molecules through the xylem from the roots toward the leaf.
Components
The transport of water along this pathway occurs in components, variously defined among scientific disciplines:
Soil physics characterizes water in soil in terms of tension,
Physiology of plants and animals characterizes water in organisms in terms of diffusion pressure deficit, and
Meteorology uses vapour pressure or relative humidity to characterize atmospheric water.
SPAC integrates these components and is defined as a:
...concept recognising that the field with all its components (soil, plant, animals and the ambient atmosphere taken together) constitutes a physically integrated, dynamic system in which the various flow processes involving energy and matter occur simultaneously and independently like links in the chain.
This characterises the state of water in different components of the SPAC as expressions of the energy level or water potential of each. Modelling of water transport between components relies on SPAC, as do studies of water potential gradients between segments.
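Since such modelling treats flow between compartments as driven by water-potential differences, a common first approximation is an Ohm's-law analogy: flux proportional to the potential drop times a conductance. The sketch below is illustrative only; the potentials are plausible textbook magnitudes (in MPa) and the conductances are hypothetical placeholders, not measured values:

# Ohm's-law analogy for SPAC water flow (illustrative values only).
psi = {"soil": -0.3, "root": -0.5, "leaf": -1.5, "atmosphere": -95.0}  # water potentials, MPa
conductance = {("soil", "root"): 2.0,          # hypothetical hydraulic conductances
               ("root", "leaf"): 1.5,
               ("leaf", "atmosphere"): 0.05}

for (a, b), k in conductance.items():
    flux = k * (psi[a] - psi[b])  # positive flux: water moves from a toward b
    print(f"{a} -> {b}: flux = {flux:.2f} (arbitrary units)")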
See also
Ecohydrology
Evapotranspiration
Hydraulic redistribution; a passive mechanism by which water moves from moist to dry soils via plant roots
Document 1:::
A stem is one of two main structural axes of a vascular plant, the other being the root. It supports leaves, flowers and fruits; transports water and dissolved substances between the roots and the shoots in the xylem and phloem; carries out photosynthesis; stores nutrients; and produces new living tissue. The stem can also be called the halm, haulm, or culm.
The stem is normally divided into nodes and internodes:
The nodes are the points of attachment for leaves and can hold one or more leaves. There are sometimes axillary buds between the stem and leaf which can grow into branches (with leaves, conifer cones, or flowers). Adventitious roots may also be produced from the nodes. Vines may produce tendrils from nodes.
The internodes distance one node from another.
The term "shoots" is often confused with "stems"; "shoots" generally refers to new fresh plant growth, including both stems and other structures like leaves or flowers.
In most plants, stems are located above the soil surface, but some plants have underground stems.
Stems have several main functions:
Support for and the elevation of leaves, flowers, and fruits. The stems keep the leaves in the light and provide a place for the plant to keep its flowers and fruits.
Transport of fluids between the roots and the shoots in the xylem and phloem.
Storage of nutrients.
Production of new living tissue. The normal lifespan of plant cells is one to three years. Stems have cells called meristems that annually generate new living tissue.
Photosynthesis.
Stems have two pipe-like tissues called xylem and phloem. The xylem tissue arises from the cell facing inside and transports water by the action of transpiration pull, capillary action, and root pressure. The phloem tissue arises from the cell facing outside and consists of sieve tubes and their companion cells. The function of phloem tissue is to distribute food from photosynthetic tissue to other tissues. The two tissues are separated by cambium, a tis
Document 2:::
Transfer cells are specialized parenchyma cells that have an increased surface area, due to infoldings of the plasma membrane. They facilitate the transport of sugars from a sugar source, mainly mature leaves, to a sugar sink, often developing leaves or fruits. They are found in nectaries of flowers and some carnivorous plants.
Transfer cells are especially found in plants in regions of nutrient absorption or secretion.
The term transfer cell was coined by Brian Gunning and John Stewart Pate. Their presence is generally correlated with the existence of extensive solute influxes across the plasma membrane.
Document 3:::
In plants, the transpiration stream is the uninterrupted stream of water and solutes which is taken up by the roots and transported via the xylem to the leaves where it evaporates into the air/apoplast-interface of the substomatal cavity. It is driven by capillary action and in some plants by root pressure. The main driving factor is the difference in water potential between the soil and the substomatal cavity caused by transpiration.
Transpiration
Transpiration can be regulated through stomatal closure or opening. It allows plants to transport water efficiently up to their highest organs, to regulate the temperature of stems and leaves, and to carry out upstream signalling such as the dispersal of an apoplastic alkalinization during local oxidative stress.
Summary of water movement:
Soil
Roots and Root Hair
Xylem
Leaves
Stomata
Air
Osmosis
The water passes from the soil to the root by osmosis. The long and thin shape of root hairs maximizes surface area so that more water can enter. There is greater water potential in the soil than in the cytoplasm of the root hair cells. As the surface membrane of the root hair cell is semi-permeable, osmosis can take place, and water passes from the soil into the root hairs.
The next stage in the transpiration stream is water passing into the xylem vessels. The water either goes through the cortex cells (between the root cells and the xylem vessels) or it bypasses them – going through their cell walls.
After this, the water moves up the xylem vessels to the leaves. Diffusion takes place because there is a water potential gradient between water in the xylem vessel and the leaf (as water is transpiring out of the leaf), so water diffuses up toward the leaf. There is also a pressure change between the top and bottom of the xylem vessels, due to water loss from the leaves, which reduces the pressure of water at the top of the vessels.
Document 4:::
Hydraulic redistribution is a passive mechanism where water is transported from moist to dry soils via subterranean networks. It occurs in vascular plants that commonly have roots in both wet and dry soils, especially plants with both taproots that grow vertically down to the water table, and lateral roots that sit close to the surface. In the late 1980s, there was a movement to understand the full extent of these subterranean networks. Since then it was found that vascular plants are assisted by fungal networks which grow on the root system to promote water redistribution.
Process
Hot, dry periods, when the surface soil dries out to the extent that the lateral roots exude whatever water they contain, will result in the death of such lateral roots unless the water is replaced. Similarly, under extremely wet conditions when lateral roots are inundated by flood waters, oxygen deprivation will also lead to root peril. In plants that exhibit hydraulic redistribution, there are xylem pathways from the taproots to the laterals, such that the absence or abundance of water at the laterals creates a pressure potential analogous to that of transpirational pull. In drought conditions, ground water is drawn up through the taproot to the laterals and exuded into the surface soil, replenishing that which was lost. Under flooding conditions, plant roots perform a similar function in the opposite direction.
Though often referred to as hydraulic lift, movement of water by the plant roots has been shown to occur in any direction. This phenomenon has been documented in over sixty plant species spanning a variety of plant types (from herbs and grasses to shrubs and trees) and over a range of environmental conditions (from the Kalahari Desert to the Amazon Rainforest).
Causes
The movement of this water can be explained by a water transport theory throughout a plant. This well-established water transport theory is called the cohesion-tension theory. In brief, it explains the movement
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What pathway in a plant do water and nutrients travel through from the roots to the leaves?
A. skin
B. stem
C. bark
D. flowers
Answer:
|
|
scienceQA-5502
|
multiple_choice
|
What do these two changes have in common?
a piece of avocado turning brown
a piece of pizza rotting in a trashcan
|
[
"Both are only physical changes.",
"Both are chemical changes.",
"Both are caused by heating.",
"Both are caused by cooling."
] |
B
|
Step 1: Think about each change.
A piece of avocado turning brown is a chemical change. The avocado reacts with oxygen in the air to form a different type of matter.
If you scrape off the brown part of the avocado, the inside will still be green. The inside hasn't touched the air. So the chemical change hasn't happened to that part of the avocado.
A piece of pizza rotting is a chemical change. The matter in the pizza breaks down and slowly turns into a different type of matter.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are chemical changes. They are not physical changes.
Both are chemical changes.
Both changes are chemical changes. The type of matter before and after each change is different.
Both are caused by heating.
Neither change is caused by heating.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relevant Documents:
Document 0:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but cannot usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reversible.
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
In physics, a dynamical system is said to be mixing if the phase space of the system becomes strongly intertwined, according to at least one of several mathematical definitions. For example, a measure-preserving transformation T is said to be strong mixing if

lim_{n→∞} μ(T⁻ⁿA ∩ B) = μ(A)μ(B)

whenever A and B are any measurable sets and μ is the associated measure. Other definitions are possible, including weak mixing and topological mixing.
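As a numerical sanity check of this definition, the sketch below uses Monte Carlo sampling to estimate μ(T⁻ⁿA ∩ B) for the doubling map T(x) = 2x mod 1 on [0, 1) with Lebesgue measure, a standard strong-mixing example; the intervals A and B and the sample size are arbitrary choices:

import random

# Doubling map T(x) = 2x mod 1 is strong mixing for Lebesgue measure on [0, 1).
# Estimate mu(T^-n A ∩ B) by sampling and compare with mu(A) * mu(B) = 0.1.
A = (0.0, 0.5)   # mu(A) = 0.5
B = (0.2, 0.4)   # mu(B) = 0.2

def in_interval(x, iv):
    return iv[0] <= x < iv[1]

def iterate(x, n):
    for _ in range(n):
        x = (2.0 * x) % 1.0
    return x

random.seed(0)
samples = [random.random() for _ in range(200_000)]
for n in (1, 3, 5, 10):
    # x lies in T^-n A exactly when T^n x lies in A
    hits = sum(1 for x in samples if in_interval(x, B) and in_interval(iterate(x, n), A))
    print(n, round(hits / len(samples), 4), "target:", 0.5 * 0.2)

The estimates approach 0.1 as n grows, illustrating how T⁻ⁿA decorrelates from B.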
The mathematical definition of mixing is meant to capture the notion of physical mixing. A canonical example is the Cuba libre: suppose one is adding rum (the set A) to a glass of cola. After stirring the glass, the bottom half of the glass (the set B) will contain rum, in the same proportion as elsewhere in the glass. The mixing is uniform: no matter which region B one looks at, some of A will be in that region. A far more detailed, but still informal description of mixing can be found in the article on mixing (mathematics).
Every mixing transformation is ergodic, but there are ergodic transformations which are not mixing.
Physical mixing
The mixing of gases or liquids is a complex physical process, governed by a convective diffusion equation that may involve non-Fickian diffusion as in spinodal decomposition. The convective portion of the governing equation contains fluid motion terms that are governed by the Navier–Stokes equations. When fluid properties such as viscosity depend on composition, the governing equations may be coupled. There may also be temperature effects. It is not clear that fluid mixing processes are mixing in the mathematical sense.
Small rigid objects (such as rocks) are sometimes mixed in a rotating drum or tumbler. The 1969 Selective Service draft lottery was carried out by mixing plastic capsules which contained a slip of paper (marked with a day of the year).
See also
Miscibility
Document 3:::
Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are asked to choose which of the two is better. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria.
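A minimal sketch of the kind of scoring such pairwise judgements support (a plain Bradley–Terry-style online update in Python; real ACJ systems add adaptive pair selection and a proper model fit, so treat this as illustrative only, with made-up script names):

import math

# Each script gets a quality parameter theta; P(i beats j) = sigmoid(theta_i - theta_j).
# One small gradient step per recorded judgement (winner listed first).
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

theta = {"script_A": 0.0, "script_B": 0.0, "script_C": 0.0}
judgements = [("script_A", "script_B"),
              ("script_A", "script_C"),
              ("script_B", "script_C"),
              ("script_A", "script_B")]

lr = 0.5  # learning rate
for winner, loser in judgements:
    p = sigmoid(theta[winner] - theta[loser])  # predicted probability the winner wins
    theta[winner] += lr * (1.0 - p)
    theta[loser] -= lr * (1.0 - p)

print(sorted(theta.items(), key=lambda kv: -kv[1]))  # scripts in estimated rank order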
Introduction
Traditional exam script marking began in Cambridge 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.)
Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental.
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel
Document 4:::
In cooking, proofing (also called proving) is a step in the preparation of yeast bread and other baked goods in which the dough is allowed to rest and rise a final time before baking. During this rest period, yeast ferments the dough and produces gases, thereby leavening the dough.
In contrast, proofing or blooming yeast (as opposed to proofing the dough) may refer to the process of first suspending yeast in warm water, a necessary hydration step when baking with active dry yeast. Proofing can also refer to the process of testing the viability of dry yeast by suspending it in warm water with carbohydrates (sugars). If the yeast is still alive, it will feed on the sugar and produce a visible layer of foam on the surface of the water mixture.
Fermentation rest periods are not always explicitly named, and can appear in recipes as "Allow dough to rise." When they are named, terms include "bulk fermentation", "first rise", "second rise", "final proof" and "shaped proof".
Dough processes
The process of making yeast-leavened bread involves a series of alternating work and rest periods. Work periods occur when the dough is manipulated by the baker. Some work periods are called mixing, kneading, and folding, as well as division, shaping, and panning. Work periods are typically followed by rest periods, which occur when dough is allowed to sit undisturbed. Particular rest periods include, but are not limited to, autolyse, bulk fermentation and proofing. Proofing, also sometimes called final fermentation, is the specific term for allowing dough to rise after it has been shaped and before it is baked.
Some breads begin mixing with an autolyse. This refers to a period of rest after the initial mixing of flour and water, a rest period that occurs sequentially before the addition of yeast, salt and other ingredients. This rest period allows for better absorption of water and helps the gluten and starches to align. The autolyse is credited to Raymond Calvel, who recommende
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
a piece of avocado turning brown
a piece of pizza rotting in a trashcan
A. Both are only physical changes.
B. Both are chemical changes.
C. Both are caused by heating.
D. Both are caused by cooling.
Answer:
|
sciq-1939
|
multiple_choice
|
The term science comes from a latin word that means?
|
[
"were knowledge",
"having information",
"only knowledge",
"having knowledge"
] |
D
|
Relevant Documents:
Document 0:::
Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers.
There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM.
Terminology
History
Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 2:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
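A minimal sketch of the defining closure property (a knowledge space is a family of knowledge states that contains the empty set and the full domain and is closed under union; the toy domain and states below are made up):

from itertools import combinations

# Toy family of knowledge states over the skills {a, b, c}.
states = [frozenset(), frozenset("a"), frozenset("ab"),
          frozenset("ac"), frozenset("abc")]

def is_union_closed(family):
    fam = set(family)
    return all((s | t) in fam for s, t in combinations(fam, 2))

print(is_union_closed(states))  # True: the union of any two states is again a state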
Document 3:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can develop an interest in these subjects, leading secondary school pupils to choose science A levels, which can in turn lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The term science comes from a latin word that means?
A. were knowledge
B. having information
C. only knowledge
D. having knowledge
Answer:
|
|
sciq-4112
|
multiple_choice
|
Where does photosynthesis occur in plants?
|
[
"golgi bodies",
"cell membrane",
"in chloroplasts",
"nucleus"
] |
C
|
Relevant Documents:
Document 0:::
C3 carbon fixation is the most common of three metabolic pathways for carbon fixation in photosynthesis, the other two being C4 and CAM. This process converts carbon dioxide and ribulose bisphosphate (RuBP, a 5-carbon sugar) into two molecules of 3-phosphoglycerate through the following reaction:
CO2 + H2O + RuBP → (2) 3-phosphoglycerate
This reaction was first discovered by Melvin Calvin, Andrew Benson and James Bassham in 1950. C3 carbon fixation occurs in all plants as the first step of the Calvin–Benson cycle. (In C4 and CAM plants, carbon dioxide is drawn out of malate and into this reaction rather than directly from the air.)
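A trivial carbon-balance check of this stoichiometry (a back-of-the-envelope sketch in Python; the carbon counts per molecule are the standard ones):

# Carbon balance for CO2 + H2O + RuBP -> 2 x 3-phosphoglycerate
carbons = {"CO2": 1, "H2O": 0, "RuBP": 5, "3-phosphoglycerate": 3}
lhs = carbons["CO2"] + carbons["H2O"] + carbons["RuBP"]
rhs = 2 * carbons["3-phosphoglycerate"]
assert lhs == rhs == 6  # one fixed CO2 plus the 5-carbon RuBP yields two 3-carbon products
print("carbons balanced:", lhs, "=", rhs)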
Plants that survive solely on C3 fixation (C3 plants) tend to thrive in areas where sunlight intensity is moderate, temperatures are moderate, carbon dioxide concentrations are around 200 ppm or higher, and groundwater is plentiful. The C3 plants, originating during the Mesozoic and Paleozoic eras, predate the C4 plants and still represent approximately 95% of Earth's plant biomass, including important food crops such as rice, wheat, soybeans and barley.
C3 plants cannot grow in very hot areas at today's atmospheric CO2 level (significantly depleted during hundreds of millions of years from above 5000 ppm) because RuBisCO incorporates more oxygen into RuBP as temperatures increase. This leads to photorespiration (also known as the oxidative photosynthetic carbon cycle, or C2 photosynthesis), which leads to a net loss of carbon and nitrogen from the plant and can therefore limit growth.
C3 plants lose up to 97% of the water taken up through their roots by transpiration. In dry areas, C3 plants shut their stomata to reduce water loss, but this stops CO2 from entering the leaves and therefore reduces the concentration of CO2 in the leaves. This lowers the CO2:O2 ratio and therefore also increases photorespiration. C4 and CAM plants have adaptations that allow them to survive in hot and dry areas, and they can therefore out-compete
Document 1:::
Photosynthesis
Oxygenic photosynthesis uses two multi-subunit photosystems (I and II) located in the cell membranes of cyanobacteria and in the thylakoid membranes of chloroplasts in plants and algae. Photosystem II (PSII) has a P680 reaction centre containing chlorophyll 'a' that uses light energy to carr
Document 2:::
Photosynthate partitioning is the differential distribution of photosynthates to plant tissues. A photosynthate is the resulting product of photosynthesis, these products are generally sugars. These sugars that are created from photosynthesis are broken down to create energy for use by the plant. Sugar and other compounds move via the phloem to tissues that have an energy demand. These areas of demand are called sinks. While areas with an excess of sugars and a low energy demand are called sources. Many times sinks are the actively growing tissues of the plant while the sources are where sugars are produced by photosynthesis—the leaves of plants. Sugars are actively loaded into the phloem and moved by a positive pressure flow created by solute concentrations and turgor pressure between xylem and phloem vessel elements (specialized plant cells). This movement of sugars is referred to as translocation. When sugars arrive at the sink they are unloaded for storage or broken down/metabolized.
The partitioning of these sugars depends on multiple factors such as the vascular connections that exist, the location of the sink relative to the source, the developmental stage, and the strength of that sink. Vascular connections exist between sources and sinks and those that are the most direct have been shown to receive more photosynthates than those that must travel through extensive connections. This also goes for proximity: sinks closer to the source are easier to translocate sugars to. Developmental stage plays a large role in partitioning; organs that are young, such as meristems and new leaves, have a higher demand, as do those that are entering reproductive maturity and creating fruits, flowers, and seeds. Many of these developing organs have a higher sink strength. Those with higher sink strengths receive more photosynthates than lower strength sinks. Sinks compete to receive these compounds, and a combination of factors plays a part in determining how much and how fast sinks rece
Document 3:::
Photosystems are functional and structural units of protein complexes involved in photosynthesis. Together they carry out the primary photochemistry of photosynthesis: the absorption of light and the transfer of energy and electrons. Photosystems are found in the thylakoid membranes of plants, algae, and cyanobacteria. These membranes are located inside the chloroplasts of plants and algae, and in the cytoplasmic membrane of photosynthetic bacteria. There are two kinds of photosystems: PSI and PSII.
PSII will absorb red light, and PSI will absorb far-red light. Although photosynthetic activity will be detected when the photosystems are exposed to either red or far-red light, the photosynthetic activity will be the greatest when plants are exposed to both wavelengths of light. Studies have actually demonstrated that the two wavelengths together have a synergistic effect on the photosynthetic activity, rather than an additive one.
Each photosystem has two parts: a reaction center, where the photochemistry occurs, and an antenna complex, which surrounds the reaction center. The antenna complex contains hundreds of chlorophyll molecules which funnel the excitation energy to the center of the photosystem. At the reaction center, the energy will be trapped and transferred to produce a high energy molecule.
The main function of PSII is to efficiently split water into oxygen molecules and protons. PSII will provide a steady stream of electrons to PSI, which will boost these in energy and transfer them to NADP+ and H+ to make NADPH. The hydrogen from this NADPH can then be used in a number of different processes within the plant.
Reaction centers
Reaction centers are multi-protein complexes found within the thylakoid membrane.
At the heart of a photosystem lies the reaction center, which is an enzyme that uses light to reduce and oxidize molecules (give off and take up electrons). This reaction center is surrounded by light-harvesting complexes that enhance the absorptio
Document 4:::
Proteinoplasts (sometimes called proteoplasts, aleuroplasts, and aleuronaplasts) are specialized organelles found only in plant cells. Proteinoplasts belong to a broad category of organelles known as plastids, specialized double-membrane organelles of plant cells that carry out functions such as energy metabolism and biosynthetic reactions. Several types of plastids are recognized, including leucoplasts, chromoplasts, and chloroplasts, distinguished by characteristics such as size, function, and physical traits. Chromoplasts synthesize and store large amounts of carotenoids. Chloroplasts are photosynthesizing structures that capture light energy for the plant. Leucoplasts are a colorless type of plastid, meaning no photosynthesis occurs there; their lack of pigmentation is due to the absence of the thylakoid structures that give chloroplasts and chromoplasts their color. Proteinoplasts are a subtype of leucoplast that store proteins; they contain crystalline bodies of protein and can be the sites of enzyme activity involving those proteins. Proteinoplasts are found in many seeds, such as Brazil nuts, peanuts and pulses. Although all plastids contain high concentrations of protein, proteinoplasts were identified in the 1960s and 1970s as having large protein inclusions that are visible with both light microscopes and electron microscopes. Other subtypes of leucoplast include amyloplasts, which store and synthesize starch, and elaioplasts, which synthesize and store lipids in plant cells.
See also
Chloroplast and etioplast
Chromoplast
Leucoplast
Amyloplast
Elaioplast
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where does photosynthesis occur in plants?
A. golgi bodies
B. cell membrane
C. in chloroplasts
D. nucleus
Answer:
|
|
sciq-3889
|
multiple_choice
|
What are proteins made up of?
|
[
"lewis acids",
"amino acids",
"atoms acids",
"detected acids"
] |
B
|
Relevant Documents:
Document 0:::
This is a list of topics in molecular biology. See also index of biochemistry articles.
Document 1:::
Proteins are a class of biomolecules composed of amino acid chains.
Biochemistry
Antifreeze protein, class of polypeptides produced by certain fish, vertebrates, plants, fungi and bacteria
Conjugated protein, protein that functions in interaction with other chemical groups attached by covalent bonds
Denatured protein, protein which has lost its functional conformation
Matrix protein, structural protein linking the viral envelope with the virus core
Protein A, bacterial surface protein that binds antibodies
Protein A/G, recombinant protein that binds antibodies
Protein C, anticoagulant
Protein G, bacterial surface protein that binds antibodies
Protein L, bacterial surface protein that binds antibodies
Protein S, plasma glycoprotein
Protein Z, glycoprotein
Protein catabolism, the breakdown of proteins into amino acids and simple derivative compounds
Protein complex, group of two or more associated proteins
Protein electrophoresis, method of analysing a mixture of proteins by means of gel electrophoresis
Protein folding, process by which a protein assumes its characteristic functional shape or tertiary structure
Protein isoform, version of a protein with some small differences
Protein kinase, enzyme that modifies other proteins by chemically adding phosphate groups to them
Protein ligands, atoms, molecules, and ions which can bind to specific sites on proteins
Protein microarray, piece of glass on which different molecules of protein have been affixed at separate locations in an ordered manner
Protein phosphatase, enzyme that removes phosphate groups that have been attached to amino acid residues of proteins
Protein purification, series of processes intended to isolate a single type of protein from a complex mixture
Protein sequencing, protein method
Protein splicing, intramolecular reaction of a particular protein in which an internal protein segment is removed from a precursor protein
Protein structure, unique three-dimensional shape of amino
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
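As a quick worked check of this example (a sketch assuming an ideal gas with constant heat capacity): with no heat exchange, the first law gives dU = −p dV < 0 during expansion (dV > 0), and since U = n C_V T for an ideal gas, dT = −p dV / (n C_V) < 0, so the temperature decreases.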
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
Biochemistry is the study of the chemical processes in living organisms. It deals with the structure and function of cellular components such as proteins, carbohydrates, lipids, nucleic acids and other biomolecules.
Articles related to biochemistry include:
0–9
2-amino-5-phosphonovalerate - 3' end - 5' end
Document 4:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered the exam three times per year: once in April, once in October, and once in November. Some graduate programs in the United States recommended taking this exam, while others required the score as part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. The biochemistry subject test contained 180 questions.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99th percentile) and 320 (1st percentile) respectively. The mean score for all test takers from July 2009 to July 2012 was 526, with a standard deviation of 95.
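To illustrate how such scaled scores relate to percentiles, here is a minimal sketch of our own; it assumes an approximately normal score distribution, which the reported 99th/1st-percentile cutoffs suggest is only roughly true:

from statistics import NormalDist

# Reported mean and standard deviation for July 2009 - July 2012 test takers
scores = NormalDist(mu=526, sigma=95)

# Approximate percentile of a scaled score of 700 under the normality assumption
print(f"{scores.cdf(700):.1%}")  # roughly the 97th percentile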
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS decided not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are proteins made up of?
A. lewis acids
B. amino acids
C. atoms acids
D. detected acids
Answer:
|
|
sciq-5872
|
multiple_choice
|
What is the term for when the water of the ocean slowly rises and falls?
|
[
"dew",
"waves",
"currents",
"tides"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In fluid dynamics, dispersion of water waves generally refers to frequency dispersion, which means that waves of different wavelengths travel at different phase speeds. Water waves, in this context, are waves propagating on the water surface, with gravity and surface tension as the restoring forces. As a result, water with a free surface is generally considered to be a dispersive medium.
For a certain water depth, surface gravity waves – i.e. waves occurring at the air–water interface and gravity as the only force restoring it to flatness – propagate faster with increasing wavelength. On the other hand, for a given (fixed) wavelength, gravity waves in deeper water have a larger phase speed than in shallower water. In contrast with the behavior of gravity waves, capillary waves (i.e. only forced by surface tension) propagate faster for shorter wavelengths.
Besides frequency dispersion, water waves also exhibit amplitude dispersion. This is a nonlinear effect, by which waves of larger amplitude have a different phase speed from small-amplitude waves.
Frequency dispersion for surface gravity waves
This section is about frequency dispersion for waves on a fluid layer forced by gravity, and according to linear theory. For surface tension effects on frequency dispersion, see surface tension effects in Airy wave theory and capillary wave.
Wave propagation and dispersion
The simplest propagating wave of unchanging form is a sine wave. A sine wave with water surface elevation η(x, t) is given by:
η(x, t) = a sin θ(x, t),
where a is the amplitude (in metres) and θ = θ(x, t) is the phase function (in radians), depending on the horizontal position (x, in metres) and time (t, in seconds):
θ = 2π (x/λ − t/T) = k x − ω t,
with k = 2π/λ and ω = 2π/T,
where:
λ is the wavelength (in metres),
T is the period (in seconds),
k is the wavenumber (in radians per metre) and
ω is the angular frequency (in radians per second).
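To make these definitions concrete, here is a minimal sketch (our own illustration; the function name and sample values are hypothetical) computing k, ω, and the phase speed c = ω/k = λ/T for a monochromatic wave:

import math

def wave_parameters(wavelength_m: float, period_s: float):
    """Return wavenumber k (rad/m), angular frequency omega (rad/s),
    and phase speed c (m/s) for a monochromatic surface wave."""
    k = 2 * math.pi / wavelength_m   # k = 2*pi/lambda
    omega = 2 * math.pi / period_s   # omega = 2*pi/T
    c = omega / k                    # phase speed, equal to lambda/T
    return k, omega, c

# Example: a 100 m swell with an 8 s period
k, omega, c = wave_parameters(100.0, 8.0)
print(f"k = {k:.4f} rad/m, omega = {omega:.4f} rad/s, c = {c:.2f} m/s")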
Characteristic phases of a water wave are:
the upward zero-crossing at θ = 0,
the wave crest at θ = ½ π,
th
Document 2:::
Stable stratification of fluids occurs when each layer is less dense than the one below it. Unstable stratification is when each layer is denser than the one below it.
Buoyancy forces tend to preserve stable stratification; the higher layers float on the lower ones. In unstable stratification, on the other hand, buoyancy forces cause convection. The less-dense layers rise through the denser layers above, and the denser layers sink through the less-dense layers below. Stratifications can become more or less stable if layers change density. The processes involved are important in many science and engineering fields.
Destabilization and mixing
Stable stratifications can become unstable if layers change density. This can happen due to outside influences (for instance, if water evaporates from a freshwater lens, making it saltier and denser, or if a pot or layered beverage is heated from below, making the bottom layer less dense). However, it can also happen due to internal diffusion of heat (the warmer layer slowly heats the adjacent cooler one) or other physical properties. This often causes mixing at the interface, creating new diffusive layers (see photo of coffee and milk).
Sometimes, two physical properties diffuse between layers simultaneously; salt and temperature, for instance. This may form diffusive layers or even salt fingering, when the surfaces of the diffusive layers become so wavy that there are "fingers" of layers reaching up and down.
Not all mixing is driven by density changes. Other physical forces may also mix stably-stratified layers. Sea spray and whitecaps (foaming whitewater on waves) are examples of water mixed into air, and air into water, respectively. In a fierce storm the air/water boundary may grow indistinct. Some of these wind waves are Kelvin-Helmholtz waves.
Depending on the size of the velocity difference and the size of the density contrast between the layers, Kelvin-Helmholtz waves can look different. For instance, between two l
Document 3:::
Wind-wave dissipation or "swell dissipation" is the process by which a wave generated by a weather system loses the mechanical energy transferred to it from the atmosphere via wind. Wind waves, as their name suggests, are generated by wind transferring energy from the atmosphere to the ocean's surface; capillary-gravity waves play an essential role in this effect. "Wind waves" or "swell" are also known as surface gravity waves.
General physics and theory
The process of wind-wave dissipation can be explained by applying energy spectrum theory in a manner similar to that used for the formation of wind waves (generally assuming that spectral dissipation is a function of the wave spectrum). However, although recent innovative improvements in field observation (such as those of Banner & Babanin et al.) have contributed to solving the riddles of wave-breaking behavior, there is still no clear, exact theory of the wind-wave dissipation process because of its nonlinear behavior.
By past and present observations and derived theories, the physics of ocean-wave dissipation can be categorized by the water-depth regions a wave passes through. In deep water, wave dissipation occurs through friction or drag forces, such as opposing winds or viscous forces generated by turbulent flows (usually nonlinear forces). In shallow water, wave dissipation mostly takes the form of shore wave breaking (see Types of wave breaking).
Some simple general descriptions of wind-wave dissipation (defined by Luigi Cavaleri et al.) consider only ocean surface waves such as wind waves. For simplicity, many proposed mechanisms ignore the interactions of waves with the vertical structure of the upper layers of the ocean.
Sources of wind-wave dissipation
In general, the physics of wave dissipation can be categorized by its dissipation sources, such as 1) wa
Document 4:::
In physical oceanography, Langmuir circulation consists of a series of shallow, slow, counter-rotating vortices at the ocean's surface aligned with the wind.
These circulations are developed when wind blows steadily over the sea surface.
Irving Langmuir discovered this phenomenon after observing windrows of seaweed in the Sargasso Sea in 1927.
Langmuir circulations circulate within the mixed layer; however, it is not yet clear how strongly they can cause mixing at the base of the mixed layer.
Theory
The driving force of these circulations is an interaction of the mean flow with wave averaged flows of the surface waves.
Stokes drift velocity of the waves stretches and tilts the vorticity of the flow near the surface.
The production of vorticity in the upper ocean is balanced by downward (often turbulent) diffusion.
For a flow driven by a wind characterized by its friction velocity, the ratio of vorticity diffusion and production defines the Langmuir number,
where the first definition is for a monochromatic wave field of given amplitude, frequency, and wavenumber, and the second uses a generic inverse length scale and Stokes velocity scale.
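For a concrete illustration, one widely used form is the turbulent Langmuir number La_t = √(u∗/U_s) of McWilliams, Sullivan & Moeng (1997); this specific choice is our assumption, not necessarily the definition intended above, with the Stokes velocity scale U_s = σ k a² for a monochromatic deep-water wave:

import math

def turbulent_langmuir_number(u_star: float, amplitude: float,
                              frequency: float, wavenumber: float) -> float:
    """La_t = sqrt(u* / U_s), with Stokes velocity scale U_s = sigma * k * a**2
    for a monochromatic deep-water wave (McWilliams et al. 1997 convention;
    assumed here, not taken from the text above)."""
    u_stokes = frequency * wavenumber * amplitude ** 2  # Stokes drift scale (m/s)
    return math.sqrt(u_star / u_stokes)

# Illustrative values: water-side friction velocity ~6 mm/s, 1 m amplitude,
# 0.8 rad/s frequency, 0.065 rad/m wavenumber (deep-water swell)
print(f"La_t = {turbulent_langmuir_number(0.006, 1.0, 0.8, 0.065):.2f}")  # ~0.34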
This is exemplified by the Craik–Leibovich equations
which are an approximation of the Lagrangian mean.
In the Boussinesq approximation the governing equations can be written
where
u is the fluid velocity,
Ω is planetary rotation,
u_s is the Stokes drift velocity of the surface wave field,
p is the pressure,
g is the acceleration due to gravity,
ρ is the density,
ρ₀ is the reference density,
ν is the viscosity, and
κ is the diffusivity.
In open-ocean conditions, where there may not be a dominant length scale controlling the scale of the Langmuir cells, the concept of Langmuir turbulence is advanced.
Observations
The circulation has been observed to lie between 0° and 20° to the right of the wind in the northern hemisphere,
with the helices forming bands of divergence and convergence at the surface.
At the convergence zones, there ar
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for when the water of the ocean slowly rises and falls?
A. dew
B. waves
C. currents
D. tides
Answer:
|
|
sciq-2318
|
multiple_choice
|
What kind of proteins either activate or deactivate the transcription of other genes?
|
[
"master complex proteins",
"master regulatory proteins",
"perfect regulatory proteins",
"carbon proteins"
] |
B
|
Relevant Documents:
Document 0:::
In molecular biology and genetics, transcriptional regulation is the means by which a cell regulates the conversion of DNA to RNA (transcription), thereby orchestrating gene activity. A single gene can be regulated in a range of ways, from altering the number of copies of RNA that are transcribed, to the temporal control of when the gene is transcribed. This control allows the cell or organism to respond to a variety of intra- and extracellular signals and thus mount a response. Some examples of this include producing the mRNA that encode enzymes to adapt to a change in a food source, producing the gene products involved in cell cycle specific activities, and producing the gene products responsible for cellular differentiation in multicellular eukaryotes, as studied in evolutionary developmental biology.
The regulation of transcription is a vital process in all living organisms. It is orchestrated by transcription factors and other proteins working in concert to finely tune the amount of RNA being produced through a variety of mechanisms. Bacteria and eukaryotes have very different strategies of accomplishing control over transcription, but some important features remain conserved between the two. Most important is the idea of combinatorial control: any given gene is likely controlled by a specific combination of factors. In a hypothetical example, the factors A and B might regulate a distinct set of genes from the combination of factors A and C. This combinatorial nature extends to complexes of far more than two proteins, and allows a very small subset (less than 10%) of the genome to control the transcriptional program of the entire cell.
In bacteria
Much of the early understanding of transcription came from bacteria, although the extent and complexity of transcriptional regulation is greater in eukaryotes. Bacterial transcription is governed by three main sequence elements:
Promoters are elements of DNA that may bind
Document 1:::
A regulator gene, regulator, or regulatory gene is a gene involved in controlling the expression of one or more other genes. Regulatory sequences, which encode regulatory genes, are often at the five prime end (5') to the start site of transcription of the gene they regulate. In addition, these sequences can also be found at the three prime end (3') to the transcription start site. In both cases, whether the regulatory sequence occurs before (5') or after (3') the gene it regulates, the sequence is often many kilobases away from the transcription start site. A regulator gene may encode a protein, or it may work at the level of RNA, as in the case of genes encoding microRNAs. An example of a regulator gene is a gene that codes for a repressor protein that inhibits the activity of an operator (a gene which binds repressor proteins thus inhibiting the translation of RNA to protein via RNA polymerase).
In prokaryotes, regulator genes often code for repressor proteins. Repressor proteins bind to operators or promoters, preventing RNA polymerase from transcribing RNA. They are usually constantly expressed so the cell always has a supply of repressor molecules on hand. Inducers cause repressor proteins to change shape or otherwise become unable to bind DNA, allowing RNA polymerase to continue transcription.
Regulator genes can be located within an operon, adjacent to it, or far away from it.
Other regulatory genes code for activator proteins. An activator binds to a site on the DNA molecule and causes an increase in transcription of a nearby gene. In prokaryotes, a well-known activator protein is the catabolite activator protein (CAP), involved in positive control of the lac operon.
In the regulation of gene expression, studied in evolutionary developmental biology (evo-devo), both activators and repressors play important roles.
Regulatory genes can also be described as positive or negative regulators, based on the environmental conditions that surround the ce
Document 2:::
The Dragon Database for Human Transcription Co-Factors and Transcription Factor Interacting Proteins (TcoF-DB) is a database that facilitates the exploration of human transcription factors (proteins that regulate transcription by binding to regulatory DNA regions) and human transcription co-factors (proteins that regulate transcription by interacting with transcription factors rather than binding regulatory DNA regions). The database describes a total of 529 (potential) human transcription co-factors interacting with a total of 1365 human transcription factors.
See also
Transcription factor
Transcription coregulator
Document 3:::
A coactivator is a type of transcriptional coregulator that binds to an activator (a transcription factor) to increase the rate of transcription of a gene or set of genes. The activator contains a DNA binding domain that binds either to a DNA promoter site or a specific DNA regulatory sequence called an enhancer. Binding of the activator-coactivator complex increases the speed of transcription by recruiting general transcription machinery to the promoter, therefore increasing gene expression. The use of activators and coactivators allows for highly specific expression of certain genes depending on cell type and developmental stage.
Some coactivators also have histone acetyltransferase (HAT) activity. HATs form large multiprotein complexes that weaken the association of histones to DNA by acetylating the N-terminal histone tail. This provides more space for the transcription machinery to bind to the promoter, therefore increasing gene expression.
Activators are found in all living organisms, but coactivator proteins are typically only found in eukaryotes because they are more complex and require a more intricate mechanism for gene regulation. In eukaryotes, coactivators are usually proteins that are localized in the nucleus.
Mechanism
Some coactivators indirectly regulate gene expression by binding to an activator and inducing a conformational change that then allows the activator to bind to the DNA enhancer or promoter sequence. Once the activator-coactivator complex binds to the enhancer, RNA polymerase II and other general transcription machinery are recruited to the DNA and transcription begins.
Histone acetyltransferase
Nuclear DNA is normally wrapped tightly around histones, making it hard or impossible for the transcription machinery to access the DNA. This association is due primarily to the electrostatic attraction between the DNA and histones as the DNA phosphate backbone is negatively charged and histones are rich in lysine residues, which are posi
Document 4:::
A regulatory sequence is a segment of a nucleic acid molecule which is capable of increasing or decreasing the expression of specific genes within an organism. Regulation of gene expression is an essential feature of all living organisms and viruses.
Description
In DNA, regulation of gene expression normally happens at the level of RNA biosynthesis (transcription). It is accomplished through the sequence-specific binding of proteins (transcription factors) that activate or inhibit transcription. Transcription factors may act as activators, repressors, or both. Repressors often act by preventing RNA polymerase from forming a productive complex with the transcriptional initiation region (promoter), while activators facilitate formation of a productive complex. Furthermore, DNA motifs have been shown to be predictive of epigenomic modifications, suggesting that transcription factors play a role in regulating the epigenome.
In RNA, regulation may occur at the level of protein biosynthesis (translation), RNA cleavage, RNA splicing, or transcriptional termination. Regulatory sequences are frequently associated with messenger RNA (mRNA) molecules, where they are used to control mRNA biogenesis or translation. A variety of biological molecules may bind to the RNA to accomplish this regulation, including proteins (e.g., translational repressors and splicing factors), other RNA molecules (e.g., miRNA) and small molecules, in the case of riboswitches.
Activation and implementation
A regulatory DNA sequence does not regulate unless it is activated. Different regulatory sequences are activated and then implement their regulation by different mechanisms.
Enhancer activation and implementation
Expression of genes in mammals can be upregulated when signals are transmitted to the promoters associated with the genes. Cis-regulatory DNA sequences that are located in DNA regions distant from the promoters of genes can have very large effects on gene expression, with some genes
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What kind of proteins either activate or deactivate the transcription of other genes?
A. master complex proteins
B. master regulatory proteins
C. perfect regulatory proteins
D. carbon proteins
Answer:
|
|
sciq-3632
|
multiple_choice
|
Where do polychaetes live?
|
[
"the tundra",
"lakes",
"ocean floor",
"great plains"
] |
C
|
Relevant Documents:
Document 0:::
The University of Michigan Biological Station (UMBS) is a research and teaching facility operated by the University of Michigan. It is located on the south shore of Douglas Lake in Cheboygan County, Michigan. The station consists of 10,000 acres (40 km2) of land near Pellston, Michigan in the northern Lower Peninsula of Michigan and 3,200 acres (13 km2) on Sugar Island in the St. Mary's River near Sault Ste. Marie, in the Upper Peninsula. It is one of only 28 Biosphere Reserves in the United States.
Overview
Founded in 1909, it has grown to include approximately 150 buildings, including classrooms, student cabins, dormitories, a dining hall, and research facilities. Undergraduate and graduate courses are available in the spring and summer terms. It has a full-time staff of 15.
In the 2000s, UMBS has increasingly focused on the measurement of climate change. Its field researchers are gauging the impact of global warming and increased levels of atmospheric carbon dioxide on the ecosystem of the upper Great Lakes region, and are using field data to improve the computer models used to forecast further change. Several archaeological digs have been conducted at the station as well.
UMBS field researchers sometimes call the station "bug camp" amongst themselves. This is believed to be due to the number of mosquitoes and other insects present. It is also known as "The Bio-Station".
The UMBS is also home to Michigan's most endangered species and one of the most endangered species in the world: the Hungerford's Crawling Water Beetle. The species lives in only five locations in the world, two of which are in Emmet County. One of these, a two-and-a-half-mile stretch downstream from the Douglas Road crossing of the East Branch of the Maple River, supports the only stable population of the Hungerford's Crawling Water Beetle, with roughly 1000 specimens. This area, though technically not part of the UMBS, is largely within and along the boundary of the University of Michigan
Document 1:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC), which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on the structure and functioning (anatomy, physiology, biochemistry) of the human body and their relevance to human health, with Caribbean-specific context. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 2:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 4:::
William Skinner Cooper (25 August 1884 – 8 October 1978) was an American ecologist. Cooper received his B.S. in 1906 from Alma College in Michigan. In 1909, he entered graduate school at the University of Chicago, where he studied with Henry Chandler Cowles, and completed his Ph.D. in 1911. His first major publication, "The Climax Forest of Isle Royale, Lake Superior, and Its Development" appeared in 1913.
Cooper served briefly in 1914-1915 as a lecturer in plant ecology at Stanford University before beginning his long career in the botany department at the University of Minnesota, where he taught from 1915 to 1951. Among his students at Minnesota were Henry J. Oosting, Murray Fife Buell, Rexford F. Daubenmire, Frank Edwin Egler and Arnold M. Schultz; the latter went on to teach "Ecosystemology" at U.C. Berkeley, and received U.C. Berkeley's "Distinguished Teaching Award" in 1992. Cooper was the president of the Ecological Society of America in 1936 and the president of the Minnesota Academy of Science in 1937. Other professional accolades included receipt of the Botanical Society of America's Merit Award in 1956 and the Eminent Ecologist Award from the Ecological Society of America in 1963.
Cooper's travels in Glacier Bay, Alaska, compelled him to lead scientists in nominating it as a national park or monument. He also established the oldest permanent plot network in post-glacial areas in the world in 1916 in the Glacier Bay basin, now maintained by Brian Buma at the University of Colorado. At the Ecological Society of America's 1922 meeting, Cooper headed a committee that drafted a resolution adopted by the organization and sent to President Calvin Coolidge asking him to name the bay a monument. His 1935 monograph on the late glacial and postglacial environment of the Glacier Bay Basin is considered a classic. Mount Cooper in Glacier Bay is named in his honor.
The Ecological Society of America recognizes Cooper's work in the discipline by bestowing its a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where do polychaetes live?
A. the tundra
B. lakes
C. ocean floor
D. great plains
Answer:
|
|
ai2_arc-963
|
multiple_choice
|
Which of the following is a SOURCE of light?
|
[
"Earth",
"planet",
"star",
"moon"
] |
C
|
Relevant Documents:
Document 0:::
Sky brightness refers to the visual perception of the sky and how it scatters and diffuses light. The fact that the sky is not completely dark at night is easily visible. If light sources (e.g. the Moon and light pollution) were removed from the night sky, only direct starlight would be visible.
The sky's brightness varies greatly over the day, and the primary cause differs as well. During daytime, when the Sun is above the horizon, the direct scattering of sunlight is the overwhelmingly dominant source of light. During twilight (the duration after sunset or before sunrise until or since, respectively, the full darkness of night), the situation is more complicated, and a further differentiation is required.
Twilight (both dusk and dawn) is divided into three 6° segments that mark the Sun's position below the horizon. At civil twilight, the center of the Sun's disk appears to be between 1/4° and 6° below the horizon. At nautical twilight, the Sun's altitude is between –6° and –12°. At astronomical twilight, the Sun is between –12° and –18°. When the Sun's depth is more than 18°, the sky generally attains its maximum darkness.
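As a concrete reading of these boundaries, here is a minimal sketch (our own illustration; the function name is hypothetical) that classifies the phase of twilight from the Sun's altitude, using the 0°, −6°, −12°, and −18° thresholds described above:

def twilight_phase(sun_altitude_deg: float) -> str:
    """Classify day/twilight/night from the altitude of the Sun's center,
    in degrees (negative means below the horizon)."""
    if sun_altitude_deg >= 0:
        return "day"
    if sun_altitude_deg >= -6:
        return "civil twilight"
    if sun_altitude_deg >= -12:
        return "nautical twilight"
    if sun_altitude_deg >= -18:
        return "astronomical twilight"
    return "night"

print(twilight_phase(-8.5))  # nautical twilight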
Sources of the night sky's intrinsic brightness include airglow, indirect scattering of sunlight, scattering of starlight, and light pollution.
Airglow
When physicist Anders Ångström examined the spectrum of the aurora borealis, he discovered that even on nights when the aurora was absent, its characteristic green line was still present. It was not until the 1920s that scientists were beginning to identify and understand the emission lines in aurorae and of the sky itself, and what was causing them. The green line Angstrom observed is in fact an emission line with a wavelength of 557.7 nm, caused by the recombination of oxygen in the upper atmosphere.
Airglow is the collective name of the various processes in the upper atmosphere that result in the emission of photons, with the driving force being primarily UV radiation from the Sun. Se
Document 1:::
[Figure: The Moon lit by earthshine, captured by the lunar-prospecting Clementine spacecraft in 1994. Clementine's camera reveals (from right to left) the Moon lit by earthshine, the Sun's glare rising over the Moon's dark limb, and the planets Saturn, Mars, and Mercury (the three dots at lower left).]
Planetshine is the dim illumination, by sunlight reflected from a planet, of all or part of the otherwise dark side of any moon orbiting the body. Planetlight is the diffuse reflection of sunlight from a planet, whose albedo can be measured.
The most observed and familiar example of planetshine is earthshine on the Moon, which is most visible from the night side of Earth when the lunar phase is crescent or nearly new, without the atmospheric brightness of the daytime sky. Typically, this results in the dark side of the Moon being bathed in a faint light.
Planetshine has also been observed elsewhere in the Solar System. In particular, the Cassini space probe used Saturn's shine to image portions of the planet's moons, even when they do not reflect direct sunlight. The New Horizons space probe similarly used Charon's shine to discover albedo variations on Pluto's dark side.
Although he used a geocentric model, in 510 AD the Indian mathematician and astronomer Aryabhata was the first to correctly explain, in his Aryabhatiya, that planets and moons have no light of their own but shine by reflecting sunlight.
Earthshine
Earthshine is visible earthlight reflected from the Moon's night side. It is also known as the Moon's ashen glow or as "the new Moon with the old Moon in her arm".
Earthshine is most readily visible from a few nights before until a few nights after a new moon, during the (waxing or waning) crescent phase. When the lunar phase is new as viewed from Earth, Earth would appear nearly fully sunlit from the Moon. Sunlight is reflected from Earth to the night side of the Moon. The night side appears to glow fai
Document 2:::
Astronomy education or astronomy education research (AER) refers both to the methods currently used to teach the science of astronomy and to an area of pedagogical research that seeks to improve those methods. Specifically, AER includes systematic techniques honed in science and physics education to understand what and how students learn about astronomy and determine how teachers can create more effective learning environments.
Education is important to astronomy as it impacts both the recruitment of future astronomers and the appreciation of astronomy by citizens and politicians who support astronomical research. Astronomy has been taught throughout much of recorded human history, and has practical application in timekeeping and navigation. Teaching astronomy contributes to an understanding of physics and the origin of the world around us, a shared cultural background, and a sense of wonder and exploration. It includes education of the general public through planetariums, books, and instructive presentations, plus programs and tools for amateur astronomy, and University-level degree programs for professional astronomers. Astronomy organizations provide educational functions and societies in about 100 nation states around the world.
In schools, particularly at the collegiate level, astronomy is aligned with physics and the two are often combined to form a Department of Physics and Astronomy. Some parts of astronomy education overlap with physics education, however, astronomy education has its own arenas, practitioners, journals, and research. This can be demonstrated in the identified 20-year lag between the emergence of AER and physics education research. The body of research in this field are available through electronic sources such as the Searchable Annotated Bibliography of Education Research (SABER) and the American Astronomical Society's database of the contents of their journal "Astronomy Education Review" (see link below).
The National Aeronautics and
Document 3:::
Earthlight is the diffuse reflection of sunlight reflected from Earth's surface and clouds. Earthshine (an example of planetshine), also known as the Moon's ashen glow, is the dim illumination of the otherwise unilluminated portion of the Moon by this indirect sunlight. Earthlight on the Moon during the waxing crescent is called "the old Moon in the new Moon's arms", while that during the waning crescent is called "the new Moon in the old Moon's arms".
Visibility
Earthlight has a calculated maximum apparent magnitude of −17.7 as viewed from the Moon. When the Earth is at maximum phase, the total radiance that earthlight contributes at the lunar surface is only about 0.01% of the radiance from direct sunlight. Earthshine has a calculated maximum apparent magnitude of −3.69 as viewed from Earth.
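As a rough consistency check on these figures (our own worked example; it uses the standard Pogson relation and assumes the Sun's apparent magnitude from the Moon is about −26.7, roughly its value from Earth):

def flux_ratio(m1: float, m2: float) -> float:
    """Pogson relation: flux of source 1 relative to source 2,
    given their apparent magnitudes m1 and m2."""
    return 10 ** (-0.4 * (m1 - m2))

# Earthlight (m = -17.7) versus direct sunlight (m ~ -26.7, assumed) at the Moon
print(f"{flux_ratio(-17.7, -26.7):.1e}")  # ~2.5e-04, i.e. a few hundredths of a percent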
This phenomenon is most visible from Earth at night (or astronomical twilight) a few days before or after the day of new moon, when the lunar phase is a thin crescent. On these nights, the entire lunar disk is both directly and indirectly sunlit, and is thus unevenly bright enough to see. Earthshine is most clearly seen after dusk during the waxing crescent (in the western sky) and before dawn during the waning crescent (in the eastern sky).
The term earthlight would also be suitable for an observer on the Moon seeing Earth during the lunar night, or for an astronaut inside a spacecraft looking out the window. Arthur C. Clarke uses it in this sense in his 1955 novel Earthlight.
High contrast photography is also able to reveal the night side of the moon illuminated by Earthlight during a solar eclipse.
Radio frequency transmissions are also reflected by the moon; for example, see Earth–Moon–Earth communication.
History
The phenomenon was sketched and remarked upon in the 16th century by Leonardo da Vinci, who thought that the illumination came from reflections from the Earth's oceans (we now know that clouds account for much more reflected intensity than the oceans)
Document 4:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which of the following is a SOURCE of light?
A. Earth
B. planet
C. star
D. moon
Answer:
|
|
sciq-2475
|
multiple_choice
|
The primary role of leaves is to collect what?
|
[
"sunlight",
"insects",
"pollen",
"precipitation"
] |
A
|
Relevant Documents:
Document 0:::
Plant ecology is a subdiscipline of ecology that studies the distribution and abundance of plants, the effects of environmental factors upon the abundance of plants, and the interactions among plants and between plants and other organisms. Examples of these are the distribution of temperate deciduous forests in North America, the effects of drought or flooding upon plant survival, and competition among desert plants for water, or effects of herds of grazing animals upon the composition of grasslands.
A global overview of the Earth's major vegetation types is provided by O.W. Archibold. He recognizes 11 major vegetation types: tropical forests, tropical savannas, arid regions (deserts), Mediterranean ecosystems, temperate forest ecosystems, temperate grasslands, coniferous forests, tundra (both polar and high mountain), terrestrial wetlands, freshwater ecosystems and coastal/marine systems. This breadth of topics shows the complexity of plant ecology, since it includes plants from floating single-celled algae up to large canopy forming trees.
One feature that defines plants is photosynthesis, the series of chemical reactions that uses light energy to create glucose and oxygen, both vital for plant life. One of the most important aspects of plant ecology is the role plants have played in creating the oxygenated atmosphere of Earth, an event that occurred some 2 billion years ago. It can be dated by the deposition of banded iron formations, distinctive sedimentary rocks with large amounts of iron oxide. At the same time, plants began removing carbon dioxide from the atmosphere, thereby initiating the process of controlling Earth's climate. A long-term trend of the Earth has been toward increasing oxygen and decreasing carbon dioxide, and many other events in Earth's history, like the first movement of life onto land, are likely tied to this sequence of events.
One of the early classic books on plant ecology was written by J.E. Weaver and F.E. Clements. It
Document 1:::
A leaf (: leaves) is a principal appendage of the stem of a vascular plant, usually borne laterally aboveground and specialized for photosynthesis. Leaves are collectively called foliage, as in "autumn foliage", while the leaves, stem, flower, and fruit collectively form the shoot system. In most leaves, the primary photosynthetic tissue is the palisade mesophyll and is located on the upper side of the blade or lamina of the leaf but in some species, including the mature foliage of Eucalyptus, palisade mesophyll is present on both sides and the leaves are said to be isobilateral. Most leaves are flattened and have distinct upper (adaxial) and lower (abaxial) surfaces that differ in color, hairiness, the number of stomata (pores that intake and output gases), the amount and structure of epicuticular wax and other features. Leaves are mostly green in color due to the presence of a compound called chlorophyll which is essential for photosynthesis as it absorbs light energy from the sun. A leaf with lighter-colored or white patches or edges is called a variegated leaf.
Leaves can have many different shapes, sizes, textures and colors. The broad, flat leaves with complex venation of flowering plants are known as megaphylls and the species that bear them, the majority, as broad-leaved or megaphyllous plants, which also include acrogymnosperms and ferns. In the lycopods, with different evolutionary origins, the leaves are simple (with only a single vein) and are known as microphylls. Some leaves, such as bulb scales, are not above ground. In many aquatic species, the leaves are submerged in water. Succulent plants often have thick juicy leaves, but some leaves are without major photosynthetic function and may be dead at maturity, as in some cataphylls and spines. Furthermore, several kinds of leaf-like structures found in vascular plants are not totally homologous with them. Examples include flattened plant stems called phylloclades and cladodes, and flattened leaf stems
Document 2:::
Biomass partitioning is the process by which plants divide their energy among their leaves, stems, roots, and reproductive parts. These four main components of the plant have important morphological roles: leaves take in CO2 and energy from the sun to create carbon compounds, stems grow above competitors to reach sunlight, roots absorb water and mineral nutrients from the soil while anchoring the plant, and reproductive parts facilitate the continuation of species. Plants partition biomass in response to limits or excesses in resources like sunlight, carbon dioxide, mineral nutrients, and water and growth is regulated by a constant balance between the partitioning of biomass between plant parts. An equilibrium between root and shoot growth occurs because roots need carbon compounds from photosynthesis in the shoot and shoots need nitrogen absorbed from the soil by roots. Allocation of biomass is put towards the limit to growth; a limit below ground will focus biomass to the roots and a limit above ground will favor more growth in the shoot.
Plants photosynthesize to create carbon compounds for growth and energy storage. Sugars created through photosynthesis are then transported by phloem using the pressure flow system and are used for growth or stored for later use. Biomass partitioning causes this sugar to be divided in a way that maximizes growth, provides the most fitness, and allows for successful reproduction. Plant hormones play a large part in biomass partitioning since they affect differentiation and growth of cells and tissues by changing the expression of genes and altering morphology. By responding to environmental stimuli and partitioning biomass accordingly, plants are better able to take in resources from their environmental and maximize growth.
Abiotic Factors of Partitioning
It is important for plants to be able to balance their absorption and utilization of available resources and they adjust their growth in order to acquire more of the scarce, g
Document 3:::
Specific leaf area (SLA) is the ratio of leaf area to leaf dry mass. The inverse of SLA is Leaf Mass per Area (LMA).
Rationale
Specific leaf area is a ratio indicating how much leaf area a plant builds with a given amount of leaf biomass:
SLA = A / M_L,
where A is the area of a given leaf or all leaves of a plant, and M_L is the dry mass of those leaves. Typical units are m²·kg⁻¹ or mm²·mg⁻¹.
Leaf mass per area (LMA) is its inverse and can mathematically be decomposed into two component variables, leaf thickness (LTh) and leaf density (LD):
LMA = 1 / SLA = LTh × LD.
Typical units are g·m⁻² for LMA, µm for LTh and g·ml⁻¹ for LD.
Both SLA and LMA are frequently used in plant ecology and biology. SLA is one of the components in plant growth analysis, and mathematically scales positively and linearly with the relative growth rate of a plant. LMA mathematically scales positively with the investments plants make per unit leaf area (amount of protein and cell wall; cell number per area) and with leaf longevity. Since linear, positive relationships are more easily analysed than inverse negative relationships, researchers often use either variable, depending on the type of questions asked.
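A minimal numerical sketch of these two ratios (our own illustration; function names and sample values are hypothetical):

def specific_leaf_area(area_m2: float, dry_mass_kg: float) -> float:
    """SLA = A / M_L, in m^2 of leaf area per kg of leaf dry mass."""
    return area_m2 / dry_mass_kg

def leaf_mass_per_area(thickness_m: float, density_kg_m3: float) -> float:
    """LMA = LTh * LD, in kg per m^2 (the inverse of SLA)."""
    return thickness_m * density_kg_m3

# Example: a leaf 300 µm thick with tissue density 300 kg/m^3
lma = leaf_mass_per_area(300e-6, 300.0)  # 0.09 kg/m^2 = 90 g/m^2
print(f"LMA = {lma * 1000:.0f} g/m^2, SLA = {1 / lma:.1f} m^2/kg")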
Normal Ranges
Normal ranges of SLA and LMA are species-dependent and influenced by growth environment. Table 1 gives normal ranges (~10th and ~90th percentiles) for species growing in the field, for well-illuminated leaves. Aquatic plants generally have very low LMA values, with particularly low numbers reported for species such as Myriophyllum farwelli (2.8 g·m⁻²) and Potamogeton perfoliatus (3.9 g·m⁻²). Evergreen shrubs and gymnosperm trees as well as succulents have particularly high LMA values, with the highest values reported for Aloe saponaria (2010 g·m⁻²) and Agave deserti (2900 g·m⁻²).
Application
Specific leaf area can be used to estimate the reproductive strategy of a particular plant based upon light and moisture (humidity) levels, among other factors. Specific leaf area is one of the most widely accepted key leaf chara
Document 4:::
The soil-plant-atmosphere continuum (SPAC) is the pathway for water moving from soil through plants to the atmosphere. Continuum in the description highlights the continuous nature of water connection through the pathway. The low water potential of the atmosphere, and relatively higher (i.e. less negative) water potential inside leaves, leads to a diffusion gradient across the stomatal pores of leaves, drawing water out of the leaves as vapour. As water vapour transpires out of the leaf, further water molecules evaporate off the surface of mesophyll cells to replace the lost molecules since water in the air inside leaves is maintained at saturation vapour pressure. Water lost at the surface of cells is replaced by water from the xylem, which due to the cohesion-tension properties of water in the xylem of plants pulls additional water molecules through the xylem from the roots toward the leaf.
Components
The transport of water along this pathway occurs in components, variously defined among scientific disciplines:
Soil physics characterizes water in soil in terms of tension,
Physiology of plants and animals characterizes water in organisms in terms of diffusion pressure deficit, and
Meteorology uses vapour pressure or relative humidity to characterize atmospheric water.
SPAC integrates these components and is defined as a:
...concept recognising that the field with all its components (soil, plant, animals and the ambient atmosphere taken together) constitutes a physically integrated, dynamic system in which the various flow processes involving energy and matter occur simultaneously and independently like links in the chain.
This characterises the state of water in different components of the SPAC as expressions of the energy level or water potential of each. Modelling of water transport between components relies on SPAC, as do studies of water potential gradients between segments.
See also
Ecohydrology
Evapotranspiration
Hydraulic redistribution; a p
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The primary role of leaves is to collect what?
A. sunlight
B. insects
C. pollen
D. precipitation
Answer:
|
|
sciq-7932
|
multiple_choice
|
How many types of leptons are there?
|
[
"six",
"twelve",
"two",
"nine"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 2:::
Advanced Placement (AP) Physics B was a physics course administered by the College Board as part of its Advanced Placement program. It was equivalent to a year-long introductory university course covering Newtonian mechanics, electromagnetism, fluid mechanics, thermal physics, waves, optics, and modern physics. The course was algebra-based and heavily computational; in 2015, it was replaced by the more concept-focused AP Physics 1 and AP Physics 2.
Exam
The exam consisted of a 70-question multiple-choice section followed by a 6–7-question free-response section. Each section lasted 90 minutes and was worth 50% of the final score. Calculators were banned in the multiple-choice section, while the free-response section allowed calculators and a list of common formulas. Overall, the exam was configured to cover an approximately fixed percentage of each of the five target categories.
Purpose
According to the College Board web site, the Physics B course provided "a foundation in physics for students in the life sciences, a pre medical career path, and some applied sciences, as well as other fields not directly related to science."
Discontinuation
Starting in the 2014–2015 school year, AP Physics B was no longer offered, and AP Physics 1 and AP Physics 2 took its place. Like AP Physics B, both are algebra-based, and both are designed to be taught as year-long courses.
Grade distribution
The grade distributions for the Physics B scores from 2010 until its discontinuation in 2014 are as follows:
Document 3:::
There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework.
AP Physics 1 and 2
AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge.
AP Physics 1
AP Physics 1 covers Newtonian mechanics, including:
Unit 1: Kinematics
Unit 2: Dynamics
Unit 3: Circular Motion and Gravitation
Unit 4: Energy
Unit 5: Momentum
Unit 6: Simple Harmonic Motion
Unit 7: Torque and Rotational Motion
Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2.
AP Physics 2
AP Physics 2 covers the following topics:
Unit 1: Fluids
Unit 2: Thermodynamics
Unit 3: Electric Force, Field, and Potential
Unit 4: Electric Circuits
Unit 5: Magnetism and Electromagnetic Induction
Unit 6: Geometric and Physical Optics
Unit 7: Quantum, Atomic, and Nuclear Physics
AP Physics C
From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single
Document 4:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
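A minimal sketch of these definitions in Python, using a toy domain and a hand-picked family of feasible states (the skill names and states are invented for illustration). It checks two defining properties of an antimatroid viewed as a set family: closure under union, and accessibility (every non-empty state contains a skill whose removal leaves another feasible state).

# Toy domain of skills and a family of feasible knowledge states.
family = {
    frozenset(),
    frozenset({"count"}),
    frozenset({"count", "add"}),
    frozenset({"count", "add", "multiply"}),
}

def union_closed(states):
    # Antimatroid property 1: the union of feasible states is feasible.
    return all(a | b in states for a in states for b in states)

def accessible(states):
    # Antimatroid property 2: every non-empty state has a skill whose
    # removal yields another feasible state (a valid "last skill learned").
    return all(any(s - {x} in states for x in s) for s in states if s)

print(union_closed(family), accessible(family))   # True True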
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How many types of leptons are there?
A. six
B. twelve
C. two
D. nine
Answer:
|
|
sciq-8423
|
multiple_choice
|
What type of maps detail the types and locations of rocks found in an area?
|
[
"gnomic maps",
"contour maps",
"geologic maps",
"polar maps"
] |
C
|
Relevant Documents:
Document 0:::
Map algebra is an algebra for manipulating geographic data, primarily fields. Developed by Dr. Dana Tomlin and others in the late 1970s, it is a set of primitive operations in a geographic information system (GIS) which allows one or more raster layers ("maps") of similar dimensions to produce a new raster layer (map) using mathematical or other operations such as addition, subtraction etc.
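A minimal sketch of a local map-algebra operation in Python, using NumPy arrays as stand-ins for raster layers; the layer names and cell values are invented for illustration.

import numpy as np

# Two raster layers of identical dimensions; each cell holds a value.
elevation   = np.array([[10, 12],
                        [14, 16]])
water_table = np.array([[ 3,  4],
                        [ 5,  5]])

# A local (cell-by-cell) operation combines the layers into a new layer.
depth_to_water = elevation - water_table
print(depth_to_water)   # [[ 7  8]
                        #  [ 9 11]]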
History
Prior to the advent of GIS, the overlay principle had developed as a method of literally superimposing different thematic maps (typically an isarithmic map or a chorochromatic map) drawn on transparent film (e.g., cellulose acetate) to see the interactions and find locations with specific combinations of characteristics. The technique was largely developed by landscape architects and city planners, starting with Warren Manning and further refined and popularized by Jaqueline Tyrwhitt, Ian McHarg and others during the 1950s and 1960s.
In the mid-1970s, landscape architecture student C. Dana Tomlin developed some of the first tools for overlay analysis in raster as part of the IMGRID project at the Harvard Laboratory for Computer Graphics and Spatial Analysis, which he eventually transformed into the Map Analysis Package (MAP), a popular raster GIS during the 1980s. While a graduate student at Yale University, Tomlin and Joseph K. Berry re-conceptualized these tools as a mathematical model, which by 1983 they were calling "map algebra." This effort was part of Tomlin's development of cartographic modeling, a technique for using these raster operations to implement the manual overlay procedures of McHarg. Although the basic operations were defined in his 1983 PhD dissertation, Tomlin had refined the principles of map algebra and cartographic modeling into their current form by 1990. Although the term cartographic modeling has not gained as wide an acceptance as synonyms such as suitability analysis, suitability modeling and multi-criteria decision making, "map algeb
Document 1:::
A cognitive map is a type of mental representation which serves an individual to acquire, code, store, recall, and decode information about the relative locations and attributes of phenomena in their everyday or metaphorical spatial environment. The concept was introduced by Edward Tolman in 1948. He tried to explain the behavior of rats that appeared to learn the spatial layout of a maze, and subsequently the concept was applied to other animals, including humans. The term was later generalized by some researchers, especially in the field of operations research, to refer to a kind of semantic network representing an individual's personal knowledge or schemas.
Overview
Cognitive maps have been studied in various fields, such as psychology, education, archaeology, planning, geography, cartography, architecture, landscape architecture, urban planning, management and history. Because of the broad use and study of cognitive maps, it has become a colloquialism for almost any mental representation or model. As a consequence, these mental models are often referred to, variously, as cognitive maps, mental maps, scripts, schemata, and frame of reference.
Cognitive maps are a function of the working brain that humans and animals use for movement in new environments. They help us recognize places, compute directions and distances, and think critically about shortcuts. They support wayfinding in an environment, and act as blueprints for new technology.
Cognitive maps serve the construction and accumulation of spatial knowledge, allowing the "mind's eye" to visualize images in order to reduce cognitive load, enhance recall and learning of information. This type of spatial thinking can also be used as a metaphor for non-spatial tasks, where people performing non-spatial tasks involving memory and imaging use spatial knowledge to aid in processing the task. They include information about the spatial relations that objects have among each other in an environment
Document 2:::
Rephotography is the act of repeat photography of the same site, with a time lag between the two images; a diachronic, "then and now" view of a particular area. Some are casual, usually taken from the same view point but without regard to season, lens coverage or framing. Some are very precise and involve a careful study of the original image.
Rephotography and photogrammetry in the sciences
Since the 1850s, techniques have been developed for surveying and scientific study, especially systems of photogrammetry (Paganini, 1880; Deville, 1889; Finsterwalder, 1890), in which precise measurements are made by triangulating points across a number of photographic records in order to track changes in ecological systems.
Rephotography continues to be used by the scientific world to record incremental or cyclical events (of erosion, or land rehabilitation, or glacier flow for example), or to measure the extent of sand banks in a river, or other phenomena which change slowly over time, and in gathering evidence of climate change.
In social investigation
Rephotography has also been a useful diachronic visual method for researchers in sociology and communication to understand social change. Three main approaches are common (photographs of places, of participants, or of activities, functions, or processes), with scholars examining elements of continuity. This method is advantageous for studying social change due to the capacity of cameras to record scenes with greater completeness and speed, to document detailed complexities at a single time, and to capture images in an unobtrusive manner. Repeat photographs offer "subtle cues about the changing character of social life". Upon analysis of elements of continuity within the images, researchers must be cautious not to make erroneous interpretations of change. Another closely related use of rephotography has been the political one made by Gustavo Germano in Argentina, who rephotographed family pictures of the disappeared, thus making expl
Document 3:::
Engels Maps is a map company in the Ohio Valley with particular concentration on the Cincinnati-Dayton region. It also produces chamber of commerce maps.
Publications
It has three semi-annual publications that form its foundation:
Cincinnati Engels Guide
Dayton Engels Guide
Indianapolis Engels Guide
Their maps are also found in the Cincinnati Bell Yellow Pages and the Dayton WorkBook.
Corporate history
Engels Maps was founded by Judson Engels in 1994.
Sources
External links
Engels Maps
http://cincinnati.citysearch.com/profile/4343456/fort_thomas_ky/engels_maps_guide.html
Target Marketing
http://www.macraesbluebook.com/search/company.cfm?company=838024
http://engelsmaps.com engelsmaps.com
Geodesy
Companies based in Kentucky
Software companies based in Kentucky
American companies established in 1994
Map companies of the United States
Campbell County, Kentucky
1994 establishments in Kentucky
Software companies of the United States
Software companies established in 1994
Document 4:::
The National Geographical Organization of Iran or National Geographical Organization of the Armed Forces of Iran (, or ) is an Iranian government agency affiliated to the Ministry of Defence and Armed Forces Logistics of Iran, which has been established to prepare quick and accurate access spatial information of the country and other areas required by the Armed Forces of Iran.
History
The National Geographical Organization of Iran was officially founded in 1951 to prepare maps and carry out geographical surveys. The origins of the organization, and of its surveying and cartography branches, date back to 1921; over the course of its evolution it took on responsibilities in line with the prevailing needs, missions and organizational duties, ultimately becoming known as the National Geographical Organization of Iran, or the National Geographical Organization of the Armed Forces of Iran. Currently it conducts surveying and the preparation and production of spatial information alongside the National Cartographic Center, but matters related to military maps, national borders and geographical services required by the Armed Forces are handled only by the National Geographical Organization of Iran.
Activities
In 1955 the National Geographical Organization of Iran carried out the first analog aerial photography of the whole of Iran at a scale of 1:50,000, the first such survey in the Middle East. Using mechanical conversion devices and the usual engineering steps, it was then able to prepare and extract topographic maps. Given the technology available at the time, map preparation lasted until 1971 and produced about 2,550 sheets of topographic maps at a scale of 1:50,000.
In recent years, the National Geographical Organization of Iran has moved its aerial photography system to digital, taking the first digital aerial photographs with
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of maps detail the types and locations of rocks found in an area?
A. gnomic maps
B. contour maps
C. geologic maps
D. polar maps
Answer:
|
|
sciq-7209
|
multiple_choice
|
What is a continuous flow of electric charge called?
|
[
"magnetism",
"microwave current",
"electric current",
"powered current"
] |
C
|
Relevant Documents:
Document 0:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
In electromagnetism and electronics, electromotive force (also electromotance, abbreviated emf, denoted ℰ) is an energy transfer to an electric circuit per unit of electric charge, measured in volts. Devices called electrical transducers provide an emf by converting other forms of energy into electrical energy. Other electrical equipment also produces an emf, such as batteries, which convert chemical energy, and generators, which convert mechanical energy. This energy conversion is achieved by physical forces applying physical work on electric charges. However, electromotive force itself is not a physical force, and ISO/IEC standards have deprecated the term in favor of source voltage or source tension.
An electronic–hydraulic analogy may view emf as the mechanical work done to water by a pump, which results in a pressure difference (analogous to voltage).
In electromagnetic induction, emf can be defined around a closed loop of a conductor as the electromagnetic work that would be done on an elementary electric charge (such as an electron) if it travels once around the loop.
For two-terminal devices modeled as a Thévenin equivalent circuit, an equivalent emf can be measured as the open-circuit voltage between the two terminals. This emf can drive an electric current if an external circuit is attached to the terminals, in which case the device becomes the voltage source of that circuit.
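As a small worked example of that open-circuit view, the Python sketch below models a source as an emf in series with an internal resistance (a Thévenin equivalent); all component values are assumed for illustration.

emf        = 1.5    # source emf in volts (assumed)
r_internal = 0.5    # internal resistance in ohms (assumed)
r_load     = 10.0   # external load in ohms (assumed)

# With no load attached, no current flows, so the terminal
# (open-circuit) voltage equals the emf itself.
current    = emf / (r_internal + r_load)     # series circuit under load
v_terminal = emf - current * r_internal     # terminal voltage under load

print(f"open-circuit voltage: {emf} V")
print(f"terminal voltage under load: {v_terminal:.3f} V")   # ~1.429 V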
Although an emf gives rise to a voltage and can be measured as a voltage and may sometimes informally be called a "voltage", they are not the same phenomenon.
Overview
Devices that can provide emf include electrochemical cells, thermoelectric devices, solar cells, photodiodes, electrical generators, inductors, transformers and even Van de Graaff generators. In nature, emf is generated when magnetic field fluctuations occur through a surface. For example, the shifting of the Earth's magnetic field during a geomagnetic storm induces currents in an electr
Document 3:::
There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework.
AP Physics 1 and 2
AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge.
AP Physics 1
AP Physics 1 covers Newtonian mechanics, including:
Unit 1: Kinematics
Unit 2: Dynamics
Unit 3: Circular Motion and Gravitation
Unit 4: Energy
Unit 5: Momentum
Unit 6: Simple Harmonic Motion
Unit 7: Torque and Rotational Motion
Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2.
AP Physics 2
AP Physics 2 covers the following topics:
Unit 1: Fluids
Unit 2: Thermodynamics
Unit 3: Electric Force, Field, and Potential
Unit 4: Electric Circuits
Unit 5: Magnetism and Electromagnetic Induction
Unit 6: Geometric and Physical Optics
Unit 7: Quantum, Atomic, and Nuclear Physics
AP Physics C
From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single
Document 4:::
Energy current is a flow of energy defined by the Poynting vector (S = E × H), as opposed to normal current (flow of charge). It was originally postulated by Oliver Heaviside. It is also an informal name for energy flux.
Explanation
"Energy current" is a somewhat informal term that is used, on occasion, to describe the process of energy transfer in situations where the transfer can usefully be viewed in terms of a flow. It is particularly used when the transfer of energy is more significant to the discussion than the process by which the energy is transferred. For instance, the flow of fuel oil in a pipeline could be considered as an energy current, although this would not be a convenient way of visualising the fullness of the storage tanks.
The units of energy current are those of power (W). This is closely related to energy flux, which is the energy transferred per unit area per unit time (measured in W/m²).
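A minimal numeric sketch of the Poynting vector S = E × H for a plane wave in free space, using NumPy; the field magnitudes are assumed, chosen so that E/H ≈ 377 Ω (the impedance of free space).

import numpy as np

E = np.array([100.0, 0.0,   0.0])   # electric field in V/m (assumed)
H = np.array([0.0,   0.265, 0.0])   # magnetic field in A/m (assumed)

S = np.cross(E, H)                  # Poynting vector, W/m^2
print(S)                            # [ 0.   0.  26.5] -> energy flows along +z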
Energy current in electromagnetism
A specific use of the concept of energy current was promulgated by Oliver Heaviside in the last quarter of the 19th century. Against heavy resistance from the engineering community, Heaviside worked out the physics of signal velocity/impedance/distortion on telegraph, telephone, and undersea cables. He invented the inductor-loaded "distortionless line" later patented by Michael Pupin in the USA.
Building on the concept of the Poynting vector, which describes the flow of energy in a transverse electromagnetic wave as the vector product of its electric and magnetic fields (S = E × H), Heaviside sought to extend this by treating the transfer of energy due to the electric current in a conductor in a similar manner. In doing so he reversed the contemporary view of current, so that the electric and magnetic fields due to the current are the "prime movers", rather than being a result of the motion of the charge in the conductor.
Heaviside's approach had some adherents at the time—enough, certainly, to quarrel with the "traditionalists" in p
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a continuous flow of electric charge called?
A. magnetism
B. microwave current
C. electric current
D. powered current
Answer:
|
|
sciq-980
|
multiple_choice
|
Unlike plants, animal species rely almost exclusively on what type of reproduction?
|
[
"regeneration",
"multiplication",
"sexual reproduction",
"pollination"
] |
C
|
Relevant Documents:
Document 0:::
Sterile males are deliberately produced by humans in several species for several unrelated purposes:
Sterile insect technique for insect pest control
Cytoplasmic male sterility for plant breeding
Sterile male plant for plant breeding
Humans and other species
Document 1:::
Plant reproductive morphology is the study of the physical form and structure (the morphology) of those parts of plants directly or indirectly concerned with sexual reproduction.
Among all living organisms, flowers, which are the reproductive structures of angiosperms, are the most varied physically and show a correspondingly great diversity in methods of reproduction. Plants that are not flowering plants (green algae, mosses, liverworts, hornworts, ferns and gymnosperms such as conifers) also have complex interplays between morphological adaptation and environmental factors in their sexual reproduction. The breeding system, or how the sperm from one plant fertilizes the ovum of another, depends on the reproductive morphology, and is the single most important determinant of the genetic structure of nonclonal plant populations. Christian Konrad Sprengel (1793) studied the reproduction of flowering plants and was the first to show that the pollination process involves both biotic and abiotic interactions. Charles Darwin drew on this work in building his theory of evolution by natural selection, which includes analysis of the coevolution of flowers and their insect pollinators.
Use of sexual terminology
Plants have complex lifecycles involving alternation of generations. One generation, the sporophyte, gives rise to the next generation, the gametophyte asexually via spores. Spores may be identical isospores or come in different sizes (microspores and megaspores), but strictly speaking, spores and sporophytes are neither male nor female because they do not produce gametes. The alternate generation, the gametophyte, produces gametes, eggs and/or sperm. A gametophyte can be monoicous (bisexual), producing both eggs and sperm, or dioicous (unisexual), either female (producing eggs) or male (producing sperm).
In the bryophytes (liverworts, mosses, and hornworts), the sexual gametophyte is the dominant generation. In ferns and seed plants (inc
Document 2:::
This glossary of developmental biology is a list of definitions of terms and concepts commonly used in the study of developmental biology and related disciplines in biology, including embryology and reproductive biology, primarily as they pertain to vertebrate animals and particularly to humans and other mammals. The developmental biology of invertebrates, plants, fungi, and other organisms is treated in other articles; e.g. terms relating to the reproduction and development of insects are listed in Glossary of entomology, and those relating to plants are listed in Glossary of botany.
This glossary is intended as introductory material for novices; for more specific and technical detail, see the article corresponding to each term. Additional terms relevant to vertebrate reproduction and development may also be found in Glossary of biology, Glossary of cell biology, Glossary of genetics, and Glossary of evolutionary biology.
See also
Introduction to developmental biology
Outline of developmental biology
Outline of cell biology
Glossary of biology
Glossary of cell biology
Glossary of genetics
Glossary of evolutionary biology
Document 3:::
Human reproductive ecology is a subfield in evolutionary biology that is concerned with human reproductive processes and responses to ecological variables. It is based in the natural and social sciences, and is based on theory and models deriving from human and animal biology, evolutionary theory, and ecology. It is associated with fields such as evolutionary anthropology and seeks to explain human reproductive variation and adaptations. The theoretical orientation of reproductive ecology applies the theory of natural selection to reproductive behaviors, and has also been referred to as the evolutionary ecology of human reproduction.
Theoretical foundations
Multiple theoretical foundations from evolutionary biology and evolutionary anthropology are important to human reproductive ecology. Notably, reproductive ecology relies heavily on Life History Theory, energetics, fitness theories, kin selection, and theories based on the study of animal evolution.
Life history theory
Life history theory is a prominent analytical framework used in evolutionary anthropology, biology, and reproductive ecology that seeks to explain growth and development of an organism through various life history stages of the entire lifespan. The life history stages include early growth and development, puberty, sexual development, reproductive career, and post-reproductive stage. Life history theory is based in evolutionary theory and suggests that natural selection operates on the allocation of different types of resources (material and metabolic) to meet the competing demands of growth, maintenance, and reproduction at the various life stages. Life history theory is applied to reproductive ecology in the theoretical understandings of puberty, sexual growth and maturation, fertility, parenting, and senescence because at every life stage organisms are bound to encounter and cope with unconscious and conscious decisions that hold trade-offs. Reproductive ecologists have specifically impacte
Document 4:::
Sexual maturity is the capability of an organism to reproduce. In humans, it is related to both puberty and adulthood. However, puberty is the process of biological sexual maturation, while the concept of adulthood is generally based on broader cultural definitions.
Most multicellular organisms are unable to sexually reproduce at birth (animals) or germination (e.g. plants): depending on the species, it may be days, weeks, or years until they have developed enough to be able to do so. Also, certain cues may trigger an organism to become sexually mature. They may be external, such as drought (certain plants), or internal, such as percentage of body fat (certain animals). (Such internal cues are not to be confused with hormones, which directly produce sexual maturity – the production/release of those hormones is triggered by such cues.)
Role of reproductive organs
Sexual maturity is brought about by a maturing of the reproductive organs and the production of gametes. It may also be accompanied by a growth spurt or other physical changes which distinguish the immature organism from its adult form. In animals these are termed secondary sex characteristics, and often represent an increase in sexual dimorphism.
After sexual maturity is achieved, some organisms become infertile, or even change their sex. Some organisms are hermaphrodites and may or may not be able to "completely" mature and/or to produce viable offspring. Also, while in many organisms sexual maturity is strongly linked to age, many other factors are involved, and it is possible for some to display most or all of the characteristics of the adult form without being sexually mature. Conversely it is also possible for the "immature" form of an organism to reproduce. This is called progenesis, in which sexual development occurs faster than other physiological development (in contrast, the term neoteny refers to when non-sexual development is slowed, but the result is the same: the retention of juvenile c
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Unlike plants, animal species rely almost exclusively on what type of reproduction?
A. regeneration
B. multiplication
C. sexual reproduction
D. pollination
Answer:
|
|
sciq-11343
|
multiple_choice
|
Plants can tell the time of day and time of year by sensing and using various wavelengths of what?
|
[
"sunlight",
"precipitation",
"moisture",
"lunar cycles"
] |
A
|
Relevant Documents:
Document 0:::
Plant perception is the ability of plants to sense and respond to the environment by adjusting their morphology and physiology. Botanical research has revealed that plants are capable of reacting to a broad range of stimuli, including chemicals, gravity, light, moisture, infections, temperature, oxygen and carbon dioxide concentrations, parasite infestation, disease, physical disruption, sound, and touch. The scientific study of plant perception is informed by numerous disciplines, such as plant physiology, ecology, and molecular biology.
Aspects of perception
Light
Many plant organs contain photoreceptors (phototropins, cryptochromes, and phytochromes), each of which reacts very specifically to certain wavelengths of light. These light sensors tell the plant if it is day or night, how long the day is, how much light is available, and where the light is coming from. Shoots generally grow towards light, while roots grow away from it, responses known as phototropism and skototropism, respectively. They are brought about by light-sensitive pigments like phototropins and phytochromes and the plant hormone auxin.
Many plants exhibit certain behaviors at specific times of the day; for example, flowers that open only in the mornings. Plants keep track of the time of day with a circadian clock. This internal clock is synchronized with solar time every day using sunlight, temperature, and other cues, similar to the biological clocks present in other organisms. The internal clock coupled with the ability to perceive light also allows plants to measure the time of the day and so determine the season of the year. This is how many plants know when to flower (see photoperiodism). The seeds of many plants sprout only after they are exposed to light. This response is carried out by phytochrome signalling. Plants are also able to sense the quality of light and respond appropriately. For example, in low light conditions, plants produce more photosynthetic pigments. If the light i
Document 1:::
What a Plant Knows is a popular science book by Daniel Chamovitz, originally published in 2012, discussing the sensory system of plants. A revised edition was published in 2017.
Release details / Editions / Publication
Hardcover edition, 2012
Paperback version, 2013
Revised edition, 2017
What a Plant Knows has been translated and published in a number of languages.
Document 2:::
In developmental biology, photomorphogenesis is light-mediated development, where plant growth patterns respond to the light spectrum. This is a completely separate process from photosynthesis where light is used as a source of energy. Phytochromes, cryptochromes, and phototropins are photochromic sensory receptors that restrict the photomorphogenic effect of light to the UV-A, UV-B, blue, and red portions of the electromagnetic spectrum.
The photomorphogenesis of plants is often studied by using tightly frequency-controlled light sources to grow the plants. There are at least three stages of plant development where photomorphogenesis occurs: seed germination, seedling development, and the switch from the vegetative to the flowering stage (photoperiodism).
Most research on photomorphogenesis is derived from plant studies involving several kingdoms: Fungi, Monera, Protista, and Plantae.
History
Theophrastus of Eresus (371 to 287 BC) may have been the first to write about photomorphogenesis. He described the different wood qualities of fir trees grown in different levels of light, likely the result of the photomorphogenic "shade-avoidance" effect. In 1686, John Ray wrote "Historia Plantarum" which mentioned the effects of etiolation (grow in the absence of light). Charles Bonnet introduced the term "etiolement" to the scientific literature in 1754 when describing his experiments, commenting that the term was already in use by gardeners.
Developmental stages affected
Seed germination
Light has profound effects on the development of plants. The most striking effects of light are observed when a germinating seedling emerges from the soil and is exposed to light for the first time.
Normally the seedling radicle (root) emerges first from the seed, and the shoot appears as the root becomes established. Later, with growth of the shoot (particularly when it emerges into the light) there is increased secondary root formation and branching. In this coordinated progressi
Document 3:::
In plant biology, plant memory describes the ability of a plant to retain information from experienced stimuli and respond at a later time. For example, some plants have been observed to raise their leaves synchronously with the rising of the sun. Other plants produce new leaves in the spring after overwintering. Many experiments have been conducted into a plant's capacity for memory, including sensory, short-term, and long-term. The most basic learning and memory functions in animals have been observed in some plant species, and it has been proposed that the development of these basic memory mechanisms may have developed in an early organismal ancestor.
Some plant species appear to have developed conserved ways to use functioning memory, and some species may have developed unique ways to use memory function depending on their environment and life history.
The use of the term plant memory still sparks controversy. Some researchers believe the function of memory only applies to organisms with a brain and others believe that comparing plant functions resembling memory to humans and other higher division organisms may be too direct of a comparison. Others argue that the function of the two are essentially the same and this comparison can serve as the basis for further understanding into how memory in plants works.
History
Experiments involving the curling of pea tendrils were some of the first to explore the concept of plant memory. Mark Jaffe recognized that pea plants coil around objects that act as support to help them grow. Jaffe’s experiments included testing different stimuli to induce coiling behavior. One such stimulus was the effect of light on the coiling mechanism. When Jaffe rubbed the tendrils in light, he witnessed the expected coiling response. When subjected to perturbation in darkness, the pea plants did not exhibit coiling behavior. Tendrils from the dark experiment were brought back into light hours later, exhibiting a coiling response without a
Document 4:::
Biometeorology is the interdisciplinary field of science that studies the interactions between the biosphere and the Earth's atmosphere on time scales of the order of seasons or shorter (in contrast with bioclimatology).
Examples of relevant processes
Weather events influence biological processes on short time scales. For instance, as the Sun rises above the horizon in the morning, light levels become sufficient for the process of photosynthesis to take place in plant leaves. Later on, during the day, air temperature and humidity may induce the partial or total closure of the stomata, a typical response of many plants to limit the loss of water through transpiration. More generally, the daily evolution of meteorological variables controls the circadian rhythm of plants and animals alike.
Living organisms, for their part, can collectively affect weather patterns. The rate of evapotranspiration of forests, or of any large vegetated area for that matter, contributes to the release of water vapor in the atmosphere. This local, relatively fast and continuous process may contribute significantly to the persistence of precipitations in a given area. As another example, the wilting of plants results in definite changes in leaf angle distribution and therefore modifies the rates of reflection, transmission and absorption of solar light in these plants. That, in turn, changes the albedo of the ecosystem as well as the relative importance of the sensible and latent heat fluxes from the surface to the atmosphere. For an example in oceanography, consider the release of dimethyl sulfide by biological activity in sea water and its impact on atmospheric aerosols.
Human biometeorology
The methods and measurements traditionally used in biometeorology are not different when applied to study the interactions between human bodies and the atmosphere, but some aspects or applications may have been explored more extensively. For instance, wind chill has been investigated to determine th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Plants can tell the time of day and time of year by sensing and using various wavelengths of what?
A. sunlight
B. precipitation
C. moisture
D. lunar cycles
Answer:
|
|
sciq-7642
|
multiple_choice
|
Water is a versatile solvent that can dissolve many ionic and polar molecular solutes to make what?
|
[
"sulfide solutions",
"aqueous solutions",
"chloride solutions",
"sulfate solutions"
] |
B
|
Relevant Documents:
Document 0:::
In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution.
The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible").
The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first.
The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy.
Under certain conditions, the concentration of the solute can exceed its usual solubility limit. The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears.
The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property de
Document 1:::
Water () is a polar inorganic compound that is at room temperature a tasteless and odorless liquid, which is nearly colorless apart from an inherent hint of blue. It is by far the most studied chemical compound and is described as the "universal solvent" and the "solvent of life". It is the most abundant substance on the surface of Earth and the only common substance to exist as a solid, liquid, and gas on Earth's surface. It is also the third most abundant molecule in the universe (behind molecular hydrogen and carbon monoxide).
Water molecules form hydrogen bonds with each other and are strongly polar. This polarity allows it to dissociate ions in salts and bond to other polar substances such as alcohols and acids, thus dissolving them. Its hydrogen bonding causes its many unique properties, such as having a solid form less dense than its liquid form, a relatively high boiling point of 100 °C for its molar mass, and a high heat capacity.
Water is amphoteric, meaning that it can exhibit properties of an acid or a base, depending on the pH of the solution that it is in; it readily produces both H+ and OH− ions. Related to its amphoteric character, it undergoes self-ionization. The product of the activities, or approximately the concentrations, of H+ and OH− is a constant, so their respective concentrations are inversely proportional to each other.
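A short numeric illustration of that inverse proportionality, assuming the usual ion product Kw ≈ 1.0 × 10⁻¹⁴ at 25 °C:

import math

KW = 1.0e-14   # ion product of water at 25 °C, (mol/L)^2

for h in (1e-3, 1e-7, 1e-11):       # assumed H+ concentrations in mol/L
    oh = KW / h                     # OH- concentration follows from Kw
    print(f"[H+]={h:.0e}  [OH-]={oh:.0e}  pH={-math.log10(h):.0f}")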
Physical properties
Water is the chemical substance with chemical formula ; one molecule of water has two hydrogen atoms covalently bonded to a single oxygen atom. Water is a tasteless, odorless liquid at ambient temperature and pressure. Liquid water has weak absorption bands at wavelengths of around 750 nm which cause it to appear to have a blue color. This can easily be observed in a water-filled bath or wash-basin whose lining is white. Large ice crystals, as in glaciers, also appear blue.
Under standard conditions, water is primarily a liquid, unlike other analogous hydrides of the oxygen family, which are generally gaseou
Document 2:::
In chemistry, solvent effects are the influence of a solvent on chemical reactivity or molecular associations. Solvents can have an effect on solubility, stability and reaction rates and choosing the appropriate solvent allows for thermodynamic and kinetic control over a chemical reaction.
A solute dissolves in a solvent when solvent-solute interactions are more favorable than solute-solute interaction.
Effects on stability
Different solvents can affect the equilibrium constant of a reaction by differential stabilization of the reactant or product. The equilibrium is shifted in the direction of the substance that is preferentially stabilized.
Stabilization of the reactant or product can occur through any of the different non-covalent interactions with the solvent such as H-bonding, dipole-dipole interactions, van der Waals interactions etc.
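Since K is related to the reaction free energy by ΔG° = −RT ln K, a differential stabilization ΔΔG of the product translates directly into a multiplicative shift in K. A quick numeric sketch, with the stabilization energy assumed for illustration:

import math

R   = 8.314      # gas constant, J/(mol*K)
T   = 298.15     # temperature, K
ddG = -5000.0    # assumed extra stabilization of the product, J/mol

# Stabilizing the product by ddG multiplies the equilibrium constant:
factor = math.exp(-ddG / (R * T))
print(f"K shifts by a factor of about {factor:.1f}")   # ~7.5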
Acid-base equilibria
The ionization equilibrium of an acid or a base is affected by a solvent change. The effect of the solvent is not only because of its acidity or basicity but also because of its dielectric constant and its ability to preferentially solvate and thus stabilize certain species in acid-base equilibria. A change in the solvating ability or dielectric constant can thus influence the acidity or basicity.
Of the solvents considered here, water is the most polar, followed by DMSO, and then acetonitrile. Consider the following acid dissociation equilibrium:
HA ⇌ A− + H+
Water, being the most polar of the three solvents, stabilizes the ionized species to a greater extent than does DMSO or acetonitrile. Ionization, and thus acidity, is therefore greatest in water and lower in DMSO and acetonitrile, as reflected in pKa values measured at 25 °C in water, dimethyl sulfoxide (DMSO) and acetonitrile (ACN).
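Because pKa = −log₁₀ Ka, even modest pKa differences between solvents correspond to enormous differences in the ionization constant. A quick sketch with illustrative pKa values chosen for the example (not measured data):

# Illustrative pKa values for the same acid in three solvents.
pka = {"water": 4.8, "DMSO": 12.3, "acetonitrile": 22.3}

for solvent, value in pka.items():
    ka = 10 ** (-value)            # Ka = 10^(-pKa)
    print(f"{solvent}: pKa = {value}, Ka = {ka:.1e}")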
Keto–enol equilibria
Many carbonyl compounds exhibit keto–enol tautomerism. This effect is especially pronounced in 1,3-dicarbonyl compounds that can form hydrogen-bonded enols. The e
Document 3:::
A hydrophile is a molecule or other molecular entity that is attracted to water molecules and tends to be dissolved by water.
In contrast, hydrophobes are not attracted to water and may seem to be repelled by it. Hygroscopics are attracted to water, but are not dissolved by water.
Molecules
A hydrophilic molecule or portion of a molecule is one whose interactions with water and other polar substances are more thermodynamically favorable than their interactions with oil or other hydrophobic solvents. They are typically charge-polarized and capable of hydrogen bonding. This makes these molecules soluble not only in water but also in other polar solvents.
Hydrophilic molecules (and portions of molecules) can be contrasted with hydrophobic molecules (and portions of molecules). In some cases, both hydrophilic and hydrophobic properties occur in a single molecule. An example of these amphiphilic molecules is the lipids that comprise the cell membrane. Another example is soap, which has a hydrophilic head and a hydrophobic tail, allowing it to dissolve in both water and oil.
Hydrophilic and hydrophobic molecules are also known as polar molecules and nonpolar molecules, respectively. Some hydrophilic substances do not dissolve. This type of mixture is called a colloid.
An approximate rule of thumb for hydrophilicity of organic compounds is that solubility of a molecule in water is more than 1 mass % if there is at least one neutral hydrophile group per 5 carbons, or at least one electrically charged hydrophile group per 7 carbons.
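A direct encoding of that rule of thumb in Python, taking the carbon count and the two kinds of hydrophilic-group counts as inputs (the function name and interface are ours, for illustration only):

def likely_water_soluble(carbons, neutral_groups=0, charged_groups=0):
    """Rough heuristic: solubility above ~1 mass % is expected if there is
    at least one neutral hydrophilic group per 5 carbons, or at least one
    electrically charged hydrophilic group per 7 carbons."""
    if neutral_groups and carbons <= 5 * neutral_groups:
        return True
    if charged_groups and carbons <= 7 * charged_groups:
        return True
    return False

print(likely_water_soluble(2, neutral_groups=1))   # ethanol-like: True
print(likely_water_soluble(8))                     # octane-like: False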
Hydrophilic substances (e.g., salts) can seem to attract water out of the air. Sugar is also hydrophilic, and like salt is sometimes used to draw water out of foods. Sugar sprinkled on cut fruit will "draw out the water" through hydrophilia, making the fruit mushy and wet, as in a common strawberry compote recipe.
Chemicals
Liquid hydrophilic chemicals complexed with solid chemicals can be used to optimize solubility of hydrophobic chemical
Document 4:::
MOSCED (short for "modified separation of cohesive energy density" model) is a thermodynamic model for the estimation of limiting activity coefficients (also known as activity coefficients at infinite dilution). Historically, MOSCED can be regarded as an improved modification of the Hansen method and the Hildebrand solubility model, adding higher-order interaction terms such as polarity, induction and separate hydrogen-bonding terms. This allows predictions for polar and associating compounds, which most solubility-parameter models handle poorly. In addition to making quantitative predictions, MOSCED can be used to understand fundamental molecular-level interactions for intuitive solvent selection and formulation.
Beyond infinite dilution, MOSCED can be used to parameterize excess Gibbs free energy models such as NRTL, Wilson and modified UNIFAC in order to map out the vapor–liquid equilibria of mixtures. This was demonstrated briefly by Schreiber and Eckert, who used infinite-dilution data to parameterize the Wilson equation.
The first publication dates from 1984, and a major revision of the parameters was carried out in 2005. This revised version is described here.
Basic principle
MOSCED uses component-specific parameters describing electronic properties of a compound. These five properties are partly derived from experimental values and partly fitted to experimental data. In addition to the five electronic properties the model uses the molar volume for every component.
These parameters are then entered in several equations to obtain the limiting activity coefficient of an infinitely diluted solute in a solvent. These equations have further parameters which have been found empirically.
The authors found an average absolute deviation of 10.6% against their database of experimental data. The database contains limiting activity coefficients of binary systems of non-polar, polar and hydrogen compounds, but no water. As can be seen in the deviation chart, the system
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Water is a versatile solvent that can dissolve many ionic and polar molecular solutes to make what?
A. sulfide solutions
B. aqueous solutions
C. chloride solutions
D. sulfate solutions
Answer:
|
|
sciq-4120
|
multiple_choice
|
Incomplete dominance and epistasis are both terms that define what?
|
[
"genetic habits",
"genetic difficulties",
"genetic relationships",
"learned behaviors"
] |
C
|
Relevant Documents:
Document 0:::
The Generalist Genes hypothesis of learning abilities and disabilities was originally proposed in an article by Plomin & Kovas (2005).
The Generalist Genes hypothesis suggests that most genes associated with common learning disabilities and abilities are generalist in three ways.
Firstly, the same genes that influence common learning abilities (e.g., high reading aptitude) are also responsible for common learning disabilities (e.g., reading disability): they are strongly genetically correlated.
Secondly, many of the genes associated with one aspect of a learning disability (e.g., vocabulary problems) also influence other aspects of this learning disability (e.g., grammar problems).
Thirdly, genes that influence one learning disability (e.g., reading disability) are largely the same as those that influence other learning disabilities (e.g., mathematics disability).
The Generalist Genes hypothesis has important implications for education, cognitive sciences and molecular genetics.
Document 1:::
In genetics, dominance is the phenomenon of one variant (allele) of a gene on a chromosome masking or overriding the effect of a different variant of the same gene on the other copy of the chromosome. The first variant is termed dominant and the second is called recessive. This state of having two different variants of the same gene on each chromosome is originally caused by a mutation in one of the genes, either new (de novo) or inherited. The terms autosomal dominant or autosomal recessive are used to describe gene variants on non-sex chromosomes (autosomes) and their associated traits, while those on sex chromosomes (allosomes) are termed X-linked dominant, X-linked recessive or Y-linked; these have an inheritance and presentation pattern that depends on the sex of both the parent and the child (see Sex linkage). Since there is only one copy of the Y chromosome, Y-linked traits cannot be dominant or recessive. Additionally, there are other forms of dominance, such as incomplete dominance, in which a gene variant has a partial effect compared to when it is present on both chromosomes, and co-dominance, in which different variants on each chromosome both show their associated traits.
Dominance is a key concept in Mendelian inheritance and classical genetics. Letters and Punnett squares are used to demonstrate the principles of dominance in teaching, and the use of upper-case letters for dominant alleles and lower-case letters for recessive alleles is a widely followed convention. A classic example of dominance is the inheritance of seed shape in peas. Peas may be round, associated with allele R, or wrinkled, associated with allele r. In this case, three combinations of alleles (genotypes) are possible: RR, Rr, and rr. The RR (homozygous) individuals have round peas, and the rr (homozygous) individuals have wrinkled peas. In Rr (heterozygous) individuals, the R allele masks the presence of the r allele, so these individuals also have round peas. Thus, allele R is d
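As an illustrative aside, the monohybrid cross described in this excerpt is easy to reproduce programmatically; this short Python sketch builds the Punnett square for Rr x Rr and applies the textbook rule that any genotype containing the dominant R allele gives round peas:

from itertools import product
from collections import Counter

def punnett(parent1, parent2):
    # Each parent contributes one allele; sorting makes "Rr" and "rR" the same genotype.
    return Counter("".join(sorted(a + b)) for a, b in product(parent1, parent2))

square = punnett("Rr", "Rr")
print(square)  # Counter({'Rr': 2, 'RR': 1, 'rr': 1})
for genotype, count in square.items():
    phenotype = "round" if "R" in genotype else "wrinkled"
    print(genotype, count, phenotype)  # the classic 3:1 round-to-wrinkled ratio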
Document 2:::
Hard inheritance was a model of heredity that explicitly excludes any acquired characteristics, such as in Lamarckism. It is the exact opposite of soft inheritance; the terms were coined by Ernst Mayr to contrast ideas about inheritance.
Hard inheritance states that characteristics of an organism's offspring (passed on through DNA) will not be affected by the actions that the parental organism performs during its lifetime. For example: a medieval blacksmith who uses only his right arm to forge steel will not sire a son with a stronger right arm than left because the blacksmith's actions do not alter his genetic code. Inheritance due to usage and non-usage is excluded. Inheritance works as described in the modern synthesis of evolutionary biology.
The existence of inherited epigenetic variants has led to renewed interest in soft inheritance.
Document 3:::
Epistasis is a phenomenon in genetics in which the effect of a gene mutation is dependent on the presence or absence of mutations in one or more other genes, respectively termed modifier genes. In other words, the effect of the mutation is dependent on the genetic background in which it appears. Epistatic mutations therefore have different effects on their own than when they occur together. Originally, the term epistasis specifically meant that the effect of a gene variant is masked by that of a different gene.
The concept of epistasis originated in genetics in 1907 but is now used in biochemistry, computational biology and evolutionary biology. The phenomenon arises due to interactions, either between genes (such as mutations also being needed in regulators of gene expression) or within them (multiple mutations being needed before the gene loses function), leading to non-linear effects. Epistasis has a great influence on the shape of evolutionary landscapes, which leads to profound consequences for evolution and for the evolvability of phenotypic traits.
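To make the non-linearity concrete, here is a minimal numerical illustration with invented effect sizes: under a purely additive model the double mutant's effect would equal the sum of the single-mutant effects, and epistasis is the deviation from that sum:

# Hypothetical log-fitness effects of two mutations, alone and combined.
effect_a = -0.10    # mutation in gene A alone
effect_b = -0.15    # mutation in gene B alone
effect_ab = -0.40   # both mutations together (measured)

additive_expectation = effect_a + effect_b      # -0.25 under no interaction
epistasis = effect_ab - additive_expectation    # -0.15: negative (synergistic) epistasis
print(f"expected {additive_expectation:+.2f}, observed {effect_ab:+.2f}, epistasis {epistasis:+.2f}")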
History
Understanding of epistasis has changed considerably through the history of genetics and so too has the use of the term. The term was first used by William Bateson and his collaborators Florence Durham and Muriel Wheldale Onslow. In early models of natural selection devised in the early 20th century, each gene was considered to make its own characteristic contribution to fitness, against an average background of other genes. Some introductory courses still teach population genetics this way. Because of the way that the science of population genetics was developed, evolutionary geneticists have tended to think of epistasis as the exception. However, in general, the expression of any one allele depends in a complicated way on many other alleles.
In classical genetics, if genes A and B are mutated, and each mutation by itself produces a unique phenotype but the two mutations together show the same phenotype
Document 4:::
In statistical genetics, inclusive composite interval mapping (ICIM) has been proposed as an approach to QTL (quantitative trait locus) mapping for populations derived from bi-parental crosses. QTL mapping is based on genetic linkage map and phenotypic data to attempt to locate individual genetic factors on chromosomes and to estimate their genetic effects.
Additive and dominance QTL mapping
Two genetic assumptions used in ICIM are (1) the genotypic value of an individual is the summation of effects from all genes affecting the trait of interest; and (2) linked QTL are separated by at least one blank marker interval. Under the two assumptions, they proved that the additive effect of a QTL located in a marker interval can be completely absorbed by the regression coefficients of the two flanking markers, while the QTL dominance effect causes marker dominance effects, as well as additive by additive and dominance by dominance interactions between the two flanking markers. By including two multiplication variables between flanking markers, the additive and dominance effects of one QTL can be completely absorbed. As a consequence, an inclusive linear model of phenotype regressing on all genetic markers (and marker multiplications) can be used to fit the positions and additive (and dominance) effects of all QTL in the genome. A two-step strategy was adopted in ICIM for additive and dominance QTL mapping. In the first step, stepwise regression was applied to identify the most significant marker variables in the linear model. In the second step, one-dimensional scanning or interval mapping was conducted for detecting QTL and estimating their additive and dominance effects, based on the phenotypic values adjusted by the regression model in the first step.
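The flavor of the inclusive linear model can be sketched with simulated data; this is not the authors' implementation (which uses stepwise regression and LOD-based scanning), just an ordinary least-squares fit of phenotype on marker codes plus products of adjacent flanking markers, with all names and values invented for illustration:

import numpy as np

rng = np.random.default_rng(0)
n_ind, n_mark = 200, 10
markers = rng.integers(0, 2, size=(n_ind, n_mark)).astype(float)  # 0/1 marker codes

# Simulate a phenotype driven by markers flanking a QTL (indices 3 and 4) plus noise.
phenotype = 2.0 * markers[:, 3] + 1.0 * markers[:, 4] + rng.normal(0, 1, n_ind)

# Inclusive design matrix: intercept, all markers, and adjacent-marker products.
interactions = markers[:, :-1] * markers[:, 1:]
X = np.column_stack([np.ones(n_ind), markers, interactions])
coef, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
print(coef[1:1 + n_mark].round(2))  # flanking markers 3 and 4 absorb the simulated effect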
Genetic and statistical properties in additive QTL mapping
Computer simulations were used to study the asymptotic properties of ICIM in additive QTL mapping. The test statistic LOD score linearly increases as the increase in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Incomplete dominance and epistasis are both terms that define what?
A. genetic habits
B. genetic difficulties
C. genetic relationships
D. learned behaviors
Answer:
|
|
sciq-7144
|
multiple_choice
|
What type of dating determines which of two fossils is older or younger than the other but not their age in years?
|
[
"normal",
"constant",
"normative",
"relative"
] |
D
|
Relevant Documents:
Document 0:::
Chronology (from Latin chronologia, from Ancient Greek χρόνος, chrónos, "time"; and -λογία, -logia) is the science of arranging events in their order of occurrence in time. Consider, for example, the use of a timeline or sequence of events. It is also "the determination of the actual temporal sequence of past events".
Chronology is a part of periodization. It is also a part of the discipline of history including earth history, the earth sciences, and study of the geologic time scale.
Related fields
Chronology is the science of locating historical events in time. It relies upon chronometry, which is also known as timekeeping, and historiography, which examines the writing of history and the use of historical methods. Radiocarbon dating estimates the age of formerly living things by measuring the proportion of carbon-14 isotope in their carbon content. Dendrochronology estimates the age of trees by correlation of the various growth rings in their wood to known year-by-year reference sequences in the region to reflect year-to-year climatic variation. Dendrochronology is used in turn as a calibration reference for radiocarbon dating curves.
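As a rough illustration of the radiocarbon principle mentioned above (ignoring the dendrochronological calibration step), the uncalibrated age follows from exponential decay of carbon-14, whose half-life is about 5,730 years:

import math

HALF_LIFE_C14 = 5730.0                   # years
MEAN_LIFE = HALF_LIFE_C14 / math.log(2)  # ~8267 years

def raw_radiocarbon_age(fraction_remaining):
    """Uncalibrated age from the measured C-14/C ratio relative to a modern sample."""
    return -MEAN_LIFE * math.log(fraction_remaining)

print(round(raw_radiocarbon_age(0.5)))   # ~5730 years: one half-life
print(round(raw_radiocarbon_age(0.25)))  # ~11460 years: two half-lives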
Calendar and era
The familiar terms calendar and era (within the meaning of a coherent system of numbered calendar years) concern two complementary fundamental concepts of chronology. For example, for eight centuries the calendar belonging to the Christian era, which was brought into use in the 8th century by Bede, was the Julian calendar; after the year 1582 it was the Gregorian calendar. Dionysius Exiguus (about the year 500) was the founder of that era, which is nowadays the most widespread dating system on earth. An epoch is the date (year usually) when an era begins.
Ab Urbe condita era
Ab Urbe condita is Latin for "from the founding of the City (Rome)", traditionally set in 753 BC. It was used to identify the Roman year by a few Roman historians. Modern historians use it much more frequently than the Romans themselves did; the
Document 1:::
A megabias, or a taphonomic megabias, is a large-scale pattern in the quality of the fossil record that affects paleobiologic analysis at provincial to global levels and at timescales usually exceeding ten million years. It can result from major shifts in intrinsic and extrinsic properties of organisms, including morphology and behaviour in relation to other organisms, or shifts in the global environment, which can cause secular or long-term cyclic changes in preservation.
Introduction
The fossil record exhibits bias at many different levels. At the most basic level, there is a global bias towards biomineralizing organisms, because biomineralized body parts are more resistant to decay and degradation. Due to the principle of uniformitarianism, there is a basic assumption in geology that the formation of rocks has occurred by the same naturalistic processes throughout history, and thus that the reach of such biases remains stable over time. A megabias is a direct contradiction of this, whereby changes occur in large scale paleobiologic patterns. This includes:
Changes in diversity and community structure over tens of millions of years
Variation in the quality of the fossil record between mass and background extinction times
Variation among different climate states, biogeographic provinces, and tectonic settings.
It is generally assumed that the quality of the fossil record decreases globally and across all taxa with increasing age, because more time is available for the diagenesis and destruction of both fossils and enclosing rocks, and thus the term "megabias" is usually used to refer to global trends in preservation. However, it has been noted that the fossil record of some taxa actually improves with greater age. Examples such as this, and other related paleobiological trends, clearly indicate the action of a megabias, but only within one particular taxon. Hence, it is necessary to define four classes of megabias related to the reach of the bias, first defined
Document 2:::
Nitrogen dating is a form of relative dating which relies on the predictable breakdown and release of amino acids from bone samples to estimate the age of the object. For human bones, the assumption of about 5% nitrogen in the bone, mostly in the form of collagen, allows fairly consistent dating techniques.
Compared to other dating techniques, nitrogen dating can be unreliable because leaching from bone depends on temperature, soil pH, groundwater, and the presence of microorganisms that digest nitrogen-rich material such as collagen. Some studies compare nitrogen dating results with results from methods like fluorine absorption dating to create more accurate estimates, though some conditions, such as thin, porous bones, can more rapidly skew the dates produced by multiple methods.
Document 3:::
A chronozone or chron is a unit in chronostratigraphy, defined by events such as
geomagnetic reversals (magnetozones), or based on the presence of specific fossils (biozone or biochronozone).
According to the International Commission on Stratigraphy, the term "chronozone" refers to the rocks formed during a particular time period, while "chron" refers to that time period.
Although non-hierarchical, chronozones have been recognized as useful markers or benchmarks of time in the rock record. Chronozones are non-hierarchical in that they do not need to correspond across geographic or geologic boundaries, nor be equal in length, although an early constraint required that a chronozone be defined as smaller than a geological stage. Another early use was hierarchical, in that Harland et al. (1989) used "chronozone" for the slice of time smaller than a faunal stage defined in biostratigraphy.
The ICS superseded these earlier usages in 1994.
The key factor in designating an internationally acceptable chronozone is whether the overall fossil column is clear, unambiguous, and widespread. Some accepted chronozones contain others, and certain larger chronozones have been designated which span whole defined geological time units, both large and small.
For example, the chronozone Pliocene is a subset of the chronozone Neogene, and the chronozone Pleistocene is a subset of the chronozone Quaternary.
See also
Body form
Chronology (geology)
European Mammal Neogene
Geologic time scale
North American Land Mammal Age
Type locality (geology)
List of GSSPs
Document 4:::
In biostratigraphy, biostratigraphic units or biozones are intervals of geological strata that are defined on the basis of their characteristic fossil taxa, as opposed to a lithostratigraphic unit which is defined by the lithological properties of the surrounding rock.
A biostratigraphic unit is defined by the zone fossils it contains. These may be a single taxon or combinations of taxa if the taxa are relatively abundant, or variations in features related to the distribution of fossils. The same strata may be zoned differently depending on the diagnostic criteria or fossil group chosen, so there may be several, sometimes overlapping, biostratigraphic units in the same interval. Like lithostratigraphic units, biozones must have a type section designated as a stratotype. These stratotypes are named according to the typical taxon (or taxa) that are found in that particular biozone.
The boundary of two distinct biostratigraphic units is called a biohorizon. Biozones can be further subdivided into subbiozones, and multiple biozones can be grouped together in a superbiozone in which the grouped biozones usually have a related characteristic. A succession of biozones is called biozonation. The length of time represented by a biostratigraphic zone is called a biochron.
History
The concept of a biozone was first established by the 19th century paleontologist Albert Oppel, who characterized rock strata by the species of the fossilized animals found in them, which he called zone fossils. Oppel's biozonation was mainly based on Jurassic ammonites he found throughout Europe, which he used to classify the period into 33 zones (now 60). Alcide d'Orbigny would further reinforce the concept in his Prodrome de Paléontologie Stratigraphique, in which he established comparisons between geological stages and their biostratigraphy.
Types of biozone
The International Commission on Stratigraphy defines the following types of biozones:
Range zones
Range zones are biozones defined b
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of dating determines which of two fossils is older or younger than the other but not their age in years?
A. normal
B. constant
C. normative
D. relative
Answer:
|
|
sciq-3157
|
multiple_choice
|
What is made up of organisms of the same species that live in the same area?
|
[
"tissue",
"population",
"system",
"countries"
] |
B
|
Relevant Documents:
Document 0:::
This glossary of biology terms is a list of definitions of fundamental terms and concepts used in biology, the study of life and of living organisms. It is intended as introductory material for novices; for more specific and technical definitions from sub-disciplines and related fields, see Glossary of cell biology, Glossary of genetics, Glossary of evolutionary biology, Glossary of ecology, Glossary of environmental science and Glossary of scientific naming, or any of the organism-specific glossaries in :Category:Glossaries of biology.
Related to this search
Index of biology articles
Outline of biology
Glossaries of sub-disciplines and related fields:
Glossary of botany
Glossary of ecology
Glossary of entomology
Glossary of environmental science
Glossary of genetics
Glossary of ichthyology
Glossary of ornithology
Glossary of scientific naming
Glossary of speciation
Glossary of virology
Document 1:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind: neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases, which has aided overall in the understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
Document 2:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad
Document 3:::
Ecological units comprise concepts such as population, community, and ecosystem as the basic units, which are at the basis of ecological theory and research, as well as a focal point of many conservation strategies. The concept of ecological units continues to suffer from inconsistencies and confusion over its terminology. Analyses of the existing concepts used in describing ecological units have determined that they differ with respect to four major criteria:
Whether they are defined statistically or via a network of interactions,
If their boundaries are drawn by topographical or process-related criteria,
How high the required internal relationships are,
And if they are perceived as "real" entities or abstractions by an observer.
A population is considered to be the smallest ecological unit, consisting of a group of individuals that belong to the same species. A community would be the next classification, referring to all of the populations present in an area at a specific time, followed by an ecosystem, referring to the community and its interactions with its physical environment. An ecosystem is the most commonly used ecological unit and can be universally defined by two common traits:
The unit is often defined in terms of a natural border (maritime boundary, watersheds, etc.)
Abiotic components and organisms within the unit are considered to be interlinked.
See also
Biogeographic realm
Ecoregion
Ecotope
Holobiont
Functional ecology
Behavior settings
Regional geology
Document 4:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is made up of organisms of the same species that live in the same area?
A. tissue
B. population
C. system
D. countries
Answer:
|
|
sciq-3422
|
multiple_choice
|
Where do most amphibians live, salt water or fresh water?
|
[
"fresh water",
"saltwater",
"deserts",
"aquariums"
] |
A
|
Relevant Documents:
Document 0:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
Document 1:::
The University of Michigan Biological Station (UMBS) is a research and teaching facility operated by the University of Michigan. It is located on the south shore of Douglas Lake in Cheboygan County, Michigan. The station consists of 10,000 acres (40 km2) of land near Pellston, Michigan in the northern Lower Peninsula of Michigan and 3,200 acres (13 km2) on Sugar Island in the St. Mary's River near Sault Ste. Marie, in the Upper Peninsula. It is one of only 28 Biosphere Reserves in the United States.
Overview
Founded in 1909, it has grown to include approximately 150 buildings, including classrooms, student cabins, dormitories, a dining hall, and research facilities. Undergraduate and graduate courses are available in the spring and summer terms. It has a full-time staff of 15.
In the 2000s, UMBS is increasingly focusing on the measurement of climate change. Its field researchers are gauging the impact of global warming and increased levels of atmospheric carbon dioxide on the ecosystem of the upper Great Lakes region, and are using field data to improve the computer models used to forecast further change. Several archaeological digs have been conducted at the station as well.
UMBS field researchers sometimes call the station "bug camp" amongst themselves. This is believed to be due to the number of mosquitoes and other insects present. It is also known as "The Bio-Station".
The UMBS is also home to Michigan's most endangered species and one of the most endangered species in the world: the Hungerford's Crawling Water Beetle. The species lives in only five locations in the world, two of which are in Emmet County. One of these, a two and a half mile stretch downstream from the Douglas Road crossing of the East Branch of the Maple River, supports the only stable population of the Hungerford's Crawling Water Beetle, with roughly 1000 specimens. This area, though technically not part of the UMBS, is largely within and along the boundary of the University of Michigan
Document 2:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 3:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
Document 4:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year schools, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where do most amphibians live, salt water or fresh water?
A. fresh water
B. saltwater
C. deserts
D. aquariums
Answer:
|
|
sciq-7560
|
multiple_choice
|
What is the term for the number that describes an electron's orbital orientation in space?
|
[
"magnetic electron number",
"relative quantum number",
"imaging quantum number",
"magnetic quantum number"
] |
D
|
Relevant Documents:
Document 0:::
In atomic physics, a magnetic quantum number is a quantum number used to distinguish quantum states of an electron or other particle according to its angular momentum along a given axis in space. The orbital magnetic quantum number (mℓ or m) distinguishes the orbitals available within a given subshell of an atom. It specifies the component of the orbital angular momentum that lies along a given axis, conventionally called the z-axis, so it describes the orientation of the orbital in space. The spin magnetic quantum number ms specifies the z-axis component of the spin angular momentum for a particle having spin quantum number s. For an electron, s is 1/2, and ms is either +1/2 or −1/2, often called "spin-up" and "spin-down", or α and β. The term magnetic in the name refers to the magnetic dipole moment associated with each type of angular momentum, so states having different magnetic quantum numbers shift in energy in a magnetic field according to the Zeeman effect.
The four quantum numbers conventionally used to describe the quantum state of an electron in an atom are the principal quantum number n, the azimuthal (orbital) quantum number ℓ, and the magnetic quantum numbers mℓ and ms. Electrons in a given subshell of an atom (such as s, p, d, or f) are defined by values of ℓ (0, 1, 2, or 3). The orbital magnetic quantum number mℓ takes integer values in the range from −ℓ to +ℓ, including zero. Thus the s, p, d, and f subshells contain 1, 3, 5, and 7 orbitals each, with values of mℓ within the ranges 0, ±1, ±2, ±3 respectively. Each of these orbitals can accommodate up to two electrons (with opposite spins), forming the basis of the periodic table.
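The bookkeeping in the preceding paragraph is easy to verify; this short sketch enumerates the allowed mℓ values for each subshell and recovers the 1/3/5/7 orbital counts, doubling them for the two spin states:

SUBSHELLS = {0: "s", 1: "p", 2: "d", 3: "f"}

for l, label in SUBSHELLS.items():
    m_values = list(range(-l, l + 1))  # m_l runs from -l to +l, including 0
    print(f"{label}: m_l = {m_values}, {len(m_values)} orbitals, "
          f"{2 * len(m_values)} electrons")  # two spin states per orbital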
Other magnetic quantum numbers are similarly defined, such as mj for the z-axis component of the total electronic angular momentum j, and mI for the nuclear spin I. Magnetic quantum numbers are capitalized to indicate totals for a system of particles, such as ML or MS for the total z-axis orbital angular momentum of all the electrons in an atom.
Derivation
There i
Document 1:::
In quantum mechanics, the azimuthal quantum number is a quantum number for an atomic orbital that determines its orbital angular momentum and describes the shape of the orbital. The azimuthal quantum number is the second of a set of quantum numbers that describe the unique quantum state of an electron (the others being the principal quantum number n, the magnetic quantum number mℓ, and the spin quantum number ms). It is also known as the orbital angular momentum quantum number, orbital quantum number, subsidiary quantum number, or second quantum number, and is symbolized as ℓ (pronounced ell).
Derivation
Connected with the energy states of the atom's electrons are four quantum numbers: n, ℓ, mℓ, and ms. These specify the complete, unique quantum state of a single electron in an atom, and make up its wavefunction or orbital. When solving to obtain the wave function, the Schrödinger equation reduces to three equations that lead to the first three quantum numbers. Therefore, the equations for the first three quantum numbers are all interrelated. The azimuthal quantum number arose in the solution of the polar part of the wave equation as shown below, relying on the spherical coordinate system, which generally works best with models having some degree of spherical symmetry.
An atomic electron's angular momentum, L, is related to its quantum number ℓ by the following equation:

L² Ψ = ħ² ℓ(ℓ + 1) Ψ

where ħ is the reduced Planck constant, L² is the orbital angular momentum operator and Ψ is the wavefunction of the electron. The quantum number ℓ is always a non-negative integer: 0, 1, 2, 3, etc. L has no real meaning except in its use as the angular momentum operator. When referring to angular momentum, it is better to simply use the quantum number ℓ.
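A quick numerical companion to the eigenvalue relation above: the magnitude of the orbital angular momentum is ħ√(ℓ(ℓ + 1)), evaluated here in SI units for the first few values of ℓ:

import math

HBAR = 1.054571817e-34  # reduced Planck constant, J s

for l in range(4):
    L = HBAR * math.sqrt(l * (l + 1))  # magnitude of orbital angular momentum
    print(f"l = {l}: |L| = {L:.3e} J s")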
Atomic orbitals have distinctive shapes denoted by letters. In the illustration, the letters s, p, and d (a convention originating in spectroscopy) describe the shape of the atomic orbital.
Their wavefunctions take the form of spherical harmonics, and
Document 2:::
In quantum mechanics, the principal quantum number (symbolized n) is one of four quantum numbers assigned to each electron in an atom to describe that electron's state. Its values are natural numbers (from 1) making it a discrete variable.
Apart from the principal quantum number, the other quantum numbers for bound electrons are the azimuthal quantum number ℓ, the magnetic quantum number ml, and the spin quantum number s.
Overview and history
As n increases, the electron is also at a higher energy and is, therefore, less tightly bound to the nucleus. For higher n the electron is farther from the nucleus, on average. For each value of n there are n accepted ℓ (azimuthal) values ranging from 0 to n − 1 inclusively, hence higher-n electron states are more numerous. Accounting for two states of spin, each n-shell can accommodate up to 2n2 electrons.
In a simplistic one-electron model described below, the total energy of an electron is a negative inverse quadratic function of the principal quantum number n, leading to degenerate energy levels for each n > 1. In more complex systems—those having forces other than the nucleus–electron Coulomb force—these levels split. For multielectron atoms this splitting results in "subshells" parametrized by ℓ. Description of energy levels based on n alone gradually becomes inadequate for atomic numbers starting from 5 (boron) and fails completely on potassium (Z = 19) and afterwards.
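Both quantitative claims in this excerpt, the 2n² shell capacity and the inverse-quadratic one-electron energy, can be tabulated directly; the 13.6 eV value below is the standard hydrogen ground-state binding energy:

RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy, eV

for n in range(1, 5):
    capacity = 2 * n ** 2          # electrons per shell: 2, 8, 18, 32
    energy = -RYDBERG_EV / n ** 2  # one-electron (hydrogen-like) level
    print(f"n = {n}: capacity = {capacity}, E_n = {energy:.2f} eV")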
The principal quantum number was first created for use in the semiclassical Bohr model of the atom, distinguishing between different energy levels. With the development of modern quantum mechanics, the simple Bohr model was replaced with a more complex theory of atomic orbitals. However, the modern theory still requires the principal quantum number.
Derivation
There is a set of quantum numbers associated with the energy states of the atom. The four quantum numbers n, ℓ, m, and s specify the complete and unique quantum state of a single electron in a
Document 3:::
In atomic theory and quantum mechanics, an atomic orbital is a function describing the location and wave-like behavior of an electron in an atom. This function can be used to calculate the probability of finding any electron of an atom in any specific region around the atom's nucleus. The term atomic orbital may also refer to the physical region or space where the electron can be calculated to be present, as predicted by the particular mathematical form of the orbital.
Each orbital in an atom is characterized by a set of values of the three quantum numbers n, ℓ, and mℓ, which respectively correspond to the electron's energy, its angular momentum, and an angular momentum vector component (magnetic quantum number). As an alternative to the magnetic quantum number, the orbitals are often labeled by the associated harmonic polynomials (e.g., xy, x² − y²). Each such orbital can be occupied by a maximum of two electrons, each with its own projection of spin ms. The simple names s orbital, p orbital, d orbital, and f orbital refer to orbitals with angular momentum quantum number ℓ = 0, 1, 2, and 3 respectively. These names, together with the value of n, are used to describe the electron configurations of atoms. They are derived from the description by early spectroscopists of certain series of alkali metal spectroscopic lines as sharp, principal, diffuse, and fundamental. Orbitals for ℓ > 3 continue alphabetically (g, h, i, k, ...), omitting j because some languages do not distinguish between the letters "i" and "j".
Atomic orbitals are the basic building blocks of the atomic orbital model (or electron cloud or wave mechanics model), a modern framework for visualizing the submicroscopic behavior of electrons in matter. In this model the electron cloud of an atom may be seen as being built up (in approximation) in an electron configuration that is a product of simpler hydrogen-like atomic orbitals. The repeating periodicity of blocks of 2, 6, 10, and 14 elements within sections of the periodic ta
Document 4:::
The Mathematics Subject Classification (MSC) is an alphanumerical classification scheme that has collaboratively been produced by staff of, and based on the coverage of, the two major mathematical reviewing databases, Mathematical Reviews and Zentralblatt MATH. The MSC is used by many mathematics journals, which ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The current version is MSC2020.
Structure
The MSC is a hierarchical scheme, with three levels of structure. A classification can be two, three or five digits long, depending on how many levels of the classification scheme are used.
The first level is represented by a two-digit number, the second by a letter, and the third by another two-digit number. For example:
53 is the classification for differential geometry
53A is the classification for classical differential geometry
53A45 is the classification for vector and tensor analysis
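A small parser makes the three-level structure concrete; the regular expression below simply encodes the two-digit / letter / two-digit pattern described above (special wildcard forms used in practice are ignored, and parse_msc is a hypothetical helper for this sketch):

import re

MSC_PATTERN = re.compile(r"^(\d{2})([A-Z]?)(\d{2})?$")

def parse_msc(code):
    """Split an MSC code like '53A45' into its (first, second, third) levels."""
    m = MSC_PATTERN.match(code)
    if not m:
        raise ValueError(f"not a valid MSC code: {code!r}")
    return {"discipline": m.group(1), "area": m.group(2) or None,
            "specialization": m.group(3) or None}

print(parse_msc("53"))     # first level only: differential geometry
print(parse_msc("53A"))    # second level added
print(parse_msc("53A45"))  # full five-character code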
First level
At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. In addition to the typical areas of mathematical research, there are top-level categories for "History and Biography", "Mathematics Education", and for the overlap with different sciences. Physics (i.e. mathematical physics) is particularly well represented in the classification scheme with a number of different categories including:
Fluid mechanics
Quantum mechanics
Geophysics
Optics and electromagnetic theory
All valid MSC classification codes must have at least the first-level identifier.
Second level
The second-level codes are a single letter from the Latin alphabet. These represent specific areas covered by the first-level discipline. The second-level codes vary from discipline to discipline.
For example, for differential geometry, the top-level code is 53, and the second-level codes are:
A for classical differential geometry
B for local differential geometry
C for glo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for the number that describes an electron's orbital orientation in space?
A. magnetic electron number
B. relative quantum number
C. imaging quantum number
D. magnetic quantum number
Answer:
|
|
ai2_arc-27
|
multiple_choice
|
Michael learned that the movement of Earth in the solar system causes changes that can be seen on the planet. Which change could be seen on Earth in the time it takes Earth to rotate once on its axis?
|
[
"day becoming night",
"winter changing to spring",
"January changing to February",
"a new moon becoming a full moon"
] |
A
|
Relevant Documents:
Document 0:::
Earth's rotation or Earth's spin is the rotation of planet Earth around its own axis, as well as changes in the orientation of the rotation axis in space. Earth rotates eastward, in prograde motion. As viewed from the northern polar star Polaris, Earth turns counterclockwise.
The North Pole, also known as the Geographic North Pole or Terrestrial North Pole, is the point in the Northern Hemisphere where Earth's axis of rotation meets its surface. This point is distinct from Earth's North Magnetic Pole. The South Pole is the other point where Earth's axis of rotation intersects its surface, in Antarctica.
Earth rotates once in about 24 hours with respect to the Sun, but once every 23 hours, 56 minutes and 4 seconds with respect to other distant stars (see below). Earth's rotation is slowing slightly with time; thus, a day was shorter in the past. This is due to the tidal effects the Moon has on Earth's rotation. Atomic clocks show that the modern day is longer by about 1.7 milliseconds than a century ago, slowly increasing the rate at which UTC is adjusted by leap seconds. Analysis of historical astronomical records shows a slowing trend; the length of a day increased by about 2.3 milliseconds per century since the 8th century BCE.
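The two rotation periods quoted above are related through Earth's orbital motion: over one year Earth completes one more turn relative to the stars than relative to the Sun, so the sidereal day can be recovered from the 86,400-second solar day as follows:

SOLAR_DAY = 86400.0       # seconds
DAYS_PER_YEAR = 365.2422  # mean solar days in a tropical year

# One extra sidereal rotation per year relative to solar days.
sidereal_day = SOLAR_DAY * DAYS_PER_YEAR / (DAYS_PER_YEAR + 1)
h, rem = divmod(sidereal_day, 3600)
m, s = divmod(rem, 60)
print(f"{int(h)}h {int(m)}m {s:.1f}s")  # ~23h 56m 4.1s, as quoted above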
Scientists reported that in 2020 Earth had started spinning faster, after consistently spinning slower than 86,400 seconds per day in the decades before. On June 29, 2022, Earth's spin was completed in 1.59 milliseconds under 24 hours, setting a new record. Because of that trend, engineers worldwide are discussing a 'negative leap second' and other possible timekeeping measures.
This increase in speed is thought to be due to various factors, including the complex motion of its molten core, oceans, and atmosphere, the effect of celestial bodies such as the Moon, and possibly climate change, which is causing the ice at Earth's poles to melt. The masses of ice account for the Earth's shape being that of an oblate spheroid, bulging around t
Document 1:::
In astronomy, axial precession is a gravity-induced, slow, and continuous change in the orientation of an astronomical body's rotational axis. In the absence of precession, the astronomical body's orbit would show axial parallelism. In particular, axial precession can refer to the gradual shift in the orientation of Earth's axis of rotation in a cycle of approximately 26,000 years. This is similar to the precession of a spinning top, with the axis tracing out a pair of cones joined at their apices. The term "precession" typically refers only to this largest part of the motion; other changes in the alignment of Earth's axis—nutation and polar motion—are much smaller in magnitude.
Earth's precession was historically called the precession of the equinoxes, because the equinoxes moved westward along the ecliptic relative to the fixed stars, opposite to the yearly motion of the Sun along the ecliptic. Historically,
the discovery of the precession of the equinoxes is usually attributed in the West to the 2nd-century-BC astronomer Hipparchus. With improvements in the ability to calculate the gravitational force between planets during the first half of the nineteenth century, it was recognized that the ecliptic itself moved slightly, which was named planetary precession, as early as 1863, while the dominant component was named lunisolar precession. Their combination was named general precession, instead of precession of the equinoxes.
Lunisolar precession is caused by the gravitational forces of the Moon and Sun on Earth's equatorial bulge, causing Earth's axis to move with respect to inertial space. Planetary precession (an advance) is due to the small angle between the gravitational force of the other planets on Earth and its orbital plane (the ecliptic), causing the plane of the ecliptic to shift slightly relative to inertial space. Lunisolar precession is about 500 times greater than planetary precession. In addition to the Moon and Sun, the other planets also cause
Document 2:::
Solar rotation varies with latitude. The Sun is not a solid body, but is composed of a gaseous plasma. Different latitudes rotate at different periods. The source of this differential rotation is an area of current research in solar astronomy. The rate of surface rotation is observed to be the fastest at the equator (latitude φ = 0°) and to decrease as latitude increases. The solar rotation period is 24.47 days at the equator and almost 38 days at the poles. The average rotation is 28 days.
Surface rotation as an equation
The differential rotation rate is usually described by the equation:

ω = A + B sin²(φ) + C sin⁴(φ)

where ω is the angular velocity in degrees per day, φ is the solar latitude, A is the angular velocity at the equator, and B, C are constants controlling the decrease in velocity with increasing latitude. The values of A, B, and C differ depending on the techniques used to make the measurement, as well as the time period studied. A current set of accepted average values is:
A= 14.713 ± 0.0491 °/day
B= −2.396 ± 0.188 °/day
C= −1.787 ± 0.253 °/day
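Plugging these average constants into the equation above gives the latitude dependence directly; this sketch evaluates ω(φ) and the corresponding rotation period 360°/ω at a few latitudes:

import math

A, B, C = 14.713, -2.396, -1.787  # degrees per day (average values quoted above)

def omega(lat_deg):
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return A + B * s2 + C * s2 ** 2  # angular velocity, deg/day

for lat in (0, 30, 60, 75):
    w = omega(lat)
    print(f"{lat:>2} deg: {w:6.3f} deg/day, period {360 / w:5.1f} days")
    # at the equator this reproduces the 24.47-day period quoted above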
Sidereal rotation
At the equator, the solar rotation period is 24.47 days. This is called the sidereal rotation period, and should not be confused with the synodic rotation period of 26.24 days, which is the time for a fixed feature on the Sun to rotate to the same apparent position as viewed from Earth (the earth's orbital rotation is in the same direction as the sun's rotation). The synodic period is longer because the Sun must rotate for a sidereal period plus an extra amount due to the orbital motion of Earth around the Sun. Note that astrophysical literature does not typically use the equatorial rotation period, but instead often uses the definition of a Carrington rotation: a synodic rotation period of 27.2753 days or a sidereal period of 25.38 days. This chosen period roughly corresponds to the prograde rotation at a latitude of 26° north or south, which is consistent with the typical latitude of sunspot
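The sidereal/synodic relationship described in this paragraph follows from the usual frequency subtraction; using Earth's orbital period, the 24.47-day sidereal period converts to roughly the quoted 26.24-day synodic period:

SIDEREAL_SUN = 24.47  # days, equatorial sidereal rotation
EARTH_YEAR = 365.25   # days, Earth's orbital period

# Synodic frequency = sidereal frequency minus Earth's orbital frequency
# (both rotations are prograde, i.e. in the same direction).
synodic = 1.0 / (1.0 / SIDEREAL_SUN - 1.0 / EARTH_YEAR)
print(f"{synodic:.2f} days")  # ~26.2 days, matching the quoted synodic period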
Document 3:::
The length of the day (LOD), which has increased over the long term of Earth's history due to tidal effects, is also subject to fluctuations on a shorter scale of time. Exact measurements of time by atomic clocks and satellite laser ranging have revealed that the LOD is subject to a number of different changes. These subtle variations have periods that range from a few weeks to a few years. They are attributed to interactions between the dynamic atmosphere and Earth itself. The International Earth Rotation and Reference Systems Service monitors the changes.
In the absence of external torques, the total angular momentum of Earth as a whole system must be constant. Internal torques are due to relative movements and mass redistribution of Earth's core, mantle, crust, oceans, atmosphere, and cryosphere. In order to keep the total angular momentum constant, a change of the angular momentum in one region must necessarily be balanced by angular momentum changes in the other regions.
Crustal movements (such as continental drift) or polar cap melting are slow secular events. The characteristic coupling time between core and mantle has been estimated to be on the order of ten years, and the so-called 'decade fluctuations' of Earth's rotation rate are thought to result from fluctuations within the core, transferred to the mantle. The length of day (LOD) varies significantly even for time scales from a few years down to weeks (Figure), and the observed fluctuations in the LOD - after eliminating the effects of external torques - are a direct consequence of the action of internal torques. These short term fluctuations are very probably generated by the interaction between the solid Earth and the atmosphere.
The length of day of other planets also varies, particularly of the planet Venus, which has such a dynamic and strong atmosphere that its length of day fluctuates by up to 20 minutes.
Observations
Any change of the axial component of the atmospheric angular momentum (A
Document 4:::
In geodesy and astrometry, earth orientation parameters (EOP) describe irregularities in the rotation of planet Earth.
EOP provide the rotational transform from the International Terrestrial Reference System (ITRS) to the International Celestial Reference System (ICRS), or vice versa, as a function of time.
Earth's rotational velocity is not constant over time. Any motion of mass in or on Earth causes a slowdown or speedup of the rotation speed, or a change of rotation axis. Small motions produce changes too small to be measured, but movements of very large mass, like sea currents, tides, or those resulting from earthquakes, can produce discernible changes in the rotation and can change very precise astronomical observations. Global simulations of atmosphere, ocean, and land dynamics are used to create effective angular momentum (EAM) functions that can be used to predict changes in EOP.
Components
Universal time
Universal time (UT1) tracks the Earth's rotation in time, which performs one revolution in about 24 hours. The Earth's rotation is uneven, so UT is not linear with respect to atomic time. It is practically proportional to the sidereal time, which is also a direct measure of Earth rotation. The excess revolution time is called length of day (LOD). The absolute value of UT1 can be determined using space geodetic observations, such as Very Long Baseline Interferometry and Lunar laser ranging, whereas LOD can be derived from satellite observations, such as GPS, GLONASS, Galileo and satellite laser ranging to geodetic satellites. LOD changes due to gravitational effects from external bodies and geophysical processes occurring in different layers of the Earth. LOD prediction is therefore extremely difficult, owing to extreme events such as El Niño, which manifest themselves in the LOD signals.
Coordinates of the pole
Due to the very slow pole motion of the Earth, the Celestial Ephemeris Pole (CEP, or celestial pole) does not stay still on the surface of the Eart
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Michael learned that the movement of Earth in the solar system causes changes that can be seen on the planet. Which change could be seen on Earth in the time it takes Earth to rotate once on its axis?
A. day becoming night
B. winter changing to spring
C. January changing to February
D. a new moon becoming a full moon
Answer:
|
|
ai2_arc-365
|
multiple_choice
|
Which of the following best describes a mineral?
|
[
"the main nutrient in all foods",
"a type of grain found in cereals",
"a natural substance that makes up rocks",
"the decomposed plant matter found in soil"
] |
C
|
Relevant Documents:
Document 0:::
Mineral tests are a set of methods that can help identify the type of a mineral. They are used widely in mineralogy, hydrocarbon exploration and general mapping. There are over 4000 known types of minerals, each with different sub-classes. Elements make up minerals and minerals make up rocks, so testing minerals in the lab and in the field is essential to understanding the history of a rock, which aids dating, zonation, metamorphic history, the processes involved and the identification of associated minerals.
The following tests are used on specimens and thin sections through a polarizing microscope.
Color
Color of the mineral. This is not mineral specific; for example, quartz can be almost any color, take many shapes, and occur within many rock types.
Streak
Color of the mineral's powder. This can be found by rubbing the mineral across an unglazed porcelain streak plate. This is more accurate than body color but still not always mineral specific.
Lustre
This is the way light reflects from the mineral's surface. A mineral can be metallic (shiny) or non-metallic (not shiny).
Transparency
The way light travels through minerals. A mineral can be transparent (clear), translucent (cloudy) or opaque (transmitting no light).
Specific gravity
The ratio of the weight of the mineral to the weight of an equal volume of water.
Mineral habit
The shape of the crystal and its growth habit.
Magnetism
Magnetic or nonmagnetic. This can be tested using a magnet or a compass. It does not apply to all iron minerals (for example, pyrite).
Cleavage
The number, quality and orientation of the planes along which the mineral breaks, and the way cracks fracture in the mineral.
UV fluorescence
Many minerals glow when put under a UV light.
Radioactivity
Is the mineral radioactive or non-radioactive? This is measured by a Geiger counter.
Taste
This is not recommended. Is the mineral salty, bitter or does it have no taste?
Bite Test
This is not recommended. This involves biting a mineral to see if it is generally soft or hard. This was used in early gold exploration to tell the difference between pyrite (fool's gold, hard) and gold (soft).
Hardness
The Mohs Hardn
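For the specific gravity test above, a common field approach is Archimedes' method: weigh the specimen in air and again suspended in water. The sketch below shows the arithmetic; the specimen readings are hypothetical.

```python
# Minimal sketch of specific gravity via Archimedes' method. The weight lost
# when the specimen is suspended in water equals the weight of the displaced
# water, so SG = W_air / (W_air - W_water). Readings below are hypothetical.

def specific_gravity(weight_in_air: float, weight_in_water: float) -> float:
    return weight_in_air / (weight_in_air - weight_in_water)

# Hypothetical spring-balance readings (grams-force) for a dense specimen:
print(specific_gravity(75.0, 65.0))  # -> 7.5, in the range of galena (~7.4-7.6)
```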
Document 1:::
In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects.
Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting.
Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete.
Study
Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the
Document 2:::
See also
List of minerals
Document 3:::
Materials science has shaped the development of civilizations since the dawn of mankind. Better materials for tools and weapons have allowed mankind to spread and conquer, and advancements in material processing like steel and aluminum production continue to impact society today. Historians have regarded materials as so important an aspect of civilizations that entire periods of time have been defined by the predominant material used (Stone Age, Bronze Age, Iron Age). For most of recorded history, control of materials had been through alchemy or empirical means at best. The study and development of chemistry and physics assisted the study of materials, and eventually the interdisciplinary field of materials science emerged from the fusion of these studies. The history of materials science is the study of how different materials were used and developed through the history of Earth and how those materials affected the culture of the peoples of the Earth. The term "Silicon Age" is sometimes used to refer to the modern period of history during the late 20th to early 21st centuries.
Prehistory
In many cases, different cultures leave their materials as the only records, which anthropologists can use to define the existence of such cultures. The progressive use of more sophisticated materials allows archeologists to characterize and distinguish between peoples. This is partially due to the major material of use in a culture and to its associated benefits and drawbacks. Stone-Age cultures were limited by which rocks they could find locally and by which they could acquire by trading. The use of flint around 300,000 BCE is sometimes considered the beginning of the use of ceramics. The use of polished stone axes marks a significant advance, because a much wider variety of rocks could serve as tools.
The innovation of smelting and casting metals in the Bronze Age started to change the way that cultures developed and interacted with each other. Starting around 5,500 BCE,
Document 4:::
Automated mineralogy is a generic term describing a range of analytical solutions, areas of commercial enterprise, and a growing field of scientific research and engineering applications involving largely automated and quantitative analysis of minerals, rocks and man-made materials.
Technology
Automated mineralogy analytical solutions are characterised by integrating largely automated measurement techniques based on Scanning Electron Microscopy (SEM) and Energy-dispersive X-ray spectroscopy (EDS). Commercially available lab-based solutions include QEMSCAN and Mineral Liberation Analyzer (MLA) from FEI Company, Mineralogic from Zeiss, AZtecMineral from Oxford Instruments, the TIMA (Tescan integrated mineral analyzer) from TESCAN, AMICS from Bruker, and MaipSCAN from Rock Scientific. The first oil & gas wellsite solution was launched jointly by Zeiss and CGG Veritas in 2011 called RoqSCAN. This was followed approximately 6 months later by the release of QEMSCAN Wellsite by FEI Company. More recently in 2016, a ruggedized mine site solution for mining and mineral processing was launched by Zeiss called MinSCAN.
Business
The business of automated mineralogy is concerned with the commercialisation of the technology and software in terms of development and marketing of integrated solutions. This includes all aspects of: service; maintenance; customer support; R&D; marketing and sales. Customers of automated mineralogy solutions include: laboratory facilities; mine sites, well sites, and research institutions.
Applications
Automated mineralogy solutions are applied in a variety of fields requiring statistically reliable, quantitative mineralogical information. These include the following sectors: mining; O&G; coal; environmental sciences; forensic geosciences; archaeology; agribusiness; built environment and planetary geology.
History of the use of the term
The first recorded use of the term automated mineralogy in technical journals can be traced back to seminal pape
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which of the following best describes a mineral?
A. the main nutrient in all foods
B. a type of grain found in cereals
C. a natural substance that makes up rocks
D. the decomposed plant matter found in soil
Answer:
|
|
sciq-6329
|
multiple_choice
|
Molecular geometry is the three-dimensional arrangement of atoms in a what?
|
[
"molecule",
"nucleus",
"DNA",
"genes"
] |
A
|
Relevant Documents:
Document 0:::
Molecular geometry is the three-dimensional arrangement of the atoms that constitute a molecule. It includes the general shape of the molecule as well as bond lengths, bond angles, torsional angles and any other geometrical parameters that determine the position of each atom.
Molecular geometry influences several properties of a substance including its reactivity, polarity, phase of matter, color, magnetism and biological activity. The angles between bonds that an atom forms depend only weakly on the rest of the molecule, i.e. they can be understood as approximately local and hence transferable properties.
Determination
The molecular geometry can be determined by various spectroscopic methods and diffraction methods. IR, microwave and Raman spectroscopy can give information about the molecule geometry from the details of the vibrational and rotational absorbance detected by these techniques. X-ray crystallography, neutron diffraction and electron diffraction can give molecular structure for crystalline solids based on the distance between nuclei and concentration of electron density. Gas electron diffraction can be used for small molecules in the gas phase. NMR and FRET methods can be used to determine complementary information including relative distances, dihedral angles, angles, and connectivity. Molecular geometries are best determined at low temperature because at higher temperatures the molecular structure is averaged over more accessible geometries (see next section). Larger molecules often exist in multiple stable geometries (conformational isomerism) that are close in energy on the potential energy surface. Geometries can also be computed by ab initio quantum chemistry methods to high accuracy. The molecular geometry can be different as a solid, in solution, and as a gas.
The position of each atom is determined by the nature of the chemical bonds by which it is connected to its neighboring atoms. The molecular geometry can be described by the positions
Document 1:::
Biomolecular structure is the intricate folded, three-dimensional shape that is formed by a molecule of protein, DNA, or RNA, and that is important to its function. The structure of these molecules may be considered at any of several length scales ranging from the level of individual atoms to the relationships among entire protein subunits. This useful distinction among scales is often expressed as a decomposition of molecular structure into four levels: primary, secondary, tertiary, and quaternary. The scaffold for this multiscale organization of the molecule arises at the secondary level, where the fundamental structural elements are the molecule's various hydrogen bonds. This leads to several recognizable domains of protein structure and nucleic acid structure, including such secondary-structure features as alpha helixes and beta sheets for proteins, and hairpin loops, bulges, and internal loops for nucleic acids.
The terms primary, secondary, tertiary, and quaternary structure were introduced by Kaj Ulrik Linderstrøm-Lang in his 1951 Lane Medical Lectures at Stanford University.
Primary structure
The primary structure of a biopolymer is the exact specification of its atomic composition and the chemical bonds connecting those atoms (including stereochemistry). For a typical unbranched, un-crosslinked biopolymer (such as a molecule of a typical intracellular protein, or of DNA or RNA), the primary structure is equivalent to specifying the sequence of its monomeric subunits, such as amino acids or nucleotides.
The primary structure of a protein is reported starting from the amino N-terminus to the carboxyl C-terminus, while the primary structure of DNA or RNA molecule is known as the nucleic acid sequence reported from the 5' end to the 3' end.
The nucleic acid sequence refers to the exact sequence of nucleotides that comprise the whole molecule. Often, the primary structure encodes sequence motifs that are of functional importance. Some examples of such motif
Document 2:::
Atomic spacing refers to the distance between the nuclei of atoms in a material. This space is extremely large compared to the size of the atomic nucleus, and is related to the chemical bonds which bind atoms together. In solid materials, the atomic spacing is described by the bond lengths of its atoms. In ordered solids, the atomic spacing between two bonded atoms is generally around a few ångströms (Å), which is on the order of 10⁻¹⁰ meters. However, in very low density gases (for example, in outer space) the average distance between atoms can be as large as a meter. In this case, the atomic spacing does not refer to bond length.
The atomic spacing of crystalline structures is usually determined by passing an electromagnetic wave of known frequency through the material, and using the laws of diffraction to determine its atomic spacing. The atomic spacing of amorphous materials (such as glass) varies substantially between different pairs of atoms, therefore diffraction cannot be used to accurately determine atomic spacing. In this case, the average bond length is a common way of expressing the distance between its atoms.
Example
Bond length can be determined between different elements in molecules by using the atomic radii of the atoms. Carbon bonds with itself to form two covalent network solids. Diamond's C–C bond length is √3·a/4 ≈ 0.154 nm, since a_diamond ≈ 0.357 nm, while graphite's C–C bond length is a/√3 ≈ 0.142 nm, since a_graphite ≈ 0.246 nm. Although both bonds are between the same pair of elements they can have different bond lengths.
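The arithmetic above is short enough to check directly; here is a minimal sketch using the lattice constants quoted in the text.

```python
# Nearest-neighbour C-C distances from the lattice constants quoted above.
import math

a_diamond = 0.357   # nm, cubic lattice constant of diamond
a_graphite = 0.246  # nm, in-plane lattice constant of graphite

d_diamond = math.sqrt(3) * a_diamond / 4  # C-C bond length in diamond
d_graphite = a_graphite / math.sqrt(3)    # C-C bond length in graphite

print(f"diamond  C-C ~ {d_diamond:.3f} nm")   # ~0.155 nm (0.154 with rounding)
print(f"graphite C-C ~ {d_graphite:.3f} nm")  # ~0.142 nm
```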
Document 3:::
The linear molecular geometry describes the geometry around a central atom bonded to two other atoms (or ligands) placed at a bond angle of 180°. Linear organic molecules, such as acetylene (C2H2), are often described by invoking sp orbital hybridization for their carbon centers.
According to the VSEPR model (Valence Shell Electron Pair Repulsion model), linear geometry occurs at central atoms with two bonded atoms and zero or three lone pairs (AX2 or AX2E3) in the AXE notation. Neutral molecules with linear geometry include beryllium fluoride (BeF2) with two single bonds, carbon dioxide (CO2) with two double bonds, and hydrogen cyanide (HCN) with one single and one triple bond. The most important linear molecule with more than three atoms is acetylene (C2H2), in which each of its carbon atoms is considered to be a central atom with a single bond to one hydrogen and a triple bond to the other carbon atom. Linear anions include azide (N3−) and thiocyanate (SCN−), and a linear cation is the nitronium ion (NO2+).
Linear geometry also occurs in AX2E3 molecules, such as xenon difluoride (XeF2) and the triiodide ion (I3−) with one iodide bonded to the two others. As described by the VSEPR model, the five valence electron pairs on the central atom form a trigonal bipyramid in which the three lone pairs occupy the less crowded equatorial positions and the two bonded atoms occupy the two axial positions at the opposite ends of an axis, forming a linear molecule.
See also
AXE method
Molecular geometry
Document 4:::
This is a list of topics in molecular biology. See also index of biochemistry articles.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Molecular geometry is the three-dimensional arrangement of atoms in a what?
A. molecule
B. nucleus
C. DNA
D. genes
Answer:
|
|
sciq-5500
|
multiple_choice
|
The three main rock types are igneous, metamorphic, and what?
|
[
"silicate",
"crystalline",
"basalt",
"sedimentary"
] |
D
|
Relevant Documents:
Document 0:::
In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects.
Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting.
Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete.
Study
Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the
Document 1:::
Mineral tests are methods that help identify the type of a mineral. They are used widely in mineralogy, hydrocarbon exploration and general mapping. Over 4,000 mineral types are known, each with its own sub-classes. Elements make minerals and minerals make rocks, so testing minerals in the lab and in the field is essential for understanding the history of a rock, including its zonation, metamorphic history, the processes involved and the other minerals present.
The following tests are used on specimen and thin sections through polarizing microscope.
Color
The color of the mineral. This is not mineral-specific; quartz, for example, can occur in almost any color and shape and within many rock types.
Streak
The color of the mineral's powder, found by rubbing the mineral across an unglazed porcelain streak plate. This is more reliable than surface color but still not always mineral-specific.
Lustre
This is the way light reflects from the mineral's surface. A mineral can be metallic (shiny) or non-metallic (not shiny).
Transparency
The way light travels through minerals. The mineral can be transparent (clear), translucent (cloudy) or opaque (none).
Specific gravity
The ratio of the weight of the mineral to the weight of an equal volume of water.
Mineral habitat
The shape of the crystal and habitat.
Magnetism
Magnetic or nonmagnetic; can be tested using a magnet or a compass. Note that not all iron-bearing minerals are magnetic (pyrite, for example, is not).
Cleavage
The number, behaviour and size of the cleavage planes, and the way cracks and fractures develop in the mineral.
UV fluorescence
Many minerals glow when put under a UV light.
Radioactivity
Is the mineral radioactive or non-radioactive? This is measured by a Geiger counter.
Taste
This is not recommended. Is the mineral salty, bitter or does it have no taste?
Bite Test
This is not recommended. It involves biting a mineral to see if it is generally soft or hard. This was used in early gold exploration to tell the difference between pyrite (fool's gold, hard) and gold (soft).
Hardness
The Mohs Hardn
Document 2:::
Materials science has shaped the development of civilizations since the dawn of mankind. Better materials for tools and weapons have allowed mankind to spread and conquer, and advancements in material processing like steel and aluminum production continue to impact society today. Historians have regarded materials as so important an aspect of civilizations that entire periods of time have been defined by the predominant material used (Stone Age, Bronze Age, Iron Age). For most of recorded history, control of materials had been through alchemy or empirical means at best. The study and development of chemistry and physics assisted the study of materials, and eventually the interdisciplinary field of materials science emerged from the fusion of these studies. The history of materials science is the study of how different materials were used and developed through the history of Earth and how those materials affected the culture of the peoples of the Earth. The term "Silicon Age" is sometimes used to refer to the modern period of history during the late 20th to early 21st centuries.
Prehistory
In many cases, different cultures leave their materials as the only records, which anthropologists can use to define the existence of such cultures. The progressive use of more sophisticated materials allows archeologists to characterize and distinguish between peoples. This is partially due to the major material of use in a culture and to its associated benefits and drawbacks. Stone-Age cultures were limited by which rocks they could find locally and by which they could acquire by trading. The use of flint around 300,000 BCE is sometimes considered the beginning of the use of ceramics. The use of polished stone axes marks a significant advance, because a much wider variety of rocks could serve as tools.
The innovation of smelting and casting metals in the Bronze Age started to change the way that cultures developed and interacted with each other. Starting around 5,500 BCE,
Document 3:::
Microcline (KAlSi3O8) is an important igneous rock-forming tectosilicate mineral. It is a potassium-rich alkali feldspar. Microcline typically contains minor amounts of sodium. It is common in granite and pegmatites. Microcline forms during slow cooling of orthoclase; it is more stable at lower temperatures than orthoclase. Sanidine is a polymorph of alkali feldspar stable at yet higher temperature. Microcline may be clear, white, pale-yellow, brick-red, or green; it is generally characterized by cross-hatch twinning that forms as a result of the transformation of monoclinic orthoclase into triclinic microcline.
The chemical compound name is potassium aluminium silicate, and it is known as E number reference E555.
Geology
Microcline may be chemically the same as monoclinic orthoclase, but because it belongs to the triclinic crystal system, the prism angle is slightly less than right angles; hence the name "microcline" from the Greek "small slope." It is a fully ordered triclinic modification of potassium feldspar and is dimorphous with orthoclase. Microcline is identical to orthoclase in many physical properties, and can be distinguished by x-ray or optical examination. When viewed under a polarizing microscope, microcline exhibits a minute multiple twinning which forms a grating-like structure that is unmistakable.
Perthite is either microcline or orthoclase with thin lamellae of exsolved albite.
Amazon stone, or amazonite, is a green variety of microcline. It is not found anywhere in the Amazon Basin, however. The Spanish explorers who named it apparently confused it with another green mineral from that region.
The largest documented single crystals of microcline were found in Devils Hole Beryl Mine, Colorado, US and measured ~50x36x14 m. This could be one of the largest crystals of any material found so far.
Microcline is commonly used for the manufacturing of porcelain.
As food additive
The chemical compound name is potassium aluminium silicate, and it
Document 4:::
Ringwoodite is a high-pressure phase of Mg2SiO4 (magnesium silicate) formed at the high temperatures and pressures of the Earth's mantle transition zone. It may also contain iron and hydrogen. It is polymorphous with the olivine phase forsterite (a magnesium silicate).
Ringwoodite is notable for being able to contain hydroxide ions (oxygen and hydrogen atoms bound together) within its structure. In this case two hydroxide ions usually take the place of a magnesium ion and two oxide ions.
Combined with evidence of its occurrence deep in the Earth's mantle, this suggests that there is from one to three times the world ocean's equivalent of water in the mantle transition zone from 410 to 660 km deep.
This mineral was first identified in the Tenham meteorite in 1969, and is inferred to be present in large quantities in the Earth's mantle.
Olivine, wadsleyite, and ringwoodite are polymorphs found in the upper mantle of the earth. At depths greater than about 660 km, other minerals, including some with the perovskite structure, are stable. The properties of these minerals determine many of the properties of the mantle.
Ringwoodite was named after the Australian earth scientist Ted Ringwood (1930–1993), who studied polymorphic phase transitions in the common mantle minerals olivine and pyroxene at pressures equivalent to depths as great as about 600 km.
Characteristics
Ringwoodite is polymorphous with forsterite, Mg2SiO4, and has a spinel structure. Spinel group minerals crystallize in the isometric system with an octahedral habit. Olivine is most abundant in the upper mantle, above about 410 km; the olivine polymorphs wadsleyite and ringwoodite are thought to dominate the transition zone of the mantle, a zone present from about 410 to 660 km depth.
Ringwoodite is thought to be the most abundant mineral phase in the lower part of Earth's transition zone. The physical and chemical property of this mineral partly determine properties of the mantle at those depths. The pressure r
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The three main rock types are igneous, metamorphic, and what?
A. silicate
B. crystalline
C. basalt
D. sedimentary
Answer:
|
|
sciq-6099
|
multiple_choice
|
What are are organisms that feed on small pieces of organic matter?
|
[
"difference feeders",
"bottom feeders",
"waste feeders",
"deposit feeders"
] |
D
|
Relevant Documents:
Document 0:::
Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food.
Classification of consumer types
The standard categorization
Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores; omnivores eat both meat and plants. An extensive classification of consumer categories based on a list of feeding behaviors exists.
The Getz categorization
Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken down plant and animal) matter. It also distinguishes between consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage.
In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology. Thus a bestivore, such as a cat, preys on live animal
Document 1:::
The trophic level of an organism is the position it occupies in a food web. A food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finish with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a food "web". Ecological communities with higher biodiversity form more complex trophic paths.
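To make the "number of steps from the start of the chain" idea concrete, here is a minimal sketch that assigns trophic levels along a simple linear food chain; the chain itself is a hypothetical example.

```python
# Minimal sketch: trophic level = position in a linear food chain, counting
# from the primary producer at level 1. The chain below is hypothetical.
chain = ["algae", "zooplankton", "small fish", "tuna", "shark"]
levels = {organism: position for position, organism in enumerate(chain, start=1)}
print(levels)
# {'algae': 1, 'zooplankton': 2, 'small fish': 3, 'tuna': 4, 'shark': 5}
```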
The word trophic derives from the Greek τροφή (trophē) referring to food or nourishment.
History
The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman).
Overview
The three basic ways in which organisms get food are as producers, consumers, and decomposers.
Producers (autotrophs) are typically plants or algae. Plants and algae do not usually eat other organisms, but pull nutrients from the soil or the ocean and manufacture their own food using photosynthesis. For this reason, they are called primary producers. In this way, it is energy from the sun that usually powers the base of the food chain. An exception occurs in deep-sea hydrothermal ecosystems, where there is no sunlight. Here primary producers manufacture food through a process called chemosynthesis.
Consumers (heterotrophs) are species that cannot manufacture their own food and need to consume other organisms. Animals that eat primary producers (like plants) are called herbivores. Animals that eat other animals are called carnivores, and animals that eat both plants and other animals are called omnivores.
Decomposers (detritivores) break down dead plant and animal material and wastes and release it again as energy and nutrients into
Document 2:::
The soil food web is the community of organisms living all or part of their lives in the soil. It describes a complex living system in the soil and how it interacts with the environment, plants, and animals.
Food webs describe the transfer of energy between species in an ecosystem. While a food chain examines one, linear, energy pathway through an ecosystem, a food web is more complex and illustrates all of the potential pathways. Much of this transferred energy comes from the sun. Plants use the sun’s energy to convert inorganic compounds into energy-rich, organic compounds, turning carbon dioxide and minerals into plant material by photosynthesis. Plant flowers exude energy-rich nectar above ground and plant roots exude acids, sugars, and ectoenzymes into the rhizosphere, adjusting the pH and feeding the food web underground.
Plants are called autotrophs because they make their own energy; they are also called producers because they produce energy available for other organisms to eat. Heterotrophs are consumers that cannot make their own food. In order to obtain energy they eat plants or other heterotrophs.
Above ground food webs
In above ground food webs, energy moves from producers (plants) to primary consumers (herbivores) and then to secondary consumers (predators). The phrase "trophic level" refers to the different levels or steps in the energy pathway. In other words, the producers, consumers, and decomposers are the main trophic levels. This chain of energy transfer from one species to another can continue several more times, but eventually ends. At the end of the food chain, decomposers such as bacteria and fungi break down dead plant and animal material into simple nutrients.
Methodology
The nature of soil makes direct observation of food webs difficult. Since soil organisms range in size from less than 0.1 mm (nematodes) to greater than 2 mm (earthworms) there are many different ways to extract them. Soil samples are often taken using a metal
Document 3:::
Microfauna (Ancient Greek mikros "small" + Neo-Latin fauna "animal") refers to microscopic animals and organisms that exhibit animal-like qualities. Microfauna are represented in the animal kingdom (e.g. nematodes, small arthropods) and the protist kingdom (i.e. protozoans).
Habitat
Microfauna are present in every habitat on Earth. They fill essential roles as decomposers and food sources for lower trophic levels, and are necessary to drive processes within larger organisms.
Role
One particular example of the role of microfauna can be seen in soil, where they are important in the cycling of nutrients in ecosystems. Soil microfauna are capable of digesting just about any organic substance, and some inorganic substances. These organisms are often essential links in the food chain between primary producers and larger species. For example, zooplankton, such as foraminifera, are widespread microscopic animals and protists that feed on algae and detritus in the ocean.
Microfauna also aid in digestion and other processes in larger organisms.
Cryptozoa
The microfauna are the least understood of soil life, due to their small size and great diversity. Many microfauna are members of the so-called cryptozoa, animals that remain undescribed by science. Out of the estimated 10-20 million animal species in the world, only 1.8 million have been given scientific names, and many of the remaining millions are likely microfauna, much of it from the tropics.
Phyla
Notable phyla include:
Microscopic arthropods, including dust mites, spider mites, and some crustaceans such as copepods and certain cladocera.
Tardigrades ("water bears")
Rotifers, which are filter feeders that are usually found in fresh water.
Some nematode species
Many loricifera, including the recently discovered anaerobic species, which spend their entire lives in an anoxic environment.
See also
Fauna
Megafauna
Mesofauna
Document 4:::
A bacterivore is an organism which obtains energy and nutrients primarily or entirely from the consumption of bacteria. The term is most commonly used to describe free-living, heterotrophic, microscopic organisms such as nematodes as well as many species of amoeba and numerous other types of protozoans, but some macroscopic invertebrates are also bacterivores, including sponges, polychaetes, and certain molluscs and arthropods. Many bacterivorous organisms are adapted for generalist predation on any species of bacteria, but not all bacteria are easily digested; the spores of some species, such as Clostridium perfringens, will never be prey because of their cellular attributes.
In microbiology
Bacterivores can sometimes be a problem in microbiology studies. For instance, when scientists seek to assess microorganisms in samples from the environment (such as freshwater), the samples are often contaminated with microscopic bacterivores, which interfere with the growing of bacteria for study. Adding cycloheximide can inhibit the growth of bacterivores without affecting some bacterial species, but it has also been shown to inhibit the growth of some anaerobic prokaryotes.
Examples of bacterivores
Caenorhabditis elegans
Ceriodaphnia quadrangula
Diaphanosoma brachyura
Vorticella
Paramecium
Many species of protozoa
Many benthic meiofauna, e.g. gastrotrichs
Springtails
Many sponges, e.g. Aplysina aerophoba
Many crustaceans
Many polychaetes, e.g. feather duster worms
Some marine molluscs
See also
Microbivory
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are are organisms that feed on small pieces of organic matter?
A. difference feeders
B. bottom feeders
C. waste feeders
D. deposit feeders
Answer:
|
|
sciq-388
|
multiple_choice
|
Water gains and loses what more slowly than does land, affecting seasonal conditions inland and on the coast?
|
[
"humidity",
"heat",
"minerals",
"volume"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
1. increases
2. decreases
3. stays the same
4. Impossible to tell/need more information
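Although the question above is meant to be answered without computation, a minimal numeric check is easy to write. The sketch assumes a reversible adiabatic expansion of an ideal diatomic gas (γ = 1.4); the initial state and expansion ratio are hypothetical.

```python
# Reversible adiabatic expansion of an ideal gas: T * V**(gamma - 1) is
# constant, so a larger volume forces a lower temperature. Values hypothetical.

gamma = 1.4            # heat-capacity ratio of a diatomic ideal gas
T1, V1 = 300.0, 1.0    # initial temperature (K) and volume (arbitrary units)
V2 = 2.0               # volume after expansion

T2 = T1 * (V1 / V2) ** (gamma - 1)
print(f"T2 = {T2:.1f} K")  # -> T2 = 227.4 K, i.e. the temperature decreases
```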
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 3:::
Crop coefficients are properties of plants used in predicting evapotranspiration (ET). The most basic crop coefficient, Kc, is simply the ratio of ET observed for the crop studied over that observed for the well calibrated reference crop under the same conditions.
Potential evapotranspiration (PET) is the evaporation and transpiration that could potentially occur if a field of the crop had an ideal, unlimited water supply. RET is the reference ET, often denoted ET0.
Even in agricultural crops, where ideal conditions are approximated as much as is practical, plants are not always growing (and therefore transpiring) at their theoretical potential. Plants have growth stages and states of health induced by a variety of environmental conditions.
RET usually represents the PET of the reference crop's most active growth. Kc then becomes a function or series of values specific to the crop of interest through its growing season. These can be quite elaborate in the case of certain maize varieties, but tend to use a trapezoidal or leaf area index (LAI) curve for common crop or vegetation canopies.
Stress coefficients, Ks, account for diminished ET due to specific stress factors. These are often assumed to combine by multiplication.
Water stress is the most ubiquitous stress factor, often denoted as Kw. Stress coefficients tend to be functions ranging between 0 and 1. The simplest are linear, but thresholds are appropriate for some toxicity responses. Crop coefficients can exceed 1 when the crop evapotranspiration exceeds that of RET.
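Putting these pieces together, here is a minimal sketch of the multiplicative coefficient chain, assuming crop ET is estimated as Kc times a water-stress coefficient Kw times the reference ET0; the coefficient values and reference ET below are hypothetical.

```python
# Minimal sketch, assuming the multiplicative form ET_crop = Kc * Kw * ET0
# described above. All numeric values are hypothetical.

def crop_et(et0: float, kc: float, kw: float = 1.0) -> float:
    """Crop ET from reference ET (mm/day), crop coefficient and stress coefficient."""
    return kc * kw * et0

et0 = 6.0   # reference evapotranspiration, mm/day
kc = 1.15   # mid-season crop coefficient (can exceed 1, as noted above)
kw = 0.8    # water-stress coefficient, between 0 (full stress) and 1 (no stress)

print(f"Estimated crop ET: {crop_et(et0, kc, kw):.2f} mm/day")  # -> 5.52 mm/day
```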
Document 4:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Water gains and loses what more slowly than does land, affecting seasonal conditions inland and on the coast?
A. humidity
B. heat
C. minerals
D. volume
Answer:
|
|
sciq-4229
|
multiple_choice
|
How many stars are in our solar system?
|
[
"one",
"none",
"two",
"three"
] |
A
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 1:::
Astronomy education or astronomy education research (AER) refers both to the methods currently used to teach the science of astronomy and to an area of pedagogical research that seeks to improve those methods. Specifically, AER includes systematic techniques honed in science and physics education to understand what and how students learn about astronomy and determine how teachers can create more effective learning environments.
Education is important to astronomy as it impacts both the recruitment of future astronomers and the appreciation of astronomy by citizens and politicians who support astronomical research. Astronomy has been taught throughout much of recorded human history, and has practical application in timekeeping and navigation. Teaching astronomy contributes to an understanding of physics and the origin of the world around us, a shared cultural background, and a sense of wonder and exploration. It includes education of the general public through planetariums, books, and instructive presentations, plus programs and tools for amateur astronomy, and University-level degree programs for professional astronomers. Astronomy organizations provide educational functions and societies in about 100 nation states around the world.
In schools, particularly at the collegiate level, astronomy is aligned with physics and the two are often combined to form a Department of Physics and Astronomy. Some parts of astronomy education overlap with physics education; however, astronomy education has its own arenas, practitioners, journals, and research. This can be seen in the identified 20-year lag between the emergence of AER and physics education research. The body of research in this field is available through electronic sources such as the Searchable Annotated Bibliography of Education Research (SABER) and the American Astronomical Society's database of the contents of their journal "Astronomy Education Review" (see link below).
The National Aeronautics and
Document 2:::
The Catalog of Nearby Habitable Systems (HabCat) is a catalogue of star systems which conceivably have habitable planets. The list was developed by scientists Jill Tarter and Margaret Turnbull under the auspices of Project Phoenix, a part of SETI.
The list was based upon the Hipparcos Catalogue (which has 118,218 stars) by filtering on a wide range of star system features. The current list contains 17,129 "HabStars".
External links
Target Selection for SETI: 1. A Catalog of Nearby Habitable Stellar Systems, Turnbull, Tarter, submitted 31 Oct 2002 (last accessed 19 Jan 2010)
Target selection for SETI. II. Tycho-2 dwarfs, old open clusters, and the nearest 100 stars, by Turnbull and Tarter, (last accessed 19 Jan 2010)
HabStars - an article on the NASA website
Astronomical catalogues of stars
Search for extraterrestrial intelligence
Exoplanet catalogues
Document 3:::
The Inverness Campus is an area in Inverness, Scotland. 5.5 hectares of the site have been designated as an enterprise area for life sciences by the Scottish Government. This designation is intended to encourage research and development in the field of life sciences, by providing incentives to locate at the site.
The enterprise area is part of a larger site, over 200 acres, which will house Inverness College, Scotland's Rural College (SRUC), the University of the Highlands and Islands, a health science centre and sports and other community facilities. The purpose built research hub will provide space for up to 30 staff and researchers, allowing better collaboration.
The Highland Science Academy will be located on the site, a collaboration formed by Highland Council, employers and public bodies. The academy will be aimed towards assisting young people to gain the necessary skills to work in the energy, engineering and life sciences sectors.
History
The site was identified in 2006. Work started to develop the infrastructure on the site in early 2012. A virtual tour was made available in October 2013 to help mark Doors Open Day.
The construction had reached the halfway stage in May 2014, meaning that it was on track to open its doors to its first students in August 2015.
In May 2014, work was due to commence on a building designed to provide office space and laboratories as part of the campus's "life science" sector. Morrison Construction have been appointed to undertake the building work.
Scotland's Rural College (SRUC) will be able to relocate their Inverness-based activities to the Campus. SRUC's research centre for Comparative Epidemiology and Medicine, and Agricultural Business Consultancy services could co-locate with UHI where their activities have complementary themes.
By the start of 2017, there were more than 600 people working at the site.
In June 2021, a new bridge opened connecting Inverness Campus to Inverness Shopping Park. It crosses the Aberdeen
Document 4:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How many stars are in our solar system?
A. one
B. none
C. two
D. three
Answer:
|
|
sciq-3868
|
multiple_choice
|
What manages the material and energy resources of the cell?
|
[
"cell wall",
"metabolism",
"respiration",
"nucleus"
] |
B
|
Relevant Documents:
Document 0:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, a double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 1:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than those of later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
Document 2:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and of movement.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i
Document 3:::
This lecture, named in memory of Keith R. Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year.
Lecturers
Source: ASCB
See also
List of biology awards
Document 4:::
The School of Biological Sciences is a School within the Faculty Biology, Medicine and Health at The University of Manchester. Biology at University of Manchester and its precursor institutions has gone through a number of reorganizations (see History below), the latest of which was the change from a Faculty of Life Sciences to the current School.
Academics
Research
The School, though unitary for teaching, is divided into a number of broadly defined sections for research purposes, these sections consist of: Cellular Systems, Disease Systems, Molecular Systems, Neuro Systems and Tissue Systems.
Research in the School is structured into multiple research groups including the following themes:
Cell-Matrix Research (part of the Wellcome Trust Centre for Cell-Matrix Research)
Cell Organisation and Dynamics
Computational and Evolutionary Biology
Developmental Biology
Environmental Research
Eye and Vision Sciences
Gene Regulation and Cellular Biotechnology
History of Science, Technology and Medicine
Immunology and Molecular Microbiology
Molecular Cancer Studies
Neurosciences (part of the University of Manchester Neurosciences Research Institute)
Physiological Systems & Disease
Structural and Functional Systems
The School hosts a number of research centres, including: the Manchester Centre for Biophysics and Catalysis, the Wellcome Trust Centre for Cell-Matrix Research, the Centre of Excellence in Biopharmaceuticals, the Centre for the History of Science, Technology and Medicine, the Centre for Integrative Mammalian Biology, and the Healing Foundation Centre for Tissue Regeneration. The Manchester Collaborative Centre for Inflammation Research is a joint endeavour with the Faculty of Medical and Human Sciences of Manchester University and industrial partners.
Research Assessment Exercise (2008)
The faculty entered research into the units of assessment (UOA) for Biological Sciences and Pre-clinical and Human Biological Sciences. In Biological Sciences 20% of outputs
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What manages the material and energy resources of the cell?
A. cell wall
B. metabolism
C. respiration
D. nucleus
Answer:
|
|
sciq-7551
|
multiple_choice
|
What distant and extraordinarily energetic objects now seem to be early stages of galactic evolution with a supermassive black-hole-devouring material?
|
[
"quasars",
"neutrinos",
"stars",
"pulsars"
] |
A
|
Relevant Documents:
Document 0:::
Types
Quasar
Supermassive black hole
Hypercompact stellar system (hypothetical object organized around a supermassive black hole)
Intermediate-mass black holes and candidates
Cigar Galaxy (Messier 82, NGC 3034)
GCIRS 13E
HLX-1
M82 X-1
Messier 15 (NGC 7078)
Messier 110 (NGC 205)
Sculptor Galaxy (NGC 253)
Triangulum Galaxy (Messier 33, NGC 598)
Document 1:::
An intermediate-mass black hole (IMBH) is a class of black hole with mass in the range 10^2–10^5 solar masses: significantly more than stellar black holes but less than the 10^5–10^9 solar mass supermassive black holes. Several IMBH candidate objects have been discovered in the Milky Way galaxy and others nearby, based on indirect gas cloud velocity and accretion disk spectra observations of various evidentiary strength.
Observational evidence
The gravitational wave signal GW190521, which occurred on 21 May 2019 at 03:02:29 UTC, and was published on 2 September 2020, resulted from the merger of two black holes, weighing 85 and 65 solar masses, with the resulting black hole weighing 142 solar masses, and 8 solar masses being radiated away as gravitational waves.
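As a quick arithmetic check of the mass bookkeeping above (values exactly as quoted; the radiated energy equals the pre-merger mass minus the remnant mass), a minimal sketch:

```python
# GW190521 mass bookkeeping: the gravitational-wave energy radiated equals
# the mass deficit between the merging pair and the remnant, E = dm * c^2.
m1, m2, m_remnant = 85.0, 65.0, 142.0   # solar masses, as quoted above

radiated = m1 + m2 - m_remnant          # 8 solar masses
print(f"mass-energy radiated as gravitational waves: {radiated:.0f} M_sun")
```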
Before that, the strongest evidence for IMBHs comes from a few low-luminosity active galactic nuclei. Due to their activity, these galaxies almost certainly contain accreting black holes, and in some cases the black hole masses can be estimated using the technique of reverberation mapping. For instance, the spiral galaxy NGC 4395 at a distance of about 4 Mpc appears to contain a black hole with mass of about solar masses.
The largest up-to-date sample of intermediate-mass black holes includes 305 candidates selected by sophisticated analysis of one million optical spectra of galaxies collected by the Sloan Digital Sky Survey. X-ray emission was detected from 10 of these candidates confirming their classification as IMBH.
Some ultraluminous X-ray sources (ULXs) in nearby galaxies are suspected to be IMBHs, with masses of a hundred to a thousand solar masses. The ULXs are observed in star-forming regions (e.g., in starburst galaxy M82), and are seemingly associated with young star clusters which are also observed in these regions. However, only a dynamical mass measurement from the analysis of the optical spectrum of the companion star can unveil the presence of an IMBH as the compact accretor of the ULX.
A
Document 2:::
A brightest cluster galaxy (BCG) is defined as the brightest galaxy in a cluster of galaxies. BCGs include the most massive galaxies in the universe. They are generally elliptical galaxies which lie close to the geometric and kinematical center of their host galaxy cluster, hence at the bottom of the cluster potential well. They are also generally coincident with the peak of the cluster X-ray emission.
Formation scenarios for BCGs include:
Cooling flow—Star formation from the central cooling flow in high density cooling centers of X-ray cluster halos.
The study of accretion populations in BCGs has cast doubt over this theory and astronomers have seen no evidence of cooling flows in radiative cooling clusters. The two remaining theories exhibit healthier prospects.
Galactic cannibalism—Galaxies sink to the center of the cluster due to dynamical friction and tidal stripping.
Galactic merger—Rapid galactic mergers between several galaxies take place during cluster collapse.
It is possible to differentiate the cannibalism model from the merging model by considering the formation period of the BCGs. In the cannibalism model, there are numerous small galaxies present in the evolved cluster, whereas in the merging model, a hierarchical cosmological model is expected due to the collapse of clusters. It has been shown that the orbit decay of cluster galaxies is not effective enough to account for the growth of BCGs.
The merging model is now generally accepted as the most likely one, but recent observations are at odds with some of its predictions. For example, it has been found that the stellar mass of BCGs was assembled much earlier than the merging model predicts.
BCGs are divided into various classes of galaxies: giant ellipticals (gE), D galaxies and cD galaxies. cD and D galaxies both exhibit an extended diffuse envelope surrounding an elliptical-like nucleus akin to regular elliptical galaxies. The light profiles of BCGs are often described by a Sersic surface
Document 3:::
The Morphs collaboration was a coordinated study to determine the morphologies of galaxies in distant clusters and to investigate the evolution of galaxies as a function of environment and epoch. Eleven clusters were examined and a detailed ground-based and space-based study was carried out.
The project was begun in 1997 based upon the earlier observations by two groups using data from images derived from the pre-refurbished Hubble Space Telescope. It was a collaboration of Alan Dressler and Augustus Oemler, Jr., at Observatory of the Carnegie Institute of Washington, Warrick J. Couch at the University of New South Wales, Richard Ellis at Caltech, Bianca Poggianti at the University of Padua, Amy Barger at the University of Hawaii's Institute for Astronomy, Harvey Butcher at ASTRON, and Ray M. Sharples and Ian Smail at Durham University. Results were published through 2000.
The collaboration sought answers to the differences in the origins of the various galaxy types — elliptical, lenticular, and spiral. The studies found that elliptical galaxies were the oldest and formed from the violent merger of other galaxies about two to three billion years after the Big Bang. Star formation in elliptical galaxies ceased about that time. On the other hand, new stars are still forming in the spiral arms of spiral galaxies. Lenticular galaxies (SO) are intermediate between the first two. They contain structures similar to spiral arms, but devoid of the gas and new stars of the spiral galaxies. Lenticular galaxies are the prevalent form in rich galaxy clusters, which suggests that spirals may be transformed into lenticular galaxies as time progresses. The exact process may be related to high galactic density, or to the total mass in a rich cluster's central core. The Morphs collaboration found that one of the principal mechanisms of this transformation involves the interaction among spiral galaxies, as they fall toward the core of the cluster.
The Inamori Magellan Areal Camer
Document 4:::
The Sołtan argument is an astrophysical theory outlined in 1982 by Polish astronomer Andrzej Sołtan. It maintains that if quasars were powered by accretion onto a supermassive black hole, then such supermassive black holes must exist in our local universe as "dead" quasars.
History
As early as 1969, Donald Lynden-Bell wrote a paper suggesting that "dead quasars" were found at the center of the Milky Way and nearby galaxies by arguing that given the quasar-number counts, luminosities, distances, and the efficiency of accretion into a "Schwarzschild throat" through the last stable circular orbit (note that the term black hole had been coined only two years earlier and was still gaining popular usage), roughly 10^10 quasars existed in the observable universe. This number density of "dead quasars" was attributed by Lynden-Bell to high mass-to-light ratio objects found at the center of galaxies. This is essentially the Sołtan argument, though the direct connection between black hole masses and quasar luminosity functions is missing. In the paper, Lynden-Bell also suggests some radical ideas that are now fully integrated into modern understanding of astrophysics including the model that accretion disks are supported by magnetic fields, that extragalactic cosmic rays are accelerated in them, and he estimates to within an order of magnitude the masses of several of the closest supermassive black holes including the ones in the Milky Way, M31, M32, M81, M82, M87, and NGC 4151.
Thirteen years later, Sołtan explicitly showed that the luminosity ($L$) of quasars was due to the accretion rate of mass onto black holes, given by:
$L = \epsilon \dot{M} c^{2}$
where
$\epsilon$ is the efficiency factor
$\dot{M}$ is the time rate of mass falling into the black hole
$c$ is the speed of light
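A minimal numerical sketch of this relation, solved for the accretion rate (the luminosity and the efficiency value of 0.1 are illustrative assumptions, not figures from Sołtan's paper):

```python
# Soltan relation L = epsilon * Mdot * c^2, rearranged to the accretion
# rate Mdot = L / (epsilon * c^2). All numbers are illustrative.
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
YEAR = 3.156e7     # seconds per year

def accretion_rate(luminosity_watts, efficiency=0.1):
    """Mass accretion rate (kg/s) implied by a given radiated luminosity."""
    return luminosity_watts / (efficiency * C**2)

# A luminous quasar radiating ~1e39 W (about 1e12 solar luminosities):
mdot = accretion_rate(1e39)
print(f"implied accretion rate ~ {mdot * YEAR / M_SUN:.1f} M_sun per year")
```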
Given the number of observed quasars at various redshifts, he was able to derive an integrated energy density due to quasar output. Since observers on Earth are flux limited, there are always more quasars that exist than are observed and thus the energy density
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What distant and extraordinarily energetic objects now seem to be early stages of galactic evolution with a supermassive black-hole-devouring material?
A. quasars
B. neutrinos
C. stars
D. pulsars
Answer:
|
|
sciq-5702
|
multiple_choice
|
A hot-water heating system uses what type of energy to heat water?
|
[
"atmospheric energy",
"negative energy",
"thermal energy",
"potential energy"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
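For this particular example, the answer ("decreases") can be verified from the reversible-adiabatic relation T·V^(γ−1) = constant; a brief sketch with arbitrary illustrative numbers:

```python
# Reversible adiabatic process in an ideal gas: T * V**(gamma - 1) is
# constant, so an expansion (V2 > V1) necessarily lowers the temperature.
gamma = 1.4            # heat-capacity ratio for a diatomic ideal gas
T1, V1 = 300.0, 1.0    # initial temperature (K) and volume (arbitrary units)
V2 = 2.0               # gas expands to twice its initial volume

T2 = T1 * (V1 / V2) ** (gamma - 1)
print(f"T2 = {T2:.1f} K")   # ~227 K < 300 K, so the temperature decreases
```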
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include :
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering : Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 3:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 4:::
Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary educ
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A hot-water heating system uses what type of energy to heat water?
A. atmospheric energy
B. negative energy
C. thermal energy
D. potential energy
Answer:
|
|
sciq-10328
|
multiple_choice
|
Fusion is another nuclear process that can be used to produce energy. in this process, smaller nuclei are combined to make larger nuclei, with an accompanying release of this?
|
[
"cells",
"mineral",
"energy",
"food"
] |
C
|
Relevant Documents:
Document 0:::
Hybrid nuclear fusion–fission (hybrid nuclear power) is a proposed means of generating power by use of a combination of nuclear fusion and fission processes.
The basic idea is to use high-energy fast neutrons from a fusion reactor to trigger fission in non-fissile fuels like U-238 or Th-232. Each neutron can trigger several fission events, multiplying the energy released by each fusion reaction hundreds of times. As the fission fuel is not fissile, there is no self-sustaining chain reaction from fission. This would not only make fusion designs more economical in power terms, but also be able to burn fuels that were not suitable for use in conventional fission plants, even their nuclear waste.
In general terms, the hybrid is similar in concept to the fast breeder reactor, which uses a compact high-energy fission core in place of the hybrid's fusion core. Another similar concept is the accelerator-driven subcritical reactor, which uses a particle accelerator to provide the neutrons instead of nuclear reactions.
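A back-of-envelope sketch of the energy multiplication described above (the 17.6 MeV and ~200 MeV reaction yields are standard textbook values; the number of fissions induced per fusion neutron is a free, purely illustrative parameter):

```python
# Rough energy bookkeeping for a fusion-fission hybrid: each D-T fusion
# yields ~17.6 MeV plus one fast neutron; in a fission blanket that
# neutron can go on to trigger several ~200 MeV fission events.
E_FUSION = 17.6     # MeV per D-T fusion
E_FISSION = 200.0   # MeV per fission of a heavy nucleus

def gain(fissions_per_fusion_neutron):
    """Total energy per fusion event, relative to the bare fusion yield."""
    return (E_FUSION + fissions_per_fusion_neutron * E_FISSION) / E_FUSION

for n in (1, 10, 50):
    print(f"{n:3d} fissions per fusion neutron -> energy gain x{gain(n):.0f}")
```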
History
The concept dates to the 1950s, and was strongly advocated by Hans Bethe during the 1970s. At that time the first powerful fusion experiments were being built, but it would still be many years before they could be economically competitive. Hybrids were proposed as a way of greatly accelerating their market introduction, producing energy even before the fusion systems reached break-even. However, detailed studies of the economics of the systems suggested they could not compete with existing fission reactors.
The idea was abandoned and lay dormant until the 2000s, when the continued delays in reaching break-even led to a brief revival around 2009. These studies generally concentrated on the nuclear waste disposal aspects of the design, as opposed to the production of energy. The concept has seen cyclical interest since then, based largely on the success or failure of more conventional solutions like the Yucca Mountain nuclear waste repository
Another
Document 1:::
The iron peak is a local maximum in the vicinity of Fe (Cr, Mn, Fe, Co and Ni) on the graph of the abundances of the chemical elements.
For elements lighter than iron on the periodic table, nuclear fusion releases energy. For iron, and for all of the heavier elements, nuclear fusion consumes energy. Chemical elements up to the iron peak are produced in ordinary stellar nucleosynthesis, with the alpha elements being particularly abundant. Some heavier elements are produced by less efficient processes such as the r-process and s-process. Elements with atomic numbers close to iron are produced in large quantities in supernovae due to explosive oxygen and silicon fusion, followed by radioactive decay of nuclei such as nickel-56. On average, heavier elements are less abundant in the universe, but some of those near iron are comparatively more abundant than would be expected from this trend.
Binding energy
A graph of the nuclear binding energy per nucleon for all the elements shows a sharp increase to a peak near nickel and then a slow decrease to heavier elements. Increasing values of binding energy represent energy released when a collection of nuclei is rearranged into another collection for which the sum of nuclear binding energies is higher. Light elements such as hydrogen release large amounts of energy (a big increase in binding energy) when combined to form heavier nuclei. Conversely, heavy elements such as uranium release energy when converted to lighter nuclei through alpha decay and nuclear fission. The synthesis of nickel-56 is the most thermodynamically favorable in the cores of high-mass stars. Although iron-58 and nickel-62 have even higher (per nucleon) binding energy, their synthesis cannot be achieved in large quantities, because the required number of neutrons is typically not available in the stellar nuclear material, and they cannot be produced in the alpha process (their mass numbers are not multiples of 4).
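The shape of this curve can be reproduced from the semi-empirical (Weizsäcker) mass formula; a sketch with one common set of coefficients (in MeV) showing binding energy per nucleon peaking in the iron–nickel region (the formula is known to be inaccurate for very light nuclei such as helium-4):

```python
# Semi-empirical mass formula: binding energy (MeV) of a nucleus with mass
# number A and proton number Z. Coefficients are one common textbook fit.
def binding_energy(A, Z):
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18
    B = (aV * A                              # volume term
         - aS * A ** (2 / 3)                 # surface term
         - aC * Z * (Z - 1) / A ** (1 / 3)   # Coulomb repulsion
         - aA * (A - 2 * Z) ** 2 / A)        # asymmetry term
    if A % 2 == 0:                           # pairing: even-even vs odd-odd
        B += aP / A ** 0.5 if Z % 2 == 0 else -aP / A ** 0.5
    return B

for name, A, Z in [("He-4", 4, 2), ("O-16", 16, 8), ("Fe-56", 56, 26),
                   ("Ni-62", 62, 28), ("U-238", 238, 92)]:
    print(f"{name:6s} B/A = {binding_energy(A, Z) / A:5.2f} MeV/nucleon")
```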
See also
Abundances of the elements (data page)
Document 2:::
Atomic energy or energy of atoms is energy carried by atoms. The term originated in 1903 when Ernest Rutherford began to speak of the possibility of atomic energy. H. G. Wells popularized the phrase "splitting the atom", before discovery of the atomic nucleus.
Atomic energy includes:
Nuclear binding energy, the energy required to split a nucleus of an atom.
Nuclear potential energy, the potential energy of the particles inside an atomic nucleus.
Nuclear reaction, a process in which nuclei or nuclear particles interact, resulting in products different from the initial ones; see also nuclear fission and nuclear fusion.
Radioactive decay, the set of various processes by which unstable atomic nuclei (nuclides) emit subatomic particles.
The energy of inter-atomic or chemical bonds, which holds atoms together in compounds.
Atomic energy is the source of nuclear power, which uses sustained nuclear fission to generate heat and electricity. It is also the source of the explosive force of an atomic bomb.
Document 3:::
Nuclear binding energy in experimental physics is the minimum energy that is required to disassemble the nucleus of an atom into its constituent protons and neutrons, known collectively as nucleons. The binding energy for stable nuclei is always a positive number, as the nucleus must gain energy for the nucleons to move apart from each other. Nucleons are attracted to each other by the strong nuclear force. In theoretical nuclear physics, the nuclear binding energy is considered a negative number. In this context it represents the energy of the nucleus relative to the energy of the constituent nucleons when they are infinitely far apart. Both the experimental and theoretical views are equivalent, with slightly different emphasis on what the binding energy means.
The mass of an atomic nucleus is less than the sum of the individual masses of the free constituent protons and neutrons. The difference in mass can be calculated by the Einstein equation, $E = mc^{2}$, where E is the nuclear binding energy, c is the speed of light, and m is the difference in mass. This 'missing mass' is known as the mass defect, and represents the energy that was released when the nucleus was formed.
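A worked example of this calculation for helium-4 (masses are rounded standard-table values in atomic mass units; 1 u is equivalent to about 931.494 MeV):

```python
# Binding energy of He-4 from its mass defect, E = (delta m) * c^2,
# expressed via the energy equivalent of the atomic mass unit.
M_PROTON = 1.007276       # u
M_NEUTRON = 1.008665      # u
M_HE4_NUCLEUS = 4.001506  # u (nuclear mass, i.e. without electrons)
U_TO_MEV = 931.494        # MeV per u

delta_m = 2 * M_PROTON + 2 * M_NEUTRON - M_HE4_NUCLEUS
print(f"mass defect    = {delta_m:.6f} u")
print(f"binding energy = {delta_m * U_TO_MEV:.1f} MeV")  # ~28.3 MeV
```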
The term "nuclear binding energy" may also refer to the energy balance in processes in which the nucleus splits into fragments composed of more than one nucleon. If new binding energy is available when light nuclei fuse (nuclear fusion), or when heavy nuclei split (nuclear fission), either process can result in release of this binding energy. This energy may be made available as nuclear energy and can be used to produce electricity, as in nuclear power, or in a nuclear weapon. When a large nucleus splits into pieces, excess energy is emitted as gamma rays and the kinetic energy of various ejected particles (nuclear fission products).
These nuclear binding energies and forces are on the order of one million times greater than the electron binding energies of light atoms like hydrogen.
Introduction
Nucl
Document 4:::
A fusion torch is a technique for utilizing the high-temperature plasma of a fusion reactor to break apart other materials (especially waste materials) and convert them into a few reusable and saleable elements. It was invented in 1968 by Bernard J. Eastlund and William C. Gough while they were program managers of the controlled thermonuclear research program of the United States Atomic Energy Commission (AEC). The basic concept was to impinge the plasma leaking from fusion reactors onto solids or liquids, vaporizing, dissociating and ionizing the materials, then separating the resulting elements into separate bins for collection. Other applications of fusion plasmas such as generation of UV and optical light, and generation of hydrogen fuel, were also described in their associated 1969 paper.
How it works
The process began with a tokamak, a doughnut-shaped magnetic "bottle", containing plasma and unwanted material. This combination would result in a pool of electrons and nuclei which would overflow the tokamak and be transferred into an outlet. The plasma then passes over a series of metal plates of differing temperatures, arranged in descending order. Atoms pass over any plate hotter than their own boiling point; when an atom eventually encounters a plate cooler than its boiling point, it sticks to it. The plates thus work as a distillation system that sorts the plasma into its constituent elements. These pure elements can then be reused.
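A toy model of this plate-distillation step (the boiling points are approximate handbook values; the plate temperatures, plate count and choice of elements are invented for illustration):

```python
# Toy fusion-torch separation stage: constituents pass over plates arranged
# in descending temperature and condense on the first plate cooler than
# their own boiling point.
BOILING_POINT_K = {   # approximate boiling points at atmospheric pressure
    "tungsten": 6203, "iron": 3134, "copper": 2835, "zinc": 1180,
}
PLATE_TEMPS_K = [5000, 3000, 2000, 1000, 300]  # descending, illustrative

def condensation_plate(element):
    """Index of the first plate cold enough for the element to stick to."""
    for i, temp in enumerate(PLATE_TEMPS_K):
        if temp < BOILING_POINT_K[element]:
            return i
    return None  # would remain gaseous past every plate

for element in BOILING_POINT_K:
    print(f"{element:8s} condenses on plate {condensation_plate(element)}")
```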
1969 paper
In the paper "The Fusion Torch – Closing the Cycle from Use to Reuse", Bernard J. Eastlund and William C. Gough defined population (food), entropy (resources, energy, pollution), and war (human needs and behavior) as three traps that could hamper the advancement of mankind.
In terms of energy needs they estimated that by the year 2000 they would need 140,000 megawatts of electrical capacity. They also speculated
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Fusion is another nuclear process that can be used to produce energy. in this process, smaller nuclei are combined to make larger nuclei, with an accompanying release of this?
A. cells
B. mineral
C. energy
D. food
Answer:
|
|
ai2_arc-730
|
multiple_choice
|
Which action is most likely a learned behavior?
|
[
"A bird builds a nest.",
"A spider spins a web.",
"A lion cub practices its hunting skills.",
"An earthworm moves away from bright light."
] |
C
|
Relevant Documents:
Document 0:::
Instinct is the inherent inclination of a living organism towards a particular complex behaviour, containing innate (inborn) elements. The simplest example of an instinctive behaviour is a fixed action pattern (FAP), in which a very short to medium length sequence of actions, without variation, are carried out in response to a corresponding clearly defined stimulus.
Any behaviour is instinctive if it is performed without being based upon prior experience (that is, in the absence of learning), and is therefore an expression of innate biological factors. Sea turtles, newly hatched on a beach, will instinctively move toward the ocean. A marsupial climbs into its mother's pouch upon being born. Other examples include animal fighting, animal courtship behaviour, internal escape functions, and the building of nests. Though an instinct is defined by its invariant innate characteristics, details of its performance can be changed by experience; for example, a dog can improve its listening skills by practice.
Instincts are inborn complex patterns of behaviour that exist in most members of the species, and should be distinguished from reflexes, which are simple responses of an organism to a specific stimulus, such as the contraction of the pupil in response to bright light or the spasmodic movement of the lower leg when the knee is tapped. The absence of volitional capacity must not be confused with an inability to modify fixed action patterns. For example, people may be able to modify a stimulated fixed action pattern by consciously recognizing the point of its activation and simply stop doing it, whereas animals without a sufficiently strong volitional capacity may not be able to disengage from their fixed action patterns, once activated.
Instinctual behaviour in humans has been studied.
Early theorists
Jean Henri Fabre
Jean Henri Fabre (1823–1915) is said to be the first person to study small animals (that weren't birds) and insects, and he specifically specialized i
Document 1:::
Social learning refers to learning that is facilitated by observation of, or interaction with, another animal or its products. Social learning has been observed in a variety of animal taxa, such as insects, fish, birds, reptiles, amphibians and mammals (including primates).
Social learning is fundamentally different from individual learning, or asocial learning, which involves learning the appropriate responses to an environment through experience and trial and error. Though asocial learning may result in the acquisition of reliable information, it is often costly for the individual to obtain. Therefore, individuals that are able to capitalize on other individuals' self-acquired information may experience a fitness benefit. However, because social learning relies on the actions of others rather than direct contact, it can be unreliable. This is especially true in variable environments, where appropriate behaviors may change frequently. Consequently, social learning is most beneficial in stable environments, in which predators, food, and other stimuli are not likely to change rapidly.
When social learning is actively facilitated by an experienced individual, it is classified as teaching. Mechanisms of inadvertent social learning relate primarily to psychological processes in the observer, whereas teaching processes relate specifically to activities of the demonstrator. Studying the mechanisms of information transmission allows researchers to better understand how animals make decisions by observing others' behaviors and obtaining information.
Social learning mechanisms
Social learning occurs when one individual influences the learning of another through various processes. In local enhancement and opportunity providing, the attention of an individual is drawn to a specific location or situation. In stimulus enhancement, emulation, observational conditioning, the observer learns the relationship between a stimulus and a result but does not directly copy the behavio
Document 2:::
Structures built by non-human animals, often called animal architecture, are common in many species. Examples of animal structures include termite mounds, ant hills, wasp and beehives, burrow complexes, beaver dams, elaborate nests of birds, and webs of spiders.
Often, these structures incorporate sophisticated features such as temperature regulation, traps, bait, ventilation, special-purpose chambers and many other features. They may be created by individuals or complex societies of social animals with different forms carrying out specialized roles. These constructions may arise from complex building behaviour of animals such as in the case of night-time nests for chimpanzees, from inbuilt neural responses, which feature prominently in the construction of bird songs, or triggered by hormone release as in the case of domestic sows, or as emergent properties from simple instinctive responses and interactions, as exhibited by termites, or combinations of these. The process of building such structures may involve learning and communication, and in some cases, even aesthetics. Tool use may also be involved in building structures by animals.
Building behaviour is common in many non-human mammals, birds, insects and arachnids. It is also seen in a few species of fish, reptiles, amphibians, molluscs, urochordates, crustaceans, annelids and some other arthropods. It is virtually absent from all the other animal phyla.
Functions
Animals create structures primarily for three reasons:
to create protected habitats, i.e. homes.
to catch prey and for foraging, i.e. traps.
for communication between members of the species (intra-specific communication), i.e. display.
Animals primarily build habitat for protection from extreme temperatures and from predation. Constructed structures raise physical problems which need to be resolved, such as humidity control or ventilation, which increases the complexity of the structure. Over time, through evolution, animals use shelters for ot
Document 3:::
Imitative learning is a type of social learning whereby new behaviors are acquired via imitation. Imitation aids in communication, social interaction, and the ability to modulate one's emotions to account for the emotions of others, and is "essential for healthy sensorimotor development and social functioning". The ability to match one's actions to those observed in others occurs in humans and animals; imitative learning plays an important role in humans in cultural development. Imitative learning is different from observational learning in that it requires a duplication of the behaviour exhibited by the model, whereas observational learning can occur when the learner observes an unwanted behaviour and its subsequent consequences and as a result learns to avoid that behaviour.
Imitative learning in animals
On the most basic level, research performed by A.L. Saggerson, David N. George, and R.C. Honey showed that pigeons were able to learn a basic process that would lead to the delivery of a reward by watching a demonstrator pigeon. A demonstrator pigeon was trained to peck a panel in response to one stimulus (e.g. a red light) and hop on the panel in response to a second stimulus (e.g. a green light). After proficiency in this task was established in the demonstrator pigeon, other learner pigeons were placed in a video-monitored observation chamber. After every second observed trial, these learner pigeons were then individually placed in the demonstrator pigeon's box and presented the same test. The learner pigeons displayed competent performance on the task, and thus it was concluded that the learner pigeons had formed a response-outcome association while observing. However, the researchers noted that an alternative interpretation of these results could be that the learner pigeons had instead acquired outcome-response associations that guided their behavior and that further testing was needed to establish if this was a valid alternative.
A similar study was
Document 4:::
Behavior (American English) or behaviour (British English) is the range of actions and mannerisms made by individuals, organisms, systems or artificial entities in some environment. These systems can include other systems or organisms as well as the inanimate physical environment. It is the computed response of the system or organism to various stimuli or inputs, whether internal or external, conscious or subconscious, overt or covert, and voluntary or involuntary.
Taking a behavior informatics perspective, a behavior consists of actor, operation, interactions, and their properties. This can be represented as a behavior vector.
Models
Biology
Although disagreement exists as to how to precisely define behavior in a biological context, one common interpretation based on a meta-analysis of scientific literature states that "behavior is the internally coordinated responses (actions or inactions) of whole living organisms (individuals or groups) to internal or external stimuli".
A broader definition of behavior, applicable to plants and other organisms, is similar to the concept of phenotypic plasticity. It describes behavior as a response to an event or environment change during the course of the lifetime of an individual, differing from other physiological or biochemical changes that occur more rapidly, and excluding changes that are a result of development (ontogeny).
Behaviors can be either innate or learned from the environment.
Behavior can be regarded as any action of an organism that changes its relationship to its environment. Behavior provides outputs from the organism to the environment.
Human behavior
The endocrine system and the nervous system likely influence human behavior. Complexity in the behavior of an organism may be correlated to the complexity of its nervous system. Generally, organisms with more complex nervous systems have a greater capacity to learn new responses and thus adjust their behavior.
Animal behavior
Ethology is the scientifi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which action is most likely a learned behavior?
A. A bird builds a nest.
B. A spider spins a web.
C. A lion cub practices its hunting skills.
D. An earthworm moves away from bright light.
Answer:
|
|
sciq-4409
|
multiple_choice
|
What happens to the volume of a balloon when you add moles of gas to it by blowing up?
|
[
"changes randomly",
"stays the same",
"decreases",
"increases"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set $Q$ of concepts, skills, or topics. Each feasible state of knowledge about $Q$ is then a subset of $Q$; the set of
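A small sketch of this structure (the three-skill domain and its states are invented for illustration): a knowledge space is conventionally required to contain the empty state and the full domain, and to be closed under union, which can be checked directly:

```python
# Toy knowledge space over a domain Q of three skills. A knowledge space
# must contain the empty state and Q itself and be closed under union.
from itertools import combinations

Q = frozenset({"counting", "addition", "multiplication"})
STATES = {
    frozenset(),
    frozenset({"counting"}),
    frozenset({"counting", "addition"}),
    Q,
}

def is_knowledge_space(domain, states):
    if frozenset() not in states or domain not in states:
        return False
    # closure under union: the union of any two states is again a state
    return all(a | b in states for a, b in combinations(states, 2))

print(is_knowledge_space(Q, STATES))  # True
```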
Document 2:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
An example question (question 4) appears in the original as a figure and is not reproduced here.
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 3:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student was planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average score for Molecular was 630, while for Ecological it was 591.
On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 4:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What happens to the volume of a balloon when you add moles of gas to it by blowing up?
A. changes randomly
B. stays the same
C. decreases
D. increases
Answer:
|
|
sciq-2613
|
multiple_choice
|
What kind of hormones are released into the environment for communication between animals of the same species?
|
[
"reactions",
"pheromones",
"peptides",
"hormones"
] |
B
|
Relevant Documents:
Document 0:::
The following is a list of hormones found in Homo sapiens. Spelling is not uniform for many hormones. For example, current North American and international usage uses estrogen and gonadotropin, while British usage retains the Greek digraph in oestrogen and favours the earlier spelling gonadotrophin.
Hormone listing
Steroid
Document 1:::
Pulsatile secretion is a biochemical phenomenon observed in a wide variety of cell and tissue types, in which chemical products are secreted in a regular temporal pattern. The most common cellular products observed to be released in this manner are intercellular signaling molecules such as hormones or neurotransmitters. Examples of hormones that are secreted pulsatilely include insulin, thyrotropin, TRH, gonadotropin-releasing hormone (GnRH) and growth hormone (GH). In the nervous system, pulsatility is observed in oscillatory activity from central pattern generators. In the heart, pacemakers are able to work and secrete in a pulsatile manner. A pulsatile secretion pattern is critical to the function of many hormones in order to maintain the delicate homeostatic balance necessary for essential life processes, such as development and reproduction. Variations of the concentration in a certain frequency can be critical to hormone function, as evidenced by the case of GnRH agonists, which cause functional inhibition of the receptor for GnRH due to profound downregulation in response to constant (tonic) stimulation. Pulsatility may function to sensitize target tissues to the hormone of interest and upregulate receptors, leading to improved responses. This heightened response may have served to improve the animal's fitness in its environment and promote its evolutionary retention.
Pulsatile secretion in its various forms is observed in:
Hypothalamic-pituitary-gonadal axis (HPG) related hormones
Glucocorticoids
Insulin
Growth hormone
Parathyroid hormone
Neuroendocrine Pulsatility
Nervous system control over hormone release is based in the hypothalamus, from which the neurons that populate the paraventricular and arcuate nuclei originate. These neurons project to the median eminence, where they secrete releasing hormones into the hypophysial portal system connecting the hypothalamus with the pituitary gland. There, they dictate endocrine function via the four Hyp
Document 2:::
Heterocrine glands (or composite glands) are glands which function as both exocrine and endocrine glands. They exhibit a unique and diverse secretory function, releasing proteins and non-proteinaceous compounds as endocrine secretions into the bloodstream and as exocrine secretions into ducts, thereby bridging the realms of internal and external communication within the body. This duality allows them to serve crucial roles in regulating various physiological processes and maintaining homeostasis. Such glands include the gonads (testes and ovaries), the pancreas and the salivary glands.
Pancreas releases digestive enzymes into the small intestine via ducts (exocrine) and secretes insulin and glucagon into the bloodstream (endocrine) to regulate blood sugar level. Testes produce sperm, which is released through ducts (exocrine), and they also secrete testosterone into the bloodstream (endocrine). Similarly, ovaries release ova through ducts (exocrine) and produce estrogen and progesterone (endocrine). Salivary glands secrete saliva through ducts to aid in digestion (exocrine) and produce epidermal growth factor and insulin-like growth factor (endocrine).
Anatomy
Heterocrine glands typically have a complex structure that enables them to produce and release different types of secretions. The two primary components of these glands are:
Endocrine component: Heterocrine glands produce hormones, which are chemical messengers that travel through the bloodstream to target organs or tissues. These hormones play a vital role in regulating numerous physiological processes, such as metabolism, growth, and the immune response.
Exocrine component: In addition to their endocrine function, heterocrine glands secrete substances directly into ducts or cavities, which can be released through various body openings. These exocrine secretions can include enzymes, mucus, and other substances that aid in digestion, lubrication, or protection.
Characteristics and Func
Document 3:::
Hormonal imprinting (HI) is a phenomenon which takes place at the first encounter between a hormone and its developing receptor in the critical periods of life (in unicellular organisms, during the whole life) and determines the later signal transduction capacity of the cell. The most important period in mammals is the perinatal one; however, this system can be imprinted at weaning, at puberty and, in the case of continuously dividing cells, during the whole life. Faulty imprinting is caused by drugs, environmental pollutants and other hormone-like molecules present in excess at the critical periods, with lifelong receptorial, morphological, biochemical and behavioral consequences. HI is transmitted to hundreds of progeny generations in unicellular organisms and (as has been proved) to a few generations in mammals as well.
Document 4:::
A pheromone () is a secreted or excreted chemical factor that triggers a social response in members of the same species. Pheromones are chemicals capable of acting like hormones outside the body of the secreting individual, to affect the behavior of the receiving individuals. There are alarm pheromones, food trail pheromones, sex pheromones, and many others that affect behavior or physiology. Pheromones are used by many organisms, from basic unicellular prokaryotes to complex multicellular eukaryotes. Their use among insects has been particularly well documented. In addition, some vertebrates, plants and ciliates communicate by using pheromones. The ecological functions and evolution of pheromones are a major topic of research in the field of chemical ecology.
Background
The portmanteau word "pheromone" was coined by Peter Karlson and Martin Lüscher in 1959, based on the Greek φέρω phérō ('I carry') and ὁρμων hórmōn ('stimulating'). Pheromones are also sometimes classified as ecto-hormones. They were researched earlier by various scientists, including Jean-Henri Fabre, Joseph A. Lintner, Adolf Butenandt, and ethologist Karl von Frisch, who called them various names, such as "alarm substances". These chemical messengers are transported outside of the body and affect neurocircuits, including the autonomous nervous system, with hormone- or cytokine-mediated physiological changes, inflammatory signaling, immune system changes and/or behavioral change in the recipient. Karlson and Lüscher proposed the term to describe chemical signals from conspecifics that elicit innate behaviors, soon after the German biochemist Adolf Butenandt had characterized the first such chemical, bombykol, a chemically well-characterized pheromone released by the female silkworm to attract mates.
Categorization by function
Aggregation
Aggregation pheromones function in mate choice, overcoming host resistance by mass attack, and defense against predators. A group of individuals at one location is refe
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What kind of hormones are released into the environment for communication between animals of the same species?
A. reactions
B. pheromones
C. peptides
D. hormones
Answer:
|
|
sciq-2458
|
multiple_choice
|
What process involves the flow of heat from warmer objects to cooler objects?
|
[
"conduction",
"activation",
"convection",
"radiation"
] |
A
|
Relevant Documents:
Document 0:::
Heat transfer is a discipline of thermal engineering that concerns the generation, use, conversion, and exchange of thermal energy (heat) between physical systems. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Engineers also consider the transfer of mass of differing chemical species (mass transfer in the form of advection), either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system.
Heat conduction, also called diffusion, is the direct microscopic exchanges of kinetic energy of particles (such as molecules) or quasiparticles (such as lattice waves) through the boundary between two systems. When an object is at a different temperature from another body or its surroundings, heat flows so that the body and the surroundings reach the same temperature, at which point they are in thermal equilibrium. Such spontaneous heat transfer always occurs from a region of high temperature to another region of lower temperature, as described in the second law of thermodynamics.
Heat convection occurs when the bulk flow of a fluid (gas or liquid) carries its heat through the fluid. All convective processes also move heat partly by diffusion, as well. The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". The former process is often called "forced convection." In this case, the fluid is forced to flow by use of a pump, fan, or other mechanical means.
Thermal radiation occurs through a vacuum or any transparent medium (solid or fluid or gas). It is the transfer of energy by means of photons or electromagnetic waves governed by the same laws.
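As a rough numerical illustration of these three mechanisms (a sketch added alongside the quoted article, not part of it; the material values, temperatures, and function names are all assumptions), the following Python snippet compares the heat flux through a flat wall by conduction (Fourier's law), convection (Newton's law of cooling), and radiation (the Stefan-Boltzmann law):

# Minimal sketch: heat flux (W/m^2) for the three transfer mechanisms.
# All numbers below are illustrative assumptions, not values from the text.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def conduction_flux(k, t_hot, t_cold, thickness):
    """Fourier's law for a plane wall: q = k * (T_hot - T_cold) / L."""
    return k * (t_hot - t_cold) / thickness

def convection_flux(h, t_surface, t_fluid):
    """Newton's law of cooling: q = h * (T_s - T_inf)."""
    return h * (t_surface - t_fluid)

def radiation_flux(emissivity, t_surface, t_surroundings):
    """Grey-body exchange with large surroundings: q = eps*sigma*(Ts^4 - Tsur^4)."""
    return emissivity * SIGMA * (t_surface**4 - t_surroundings**4)

if __name__ == "__main__":
    # A 0.1 m brick wall (k ~ 0.7 W/m.K) between 310 K and 290 K:
    print(f"conduction: {conduction_flux(0.7, 310.0, 290.0, 0.1):.1f} W/m^2")
    # Natural convection (h ~ 10 W/m^2.K) from a 310 K surface to 290 K air:
    print(f"convection: {convection_flux(10.0, 310.0, 290.0):.1f} W/m^2")
    # Radiation from the same surface (emissivity ~ 0.9) to 290 K surroundings:
    print(f"radiation:  {radiation_flux(0.9, 310.0, 290.0):.1f} W/m^2")

For these assumed values all three fluxes come out within the same order of magnitude, which is typical near room temperature.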
Overview
Heat
Document 1:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include :
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering : Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 2:::
Conduction is the process by which heat is transferred from the hotter end to the colder end of an object. The ability of the object to conduct heat is known as its thermal conductivity, and is denoted k.
Heat spontaneously flows along a temperature gradient (i.e. from a hotter body to a colder body). For example, heat is conducted from the hotplate of an electric stove to the bottom of a saucepan in contact with it. In the absence of an opposing external driving energy source, within a body or between bodies, temperature differences decay over time, and thermal equilibrium is approached, temperature becoming more uniform.
In conduction, the heat flow is within and through the body itself. In contrast, in heat transfer by thermal radiation, the transfer is often between bodies, which may be separated spatially. Heat can also be transferred by a combination of conduction and radiation. In solids, conduction is mediated by the combination of vibrations and collisions of molecules, propagation and collisions of phonons, and diffusion and collisions of free electrons. In gases and liquids, conduction is due to the collisions and diffusion of molecules during their random motion. Photons in this context do not collide with one another, and so heat transport by electromagnetic radiation is conceptually distinct from heat conduction by microscopic diffusion and collisions of material particles and phonons. But the distinction is often not easily observed unless the material is semi-transparent.
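To make the decay of temperature differences toward equilibrium concrete, here is a minimal sketch (an added illustration; the diffusivity, grid spacing, and time step are assumed values) of the one-dimensional heat equation dT/dt = α d²T/dx², solved by explicit finite differences for an insulated rod whose two halves start at different temperatures:

# Sketch: explicit finite-difference solution of the 1D heat equation
#   dT/dt = alpha * d2T/dx2
# for an insulated rod whose two halves start at different temperatures.
# alpha and all grid parameters are illustrative assumptions.

def diffuse(temps, alpha, dx, dt, steps):
    """Advance the temperature profile `steps` times (insulated ends)."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme is unstable for r > 0.5"
    t = list(temps)
    for _ in range(steps):
        new = t[:]
        for i in range(1, len(t) - 1):
            new[i] = t[i] + r * (t[i + 1] - 2 * t[i] + t[i - 1])
        new[0], new[-1] = new[1], new[-2]  # zero-flux (insulated) boundaries
        t = new
    return t

# Rod of 20 cells: left half hot (350 K), right half cold (290 K).
profile = [350.0] * 10 + [290.0] * 10
for n in (0, 50, 500, 5000):
    out = diffuse(profile, alpha=1e-4, dx=0.01, dt=0.4, steps=n)
    print(n, f"spread = {max(out) - min(out):.1f} K")

The printed spread shrinks toward zero as the number of steps grows, mirroring the spontaneous approach to thermal equilibrium described above.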
In the engineering sciences, heat transfer includes the processes of thermal radiation, convection, and sometimes mass transfer. Usually, more than one of these processes occurs in a given situation.
Overview
On a microscopic scale, conduction occurs within a body considered as being stationary; this means that the kinetic and potential energies of the bulk motion of the body are separately accounted for. Internal energy diffuses as rapidly moving or vibrating atoms and molecule
Document 3:::
Heat transfer physics describes the kinetics of energy storage, transport, and energy transformation by principal energy carriers: phonons (lattice vibration waves), electrons, fluid particles, and photons. Heat is energy stored in temperature-dependent motion of particles including electrons, atomic nuclei, individual atoms, and molecules. Heat is transferred to and from matter by the principal energy carriers. The state of energy stored within matter, or transported by the carriers, is described by a combination of classical and quantum statistical mechanics. Energy is also transformed (converted) among the various carriers.
The heat transfer processes (or kinetics) are governed by the rates at which various related physical phenomena occur, such as (for example) the rate of particle collisions in classical mechanics. These various states and kinetics determine the heat transfer, i.e., the net rate of energy storage or transport. Governing these processes from the atomic level (atom or molecule length scale) to macroscale are the laws of thermodynamics, including conservation of energy.
Introduction
Heat is thermal energy associated with temperature-dependent motion of particles. The macroscopic energy equation for an infinitesimal volume used in heat transfer analysis is

$$\nabla \cdot \mathbf{q} = -\rho c_p \frac{\partial T}{\partial t} + \sum_{i,j} \dot{s}_{i,j},$$

where $\mathbf{q}$ is the heat flux vector, $-\rho c_p\,\partial T/\partial t$ is the temporal change of internal energy ($\rho$ is density, $c_p$ is specific heat capacity at constant pressure, $T$ is temperature and $t$ is time), and $\dot{s}_{i,j}$ is the energy conversion to and from thermal energy ($i$ and $j$ are for principal energy carriers). So, the terms represent energy transport, storage and transformation. The heat flux vector $\mathbf{q}$ is composed of three macroscopic fundamental modes, which are conduction ($\mathbf{q}_k = -k \nabla T$, $k$: thermal conductivity), convection ($\mathbf{q}_u = \rho c_p \mathbf{u} T$, $\mathbf{u}$: velocity), and radiation ($\mathbf{q}_r = 2\pi \int_0^\infty \int_0^\pi I_{\mathrm{ph},\omega}\, \cos\theta \, \sin\theta \,\hat{\mathbf{s}}\, d\theta\, d\omega$, $\omega$: angular frequency, $\theta$: polar angle, $I_{\mathrm{ph},\omega}$: spectral, directional radiation intensity, $\hat{\mathbf{s}}$: unit vector), i.e., $\mathbf{q} = \mathbf{q}_k + \mathbf{q}_u + \mathbf{q}_r$.
Once states and kinetics of the energy conversion and thermophysical properties are known, the fate of heat
Document 4:::
Thermal engineering is a specialized sub-discipline of mechanical engineering that deals with the movement and transfer of heat energy. The energy can be transferred between two mediums or transformed into other forms of energy. A thermal engineer will have knowledge of thermodynamics and the process to convert generated energy from thermal sources into chemical, mechanical, or electrical energy. Many process plants use machines whose components rely on heat transfer in some way, and many plants use heat exchangers in their operations. A thermal engineer must allow the proper amount of energy to be transferred for correct use: too much and the components could fail; too little and the system will not function at all. Thermal engineers must have an understanding of economics and of the components that they will be servicing or interacting with. Components that a thermal engineer could work with include heat exchangers, heat sinks, bi-metal strips, radiators and many more. Systems that require a thermal engineer include boilers, heat pumps, water pumps, engines, and more.
Part of being a thermal engineer is to improve a current system and make it more efficient than the current system. Many industries employ thermal engineers, some main ones are the automotive manufacturing industry, commercial construction, and Heating Ventilation and Cooling industry. Job opportunities for a thermal engineer are very broad and promising.
Thermal engineering may be practiced by mechanical engineers and chemical engineers.
One or more of the following disciplines may be involved in solving a particular thermal engineering problem: Thermodynamics, Fluid mechanics, Heat transfer, or
Mass transfer.
One branch of knowledge used frequently in thermal engineering is that of thermofluids.
Applications
Boiler design
Combustion engines
Cooling systems
Cooling of computer chips
Heat exchangers
HVAC
Process Fired Heaters
Refrigeration Systems
Compressed Air Sy
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What process involves the flow of heat from warmer objects to cooler objects?
A. conduction
B. activation
C. convection
D. radiation
Answer:
|
|
sciq-10416
|
multiple_choice
|
What type of mammals includes opossums, kangaroos, and koalas?
|
[
"monotremes",
"marsupials",
"crustaceans",
"arthropods"
] |
B
|
Relevant Documents:
Document 0:::
In zoology, mammalogy is the study of mammals – a class of vertebrates with characteristics such as homeothermic metabolism, fur, four-chambered hearts, and complex nervous systems. Mammalogy has also been known as "mastology," "theriology," and "therology." The catalogue of mammal species on Earth is constantly growing and is currently set at 6,495 species, including some recently extinct. There are 5,416 living mammal species identified on Earth, and roughly 1,251 have been newly discovered since 2006. The major branches of mammalogy include natural history, taxonomy and systematics, anatomy and physiology, ethology, ecology, and management and control. The approximate salary of a mammalogist varies from $20,000 to $60,000 a year, depending on their experience. Mammalogists are typically involved in activities such as conducting research, managing personnel, and writing proposals.
Mammalogy branches off into other taxonomically-oriented disciplines such as primatology (study of primates), and cetology (study of cetaceans). Like other studies, mammalogy is also a part of zoology which is also a part of biology, the study of all living things.
Research purposes
Mammalogists have stated that there are multiple reasons for the study and observation of mammals. Knowing how mammals contribute to or thrive in their ecosystems gives knowledge of the ecology behind it. Mammals are often used in business industries and agriculture, and are kept as pets. Studying mammals' habitats and sources of energy has aided in their survival. The domestication of some small mammals has also helped in discovering several different diseases, viruses, and cures.
Mammalogist
A mammalogist studies and observes mammals. In studying mammals, they can observe their habitats, their contributions to the ecosystem, their interactions, and their anatomy and physiology. A mammalogist can work on a broad variety of topics within the realm of mammals. A mammalogist on average can make roughly $58,000 a year. This dep
Document 1:::
Mammals
Alces alces (Linnaeus, 1758) — Eurasian elk, moose
Axis axis (Erxleben, 1777) — chital, axis deer
Bison bison (Linnaeus, 1758) — American bison, buffalo
Capreolus capreolus (Linnaeus, 1758) — European roe deer, roe deer
Caracal caracal (Schreber, 1776) — caracal
Chinchilla chinchilla (Lichtenstein, 1829) — short-tailed chinchilla
Chiropotes chiropotes (Humboldt, 1811) — red-backed bearded saki
Cricetus cricetus (Linnaeus, 1758) — common hamster, European hamster
Crocuta crocuta (Erxleben, 1777) — spotted hyena
Dama dama (Linnaeus, 1758) — European fallow deer
Feroculus feroculus (Kelaart, 1850) — Kelaart's long-clawed shrew
Gazella gazella (Pallas, 1766) — mountain gazelle
Genetta genetta (Linnaeus, 1758) — common genet
Gerbillus gerbillus (Olivier, 1801) — lesser Egyptian gerbil
Giraffa giraffa (von Schreber, 1784) — southern giraffe
Glis glis (Linnaeus, 1766) — European edible dormouse, European fat dormouse
Gorilla gorilla (Savage, 1847) — western gorilla
Gulo gulo (Linnaeus, 1758) — wolverine
Hoolock hoolock (Harlan, 1834) — western hoolock gibbon
Hyaena hyaena (Linnaeus, 1758) — striped hyena
Indri indri (Gmelin, 1788) — indri
Jaculus jaculus (Linnaeus, 1758) — lesser Egyptian jerboa
Lagurus lagurus (Pallas, 1773) — steppe vole, steppe lemming
Lemmus lemmus (Linnaeus, 1758) — Norway lemming
Lutra lutra (Linnaeus, 1758) — European otter
Lynx lynx (Linnaeus, 1758) — Eurasian lynx
Macrophyllum macrophyllum (Schinz, 1821) — long-legged bat
Marmota marmota (Linnaeus, 1758) — Alpine marmot
Martes martes (Linnaeus, 1758) — European pine marten, pine marten
Meles meles (Linnaeus, 1758) — European badg
Document 2:::
The gray short-tailed opossum (Monodelphis domestica) is a small South American member of the family Didelphidae. Unlike most other marsupials, the gray short-tailed opossum does not have a true pouch. The scientific name Monodelphis is derived from Greek and means "single womb" (referring to the lack of a pouch) and the Latin word domestica which means "domestic" (chosen because of the species' habit of entering human dwellings). It was the first marsupial to have its genome sequenced. The gray short-tailed opossum is used as a research model in science, and is also frequently found in the exotic pet trade. It is also known as the Brazilian opossum, rainforest opossum and in a research setting the laboratory opossum.
Description
Gray short-tailed opossums are relatively small animals, with a superficial resemblance to voles. In the wild they have head-body length of and weigh ; males are larger than females. However, individuals kept in captivity are typically much larger, with males weighing up to . As the common name implies, the tail is proportionately shorter than in some other opossum species, ranging from . Their tails are only semi-prehensile, unlike the fully prehensile tail characteristic of the North American opossum.
The fur is greyish brown over almost the entire body, although fading to a paler shade on the underparts, and with near-white fur on the feet. Only the base of the tail has fur, the remainder being almost entirely hairless. The claws are well-developed and curved in shape, and the paws have small pads marked with fine dermal ridges. Unlike many other marsupials, females do not have a pouch. They typically possess thirteen teats, which can be retracted into the body by muscles at their base.
Distribution and habitat
The gray short-tailed opossum is found generally south of the Amazon River, in southern, central, and western Brazil. It is also found in eastern Bolivia, northern Paraguay, and in Formosa Province in northern Argentina. It in
Document 3:::
Order Artiodactyla (even-toed ungulates)
Tylopoda (camelids)
Artiofabula (ruminants, pigs, peccaries, whales, and dolphins)
Suina (pigs and peccaries)
Cetruminantia (ruminants, whales, and dolphins)
Suborder Ruminantia (antelope, buffalo, cattle, goats, sheep, deer, giraffes, and chevrotains)
Family Antilocapridae (pronghorn)
Family Bovidae, 135 species (antelope, bison, buffalo, cattle, goats, and sheep)
Family Cervidae, 55~94 species (deer, elk, and moose)
Family Giraffidae, 2 species (giraffes, okapis)
Family Moschidae, 4~7 species (musk deer)
Family Tragulidae, 6~10 species (chevrotains, or mouse deer)
Suborder Whippomorpha (aquatic or semi-aquatic even-toed ungulates)
Infraorder Acodonta
Family Hippopotamidae, 2 species (hippopotamuses)
Infraorder Cetacea (whales, dolphins, and porpoises)
Mysticeti (baleen whales)
Family Balaenidae, 2~4 species (right whales and bowhead whales)
Family Balaenopteridae, 6~9 species (rorquals)
Family Eschrichtiidae, 1 species (gray whale)
Family Neobalaenidae, 1 species (pygmy right whale)
Odontoceti (toothed whales, dolphins, and porpoises)
Superfamily Delphinoidea (dolphins, arctic whales, porpoises, and relatives)
Family Delphinidae, 38 species (dolphins, killer whales, and relatives)
Family Monodontidae, 2 species (beluga and narwhal)
Family Phocoenidae, 6 species (porpoises)
Superfamily Physeteroidea (sperm whales)
Family Kogiidae, 2 species (pygmy and dwarf sperm whales)
Family Physeteridae, 1 species (common sperm whale)
Superfamily Ziphoidea (beaked whales)
Family Ziphidae, 22 species (modern beaked whales)
Superfamily Platanistoidea (river dolphins)
Family Iniidae, 1~3 species (South American river dolphin(s))
Document 4:::
Miacis ("small point") is an extinct genus of placental mammals from clade Carnivoraformes, that lived in North America from early to middle Eocene.
Description
Miacis was five-clawed, about the size of a weasel (~30 cm), and lived on the North American continent. It retained some primitive characteristics such as low skulls, long slender bodies, long tails, and short legs. Miacis retained 44 teeth, although some reductions in this number were apparently in progress and some of the teeth were reduced in size.
The hind limbs were longer than the forelimbs, the pelvis was dog-like in form and structure, and some specialized traits were present in the vertebrae. It had retractable claws, agile joints for climbing, and binocular vision. Miacis and related forms had brains that were relatively larger than those of the creodonts, and the larger brain size as compared with body size probably reflects an increase in intelligence.
Like many other early carnivoramorphans, it was well suited for an arboreal climbing lifestyle with needle-sharp claws, limbs, and joints resembling modern carnivorans. Miacis was probably a very agile forest dweller that preyed upon smaller animals, such as small mammals, reptiles, and birds, and might also have eaten eggs and fruits.
Classification and phylogeny
Classification
History of taxonomy
Since Edward Drinker Cope first described the genus Miacis in 1872, at least twenty other species have been assigned to Miacis. However, these species share few synapomorphies other than plesiomorphic characteristics of miacids in general. This reflects the fact that Miacis has been treated as a wastebasket taxon and contains a diverse collection of species that belong to the stem group within the Carnivoraformes. Many of the species originally assigned to Miacis have since been assigned to other genera and, apart from the type species, Miacis parvivorus, the remaining species are often referred to with Miacis in quotations (e.g. "Miacis" latidens)
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of mammals includes opossums, kangaroos, and koalas?
A. monotremes
B. marsupials
C. crustaceans
D. arthropods
Answer:
|
|
sciq-6358
|
multiple_choice
|
What part of the brain regulates certain hormones associated with reproduction during breeding seasons?
|
[
"thalamus",
"frontal lobe",
"hippocampus",
"hypothalamus"
] |
D
|
Relevant Documents:
Document 0:::
The following is a list of hormones found in Homo sapiens. Spelling is not uniform for many hormones. For example, current North American and international usage uses estrogen and gonadotropin, while British usage retains the Greek digraph in oestrogen and favours the earlier spelling gonadotrophin.
Hormone listing
Steroid
Document 1:::
The ovarian cortex is the outer portion of the ovary. The ovarian follicles are located within the ovarian cortex. The ovarian cortex is made up of connective tissue. Ovarian cortex tissue transplant has been performed to treat infertility.
Document 2:::
Neuroecology studies ways in which the structure and function of the brain results from adaptations to a specific habitat and niche.
It integrates the multiple disciplines of neuroscience, which examines the biological basis of cognitive and emotional processes, such as perception, memory, and decision-making, with the field of ecology, which studies the relationship between living organisms and their physical environment.
In biology, the term 'adaptation' signifies the way evolutionary processes enhance an organism's fitness to survive within a specific ecological context. This fitness includes the development of physical, cognitive, and emotional adaptations specifically suited to the environmental conditions in which the organism or phenotype lives, and in which its species or genotype evolves.
Neuroecology concentrates specifically on neurological adaptations, particularly those of the brain. The purview of this study encompasses two areas. Firstly, neuroecology studies how the physical structure and functional activity of neural networks in a phenotype is influenced by characteristics of the environmental context. This includes the way social stressors, interpersonal relationships, and physical conditions precipitate persistent alterations in the individual brain, providing the neural correlates of cognitive and emotional responses. Secondly, neuroecology studies how neural structure and activity common to a genotype is determined by natural selection of traits that benefit survival and reproduction in a specific environment.
See also
Evolutionary ecology
Evolutionary psychology
Document 3:::
Hormonal imprinting (HI) is a phenomenon which takes place at the first encounter between a hormone and its developing receptor in the critical periods of life (in unicellular organisms, during the whole life) and determines the later signal transduction capacity of the cell. The most important period in mammals is the perinatal one; however, this system can be imprinted at weaning, at puberty and, in the case of continuously dividing cells, during the whole life. Faulty imprinting is caused by drugs, environmental pollutants and other hormone-like molecules present in excess at the critical periods, with lifelong receptorial, morphological, biochemical and behavioral consequences. HI is transmitted to hundreds of progeny generations in unicellular organisms and (as has been proved) to a few generations in mammals as well.
Document 4:::
The temporal lobe is one of the four major lobes of the cerebral cortex in the brain of mammals. The temporal lobe is located beneath the lateral fissure on both cerebral hemispheres of the mammalian brain.
The temporal lobe is involved in processing sensory input into derived meanings for the appropriate retention of visual memory, language comprehension, and emotion association.
Temporal refers to the head's temples.
Structure
The temporal lobe consists of structures that are vital for declarative or long-term memory. Declarative (denotative) or explicit memory is conscious memory divided into semantic memory (facts) and episodic memory (events). Medial temporal lobe structures that are critical for long-term memory include the hippocampus, along with the surrounding hippocampal region consisting of the perirhinal, parahippocampal, and entorhinal neocortical regions. The hippocampus is critical for memory formation, and the surrounding medial temporal cortex is currently theorized to be critical for memory storage. The prefrontal and visual cortices are also involved in explicit memory.
Research has shown that lesions in the hippocampus of monkeys results in limited impairment of function, whereas extensive lesions that include the hippocampus and the medial temporal cortex result in severe impairment.
Function
Visual memories
The temporal lobe communicates with the hippocampus and plays a key role in the formation of explicit long-term memory modulated by the amygdala.
Processing sensory input
Auditory
Adjacent areas in the superior, posterior, and lateral parts of the temporal lobes are involved in high-level auditory processing. The temporal lobe is involved in primary auditory perception, such as hearing, and holds the primary auditory cortex. The primary auditory cortex receives sensory information from the ears and secondary areas process the information into meaningful units such as speech and words. The superior temporal gyrus includes an area (wit
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What part of the brain regulates certain hormones associated with reproduction during breeding seasons?
A. thalamus
B. frontal lobe
C. hippocampus
D. hypothalamus
Answer:
|
|
sciq-2444
|
multiple_choice
|
Which is larger: the human sperm or the human egg?
|
[
"zygote",
"human sperm",
"human egg",
"same size"
] |
C
|
Relevant Documents:
Document 0:::
Reproductive biology includes both sexual and asexual reproduction.
Reproductive biology includes a wide number of fields:
Reproductive systems
Endocrinology
Sexual development (Puberty)
Sexual maturity
Reproduction
Fertility
Human reproductive biology
Endocrinology
Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands.
Reproductive systems
Internal and external organs are included in the reproductive system. There are two reproductive systems, the male and the female, which contain different organs from one another. These systems work together in order to produce offspring.
Female reproductive system
The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth.
These structures include:
Ovaries
Oviducts
Uterus
Vagina
Mammary Glands
Estrogen is one of the sexual reproductive hormones that support the female reproductive system.
Male reproductive system
The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia.
Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system. However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract.
Animal Reproductive Biology
Animal reproduction oc
Document 1:::
Sperm (plural: sperm or sperms) is the male reproductive cell, or gamete, in anisogamous forms of sexual reproduction (forms in which there is a larger, female reproductive cell and a smaller, male one). Animals produce motile sperm with a tail known as a flagellum, which are known as spermatozoa, while some red algae and fungi produce non-motile sperm cells, known as spermatia. Flowering plants contain non-motile sperm inside pollen, while some more basal plants like ferns and some gymnosperms have motile sperm.
Sperm cells form during the process known as spermatogenesis, which in amniotes (reptiles and mammals) takes place in the seminiferous tubules of the testes. This process involves the production of several successive sperm cell precursors, starting with spermatogonia, which differentiate into spermatocytes. The spermatocytes then undergo meiosis, reducing their chromosome number by half, which produces spermatids. The spermatids then mature and, in animals, construct a tail, or flagellum, which gives rise to the mature, motile sperm cell. This whole process occurs constantly and takes around 3 months from start to finish.
Sperm cells cannot divide and have a limited lifespan, but after fusion with egg cells during fertilization, a new organism begins developing, starting as a totipotent zygote. The human sperm cell is haploid, so that its 23 chromosomes can join the 23 chromosomes of the female egg to form a diploid cell with 46 paired chromosomes. In mammals, sperm is stored in the epididymis and is released from the penis during ejaculation in a fluid known as semen.
The word sperm is derived from the Greek word σπέρμα, sperma, meaning "seed".
Evolution
It is generally accepted that isogamy is the ancestor to sperm and eggs. However, there is no fossil record of the evolution of sperm and eggs from isogamy, which has led to a strong emphasis on mathematical models to understand the evolution of sperm.
A widespread hypothesis states that sperm evolve
Document 2:::
Male (symbol: ♂) is the sex of an organism that produces the gamete (sex cell) known as sperm, which fuses with the larger female gamete, or ovum, in the process of fertilization.
A male organism cannot reproduce sexually without access to at least one ovum from a female, but some organisms can reproduce both sexually and asexually. Most male mammals, including male humans, have a Y chromosome, which codes for the production of larger amounts of testosterone to develop male reproductive organs.
In humans, the word male can also be used to refer to gender, in the social sense of gender role or gender identity. The use of "male" in regard to sex and gender has been subject to discussion.
Overview
The existence of separate sexes has evolved independently at different times and in different lineages, an example of convergent evolution. The repeated pattern is sexual reproduction in isogamous species with two or more mating types with gametes of identical form and behavior (but different at the molecular level) to anisogamous species with gametes of male and female types to oogamous species in which the female gamete is very much larger than the male and has no ability to move. There is a good argument that this pattern was driven by the physical constraints on the mechanisms by which two gametes get together as required for sexual reproduction.
Accordingly, sex is defined across species by the type of gametes produced (i.e.: spermatozoa vs. ova) and differences between males and females in one lineage are not always predictive of differences in another.
Male/female dimorphism between organisms or reproductive organs of different sexes is not limited to animals; male gametes are produced by chytrids, diatoms and land plants, among others. In land plants, female and male designate not only the female and male gamete-producing organisms and structures but also the structures of the sporophytes that give rise to male and female plants.
Evolution
The evolution of ani
Document 3:::
The reproductive system of an organism, also known as the genital system, is the biological system made up of all the anatomical organs involved in sexual reproduction. Many non-living substances such as fluids, hormones, and pheromones are also important accessories to the reproductive system. Unlike most organ systems, the sexes of differentiated species often have significant differences. These differences allow for a combination of genetic material between two individuals, which allows for the possibility of greater genetic fitness of the offspring.
Animals
In mammals, the major organs of the reproductive system include the external genitalia (penis and vulva) as well as a number of internal organs, including the gamete-producing gonads (testicles and ovaries). Diseases of the human reproductive system are very common and widespread, particularly communicable sexually transmitted diseases.
Most other vertebrates have similar reproductive systems consisting of gonads, ducts, and openings. However, there is a great diversity of physical adaptations as well as reproductive strategies in every group of vertebrates.
Vertebrates
Vertebrates share key elements of their reproductive systems. They all have gamete-producing organs known as gonads. In females, these gonads are then connected by oviducts to an opening to the outside of the body, typically the cloaca, but sometimes to a unique pore such as a vagina or intromittent organ.
Humans
The human reproductive system usually involves internal fertilization by sexual intercourse. During this process, the male inserts their erect penis into the female's vagina and ejaculates semen, which contains sperm. The sperm then travels through the vagina and cervix into the uterus or fallopian tubes for fertilization of the ovum. Upon successful fertilization and implantation, gestation of the fetus then occurs within the female's uterus for approximately nine months, this process is known as pregnancy in humans. Gestati
Document 4:::
The spermatid is the haploid male gametid that results from division of secondary spermatocytes. As a result of meiosis, each spermatid contains only half of the genetic material present in the original primary spermatocyte.
Spermatids are connected by cytoplasmic material and have superfluous cytoplasmic material around their nuclei.
When formed, early round spermatids must undergo further maturational events to develop into spermatozoa, a process termed spermiogenesis (also termed spermeteliosis).
The spermatids begin to grow a living thread, develop a thickened mid-piece where the mitochondria become localised, and form an acrosome. Spermatid DNA also undergoes packaging, becoming highly condensed. The DNA is packaged firstly with specific nuclear basic proteins, which are subsequently replaced with protamines during spermatid elongation. The resultant tightly packed chromatin is transcriptionally inactive.
In 2016 scientists at Nanjing Medical University claimed they had produced cells resembling mouse spermatids artificially from stem cells. They injected these spermatids into mouse eggs and produced pups.
DNA repair
As postmeiotic germ cells develop to mature sperm they progressively lose the ability to repair DNA damage that may then accumulate and be transmitted to the zygote and ultimately the embryo. In particular, the repair of DNA double-strand breaks by the non-homologous end joining pathway, although present in round spermatids, appears to be lost as they develop into elongated spermatids.
Additional images
See also
List of distinct cell types in the adult human body
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which is larger: the human sperm or the human egg?
A. zygote
B. human sperm
C. human egg
D. same size
Answer:
|
|
sciq-10319
|
multiple_choice
|
What term is used to express the measurement of how high land is above sea level?
|
[
"latitude",
"elevation",
"distance",
"knots"
] |
B
|
Relevant Documents:
Document 0:::
Height above mean sea level is a measure of the vertical distance (height, elevation or altitude) of a location in reference to a historic mean sea level taken as a vertical datum. In geodesy, it is formalized as orthometric heights.
The quantity is called "metres above mean sea level" in the metric system, while in United States customary and imperial units it would be called "feet above mean sea level".
Mean sea levels are affected by climate change and other factors and change over time. For this and other reasons, recorded measurements of elevation above sea level at a reference time in history might differ from the actual elevation of a given location over sea level at a given moment.
Uses
Metres above sea level is internationally the standard measurement of the elevation or altitude of:
Geographic locations such as towns, mountains and other landmarks.
The top of buildings and other structures.
Flying objects such as airplanes or helicopters (in China, and below the transition altitude (TA) in Russia and many CIS member countries).
Methods of measurement
The elevation or altitude in metres above sea level of a location, object, or point can be determined in a number of ways. The most common include:
Global Navigation Satellite System (like GPS), where a receiver determines a location from pseudoranges to multiple satellites. A geoid is needed to convert the 3D position to sea-level elevation.
Altimeter, which measures atmospheric pressure; pressure decreases as altitude increases. As atmospheric pressure also changes with the weather, a recent local measure of the pressure at a known altitude is needed to calibrate the altimeter (a minimal conversion sketch follows this list).
Stereoscopy in aerial photography.
Aerial lidar and satellite laser altimetry.
Aerial or satellite radar altimetry.
Surveying, especially levelling.
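As a hedged illustration of the altimeter entry in the list above, the International Standard Atmosphere (ISA) relates pressure to altitude in the troposphere; the sketch below uses the standard ISA constants, and the reference pressure p0 (here the ISA default of 101325 Pa) is exactly the quantity that must be calibrated against a recent local measurement:

# Sketch: pressure altitude from the ISA barometric formula (troposphere only).
# p0 should be calibrated to a recent local sea-level pressure; 101325 Pa
# (the ISA standard) is assumed here for illustration.

T0 = 288.15      # sea-level standard temperature, K
L  = 0.0065      # temperature lapse rate, K/m
EXP = 0.190263   # R*L / (g*M) for dry air

def pressure_to_altitude(p_pa, p0_pa=101325.0):
    """Metres above the p0 reference level, valid up to ~11 km."""
    return (T0 / L) * (1.0 - (p_pa / p0_pa) ** EXP)

print(f"{pressure_to_altitude(89874.0):.0f} m")   # ~1000 m
print(f"{pressure_to_altitude(101325.0):.0f} m")  # 0 m at the reference pressure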
Accurate measurement of historical mean sea levels is complex. Land mass subsidence (as occurs naturally in some regions) can give the appearance of rising sea levels. Conversely,
Document 1:::
A spot height is an exact point on a map with an elevation recorded beside it that represents its height above a given datum. In the UK this is the Ordnance Datum. Unlike a bench-mark, which is marked by a disc or plate, there is no official indication of a spot height on the ground although, in open country, spot heights may sometimes be marked by cairns. In geoscience, it can be used for showing elevations on a map, alongside contours, bench marks, etc.
See also
Surveying
Benchmark (surveying)
Triangulation station
Document 2:::
The elevation of a geographic location is its height above or below a fixed reference point, most commonly a reference geoid, a mathematical model of the Earth's sea level as an equipotential gravitational surface (see Geodetic datum § Vertical datum).
The term elevation is mainly used when referring to points on the Earth's surface, while altitude or geopotential height is used for points above the surface, such as an aircraft in flight or a spacecraft in orbit, and depth is used for points below the surface.
Elevation is not to be confused with the distance from the center of the Earth. Due to the equatorial bulge, the summits of Mount Everest and Chimborazo have, respectively, the largest elevation and the largest geocentric distance.
Aviation
In aviation, the term elevation or aerodrome elevation is defined by the ICAO as the highest point of the landing area. It is often measured in feet and can be found in approach charts of the aerodrome. It is not to be confused with terms such as the altitude or height.
Maps and GIS
GIS, or geographic information system, is a computer system that allows for visualizing, manipulating, capturing, and storing data with associated attributes. GIS offers a better understanding of patterns and relationships of the landscape at different scales. Tools inside the GIS allow for manipulation of data for spatial analysis or cartography.
A topographical map is the main type of map used to depict elevation, often through use of contour lines.
In a Geographic Information System (GIS), digital elevation models (DEM) are commonly used to represent the surface (topography) of a place, through a raster (grid) dataset of elevations. Digital terrain models are another way to represent terrain in GIS.
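As a small sketch of how a raster DEM is queried (the grid values, cell size, and function name are assumptions for illustration), elevation at an arbitrary point is commonly estimated by bilinear interpolation between the four nearest grid cells:

# Sketch: bilinear interpolation of elevation from a tiny DEM raster.
# The grid values and cell size are illustrative assumptions.

dem = [
    [120.0, 125.0, 131.0],
    [118.0, 123.0, 128.0],
    [115.0, 119.0, 124.0],
]  # elevations in metres, row 0 = northernmost row
CELL = 30.0  # metres per cell, assumed

def elevation(x_m, y_m):
    """Bilinearly interpolated elevation at (x, y) metres from the grid origin."""
    col, row = x_m / CELL, y_m / CELL
    c0, r0 = int(col), int(row)
    fc, fr = col - c0, row - r0
    top = dem[r0][c0] * (1 - fc) + dem[r0][c0 + 1] * fc
    bot = dem[r0 + 1][c0] * (1 - fc) + dem[r0 + 1][c0 + 1] * fc
    return top * (1 - fr) + bot * fr

print(f"{elevation(45.0, 15.0):.1f} m")  # weighted between the four nearest cells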
USGS (United States Geologic Survey) is developing a 3D Elevation Program (3DEP) to keep up with growing needs for high quality topographic data. 3DEP is a collection of enhanced elevation data in the form of high quality LiDAR data over the c
Document 3:::
Altitude is a distance measurement, usually in the vertical or "up" direction, between a reference datum and a point or object. The exact definition and reference datum varies according to the context (e.g., aviation, geometry, geographical survey, sport, or atmospheric pressure). Although the term altitude is commonly used to mean the height above sea level of a location, in geography the term elevation is often preferred for this usage.
Vertical distance measurements in the "down" direction are commonly referred to as depth.
In aviation
The term altitude can have several meanings, and is always qualified by explicitly adding a modifier (e.g. "true altitude"), or implicitly through the context of the communication. Parties exchanging altitude information must be clear which definition is being used.
Aviation altitude is measured using either mean sea level (MSL) or local ground level (above ground level, or AGL) as the reference datum.
Pressure altitude divided by 100 feet (30 m) is the flight level, and is used above the transition altitude ( in the US, but may be as low as in other jurisdictions). So when the altimeter, set to standard pressure, reads an altitude of XXX hundred feet, the aircraft is said to be at "flight level XXX". When flying at a flight level, the altimeter is always set to standard pressure (29.92 inHg or 1013.25 hPa).
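A minimal worked illustration of that arithmetic (the helper name and rounding convention are assumptions of this sketch):

# Sketch: flight level = pressure altitude (ft, at 1013.25 hPa setting) / 100.
# Rounding to the nearest 100 ft is an assumed convention for this example.

def flight_level(pressure_altitude_ft):
    return round(pressure_altitude_ft / 100)

print(f"FL{flight_level(35000):03d}")  # FL350
print(f"FL{flight_level(18000):03d}")  # FL180 (US transition altitude)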
On the flight deck, the definitive instrument for measuring altitude is the pressure altimeter, which is an aneroid barometer with a front face indicating distance (feet or metres) instead of atmospheric pressure.
There are several types of altitude in aviation:
Indicated altitude is the reading on the altimeter when it is set to the local barometric pressure at mean sea level. In UK aviation radiotelephony usage, the vertical distance of a level, a point or an object considered as a point, measured from mean sea level; this is referred to over the radio as altitude.(
Document 4:::
The orthometric height is the vertical distance H along the plumb line from a point of interest to a reference surface known as the geoid, the vertical datum that approximates mean sea level. Orthometric height is one of the scientific formalizations of a laypersons' "height above sea level", along with other types of heights in Geodesy.
In the US, the current NAVD88 datum is tied to a defined elevation at one point rather than to any location's exact mean sea level. Orthometric heights are usually used in the US for engineering work, although dynamic height may be chosen for large-scale hydrological purposes. Heights for measured points are shown on National Geodetic Survey data sheets, data that was gathered over many decades by precise spirit leveling over thousands of miles.
Alternatives to orthometric height include dynamic height and normal height, and various countries may choose to operate with those definitions instead of orthometric. They may also adopt slightly different but similar definitions for their reference surface.
Since gravity is not constant over large areas the orthometric height of a level surface (equipotential) other than the reference surface is not constant, and orthometric heights need to be corrected for that effect. For example, gravity is 0.1% stronger in the northern United States than in the southern, so a level surface that has an orthometric height of 1000 meters in one place will be 1001 meters high in other places. In fact, dynamic height is the most appropriate height measure when working with the level of water over a large geographic area.
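To put the 0.1% example into numbers, here is a short sketch (the two gravity values are illustrative assumptions) using the standard relation H = C/g between orthometric height H, geopotential number C, and mean gravity g along the plumb line:

# Sketch: the same equipotential ("level") surface has different orthometric
# heights where gravity differs. C = geopotential number (m^2/s^2), H = C/g.
# The two gravity values (0.1% apart) are illustrative assumptions.

g_north = 9.810            # stronger gravity (m/s^2), assumed
g_south = g_north * 0.999  # 0.1% weaker, assumed

C = 1000.0 * g_north       # geopotential number of a surface 1000 m high up north

print(f"orthometric height in the north: {C / g_north:.1f} m")  # 1000.0 m
print(f"orthometric height in the south: {C / g_south:.1f} m")  # ~1001.0 m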
Orthometric heights may be obtained from differential leveling height differences by correcting for gravity variations.
Practical applications must use a model rather than measurements to calculate the change in gravitational potential versus depth in the earth, since the geoid is below most of the land surface (e.g., the Helmert orthometric heights of NAVD88).
GPS measurements give
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What term is used to express the measurement of how high land is above sea level?
A. latitude
B. elevation
C. distance
D. knots
Answer:
|
|
sciq-11398
|
multiple_choice
|
What is created by the polymerization of glucose?
|
[
"cellulose",
"glucose",
"carbonate",
"methane"
] |
A
|
Relevant Documents:
Document 0:::
Cellulose fibers are fibers made with ethers or esters of cellulose, which can be obtained from the bark, wood or leaves of plants, or from other plant-based material. In addition to cellulose, the fibers may also contain hemicellulose and lignin, with different percentages of these components altering the mechanical properties of the fibers.
The main applications of cellulose fibers are in the textile industry, as chemical filters, and as fiber reinforcement in composites; their properties are similar to those of engineered fibers, making them another option for biocomposites and polymer composites.
History
Cellulose was discovered in 1838 by the French chemist Anselme Payen, who isolated it from plant matter and determined its chemical formula. Cellulose was used to produce the first successful thermoplastic polymer, celluloid, by Hyatt Manufacturing Company in 1870. Production of rayon ("artificial silk") from cellulose began in the 1890s, and cellophane was invented in 1912. In 1893, Arthur D. Little of Boston invented yet another cellulosic product, acetate, and developed it as a film. The first commercial textile uses for acetate in fiber form were developed by the Celanese Company in 1924. Hermann Staudinger determined the polymer structure of cellulose in 1920. The compound was first chemically synthesized (without the use of any biologically derived enzymes) in 1992, by Kobayashi and Shoda.
Cellulose structure
Cellulose is a polymer made of repeating glucose molecules attached end to end. A cellulose molecule may be from several hundred to over 10,000 glucose units long. Cellulose is similar in form to complex carbohydrates like starch and glycogen. These polysaccharides are also made from multiple subunits of glucose. The difference between cellulose and other complex carbohydrate molecules is how the glucose molecules are linked together. In addition, cellulose is a straight chain polymer, and each cellulose molecule is long and rod-like. This differs from starch
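As a numeric aside connecting chain length to molecular mass (an added sketch; the molar masses are rounded and the function name is an assumption), each glycosidic linkage in the condensation polymerization of glucose releases one water molecule, so a chain of n units has a molar mass of roughly n × 180.16 − (n − 1) × 18.02 g/mol:

# Sketch: approximate molar mass of a cellulose chain of n glucose units.
# Each glycosidic bond forms by condensation, releasing one water molecule,
# so (n - 1) waters are subtracted. Molar masses are rounded values.

M_GLUCOSE = 180.16  # g/mol
M_WATER   = 18.02   # g/mol

def cellulose_molar_mass(n_units):
    return n_units * M_GLUCOSE - (n_units - 1) * M_WATER

for n in (300, 10_000):
    print(f"{n:>6} units -> {cellulose_molar_mass(n):,.0f} g/mol")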
Document 1:::
Glucose syrup, also known as confectioner's glucose, is a syrup made from the hydrolysis of starch. Glucose is a sugar. Maize (corn) is commonly used as the source of the starch in the US, in which case the syrup is called "corn syrup", but glucose syrup is also made from potatoes and wheat, and less often from barley, rice and cassava.
Glucose syrup containing over 90% glucose is used in industrial fermentation, but syrups used in confectionery contain varying amounts of glucose, maltose and higher oligosaccharides, depending on the grade, and can typically contain 10% to 43% glucose. Glucose syrup is used in foods to sweeten, soften texture and add volume. By converting some glucose in corn syrup into fructose (using an enzymatic process), a sweeter product, high fructose corn syrup can be produced.
Glucose syrup was first made in 1811 in Russia by Gottlieb Kirchhoff using heat and sulfuric acid.
Types
Depending on the method used to hydrolyse the starch and on the extent to which the hydrolysis reaction has been allowed to proceed, different grades of glucose syrup are produced, which have different characteristics and uses. The syrups are broadly categorised according to their dextrose equivalent (DE). The further the hydrolysis process proceeds, the more reducing sugars are produced, and the higher the DE. Depending on the process used, glucose syrups with different compositions, and hence different technical properties, can have the same DE.
Confectioner's syrup
The original glucose syrups were manufactured by acid hydrolysis of corn starch at high temperature and pressure. The typical product had a DE of 42, but quality was variable due to the difficulty of controlling the reaction. Higher DE syrups made by acid hydrolysis tend to have a bitter taste and a dark colour, due to the production of hydroxymethylfurfural and other byproducts. This type of product is now manufactured using a continuous converting process and is still widely used du
Document 2:::
Corn syrup is a food syrup which is made from the starch of corn (called maize in many countries) and contains varying amounts of sugars: glucose, maltose and higher oligosaccharides, depending on the grade. Corn syrup is used in foods to soften texture, add volume, prevent crystallization of sugar, and enhance flavor. Corn syrup is not the same as high-fructose corn syrup (HFCS), which is manufactured from corn syrup by converting a large proportion of its glucose into fructose using the enzyme D-xylose isomerase, thus producing a sweeter substance.
The more general term glucose syrup is often used synonymously with corn syrup, since glucose syrup in the United States is most commonly made from corn starch. Technically, glucose syrup is any liquid starch hydrolysate of mono-, di-, and higher-saccharides and can be made from any source of starch: wheat, tapioca and potatoes are the most common other sources.
Commercial preparation
Historically, corn syrup was produced by combining corn starch with dilute hydrochloric acid, and then heating the mixture under pressure. The process was invented by the German chemist Gottlieb Kirchhoff in 1811. Currently, corn syrup is obtained through a multi-step bioprocess. First, the enzyme α-amylase is added to a mixture of corn starch and water. α-amylase is secreted by various species of the bacterium genus Bacillus and the enzyme is isolated from the liquid in which the bacteria were grown. The enzyme breaks down the starch into oligosaccharides, which are then broken into glucose molecules by adding the enzyme glucoamylase, known also as "γ-amylase". Glucoamylase is secreted by various species of the fungus Aspergillus; the enzyme is isolated from the liquid in which the fungus is grown. The glucose can then be transformed into fructose by passing the glucose through a column that is loaded with the enzyme D-xylose isomerase, an enzyme that is isolated from the growth medium of any of several bacteria.
Corn syrup is produce
Document 3:::
Ethanol fermentation, also called alcoholic fermentation, is a biological process which converts sugars such as glucose, fructose, and sucrose into cellular energy, producing ethanol and carbon dioxide as by-products. Because yeasts perform this conversion in the absence of oxygen, alcoholic fermentation is considered an anaerobic process. It also takes place in some species of fish (including goldfish and carp) where (along with lactic acid fermentation) it provides energy when oxygen is scarce.
Ethanol fermentation is the basis for alcoholic beverages, ethanol fuel and bread dough rising.
Biochemical process of fermentation of sucrose
The chemical equations below summarize the fermentation of sucrose (C12H22O11) into ethanol (C2H5OH). Alcoholic fermentation converts one mole of glucose into two moles of ethanol and two moles of carbon dioxide, producing two moles of ATP in the process.
C6H12O6 → 2 C2H5OH + 2 CO2
Sucrose is a sugar composed of a glucose linked to a fructose. In the first step of alcoholic fermentation, the enzyme invertase cleaves the glycosidic linkage between the glucose and fructose molecules.
C12H22O11 + H2O + invertase → 2 C6H12O6
Next, each glucose molecule is broken down into two pyruvate molecules in a process known as glycolysis. Glycolysis is summarized by the equation:
C6H12O6 + 2 ADP + 2 Pi + 2 NAD+ → 2 CH3COCOO− + 2 ATP + 2 NADH + 2 H2O + 2 H+
CH3COCOO− is pyruvate, and Pi is inorganic phosphate. Finally, pyruvate is converted to ethanol and CO2 in two steps, regenerating oxidized NAD+ needed for glycolysis:
1. CH3COCOO− + H+ → CH3CHO + CO2
catalyzed by pyruvate decarboxylase
2. CH3CHO + NADH + H+ → C2H5OH + NAD+
This reaction is catalyzed by alcohol dehydrogenase (ADH1 in baker's yeast).
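As a hedged worked example of the overall stoichiometry above (molar masses rounded; the helper names are assumptions of this sketch), the theoretical mass yields from complete fermentation of glucose:

# Sketch: theoretical mass yield of the overall reaction
#   C6H12O6 -> 2 C2H5OH + 2 CO2
# Molar masses are rounded; function names are illustrative.

M_GLUCOSE = 180.16  # g/mol
M_ETHANOL = 46.07   # g/mol
M_CO2     = 44.01   # g/mol

def ethanol_yield(glucose_g):
    """Grams of ethanol from complete fermentation of `glucose_g` of glucose."""
    return (glucose_g / M_GLUCOSE) * 2 * M_ETHANOL

def co2_yield(glucose_g):
    return (glucose_g / M_GLUCOSE) * 2 * M_CO2

print(f"{ethanol_yield(100.0):.1f} g ethanol per 100 g glucose")  # ~51.1 g
print(f"{co2_yield(100.0):.1f} g CO2 per 100 g glucose")          # ~48.9 g

Note that the two product masses sum back to the input mass, as the overall equation conserves mass.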
Document 4:::
Glycerol dialkyl glycerol tetraether lipids (GDGTs) are a class of membrane lipids synthesized by archaea and some bacteria, making them useful biomarkers for these organisms in the geological record. Their presence, structure, and relative abundances in natural materials can be useful as proxies for temperature, terrestrial organic matter input, and soil pH for past periods in Earth history. Some structural forms of GDGT form the basis for the TEX86 paleothermometer. Isoprenoid GDGTs, now known to be synthesized by many archaeal classes, were first discovered in extremophilic archaea cultures. Branched GDGTs, likely synthesized by acidobacteriota, were first discovered in a natural Dutch peat sample in 2000.
Chemical structure
The two primary structural classes of GDGTs are isoprenoid (isoGDGT) and branched (brGDGT), which refer to differences in the carbon skeleton structures. Isoprenoid compounds are numbered -0 through -8, with the numeral representing the number of cyclopentane rings present within the carbon skeleton structure. The exception is crenarchaeol, a Nitrososphaerota product with one cyclohexane ring moiety in addition to four cyclopentane rings. Branched GDGTs have zero, one, or two cyclopentane moieties and are further classified based on the positioning of their branches. They are numbered with roman numerals and letters, with -I indicating structures with four modifications (i.e. either a branch or a cyclopentane moiety), -II indicating structures with five modifications, and -III indicating structures with six modifications. The suffix a after the roman numeral means one of its modifications is a cyclopentane moiety; b means two modifications are cyclopentane moieties. For example, GDGT-IIb is a compound with three branches and two cyclopentane moieties (a total of five modifications). GDGTs form as monolayers with ether bonds to glycerol, as opposed to the bilayers with ester bonds found in eukaryotes and most bacteria.
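The naming scheme just described is mechanical enough to capture in code. The following Python sketch is purely illustrative (the helper name and output format are hypothetical, not part of any GDGT software) and decodes a branched-GDGT label into its branch and ring counts using the rules in the paragraph above.

```python
# Decode a branched-GDGT label using the scheme described above
# (illustrative sketch; not a standard cheminformatics API).
ROMAN_TO_MODIFICATIONS = {"I": 4, "II": 5, "III": 6}  # total modifications
SUFFIX_TO_RINGS = {"": 0, "a": 1, "b": 2}             # cyclopentane moieties

def decode_brgdgt(name):
    """Return (branches, cyclopentane_rings) for a label like 'GDGT-IIb'."""
    label = name[len("GDGT-"):] if name.startswith("GDGT-") else name
    suffix = label[-1] if label[-1] in ("a", "b") else ""
    roman = label[:-1] if suffix else label
    modifications = ROMAN_TO_MODIFICATIONS[roman]
    rings = SUFFIX_TO_RINGS[suffix]
    # Whatever modifications are not cyclopentane rings are branches.
    return modifications - rings, rings

# Example from the text: GDGT-IIb has three branches and two rings.
assert decode_brgdgt("GDGT-IIb") == (3, 2)
```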
Biologi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is created by the polymerization of glucose?
A. cellulose
B. glucose
C. carbonate
D. methane
Answer:
|
|
sciq-985
|
multiple_choice
|
What do all cells have in common?
|
[
"same function",
"life span",
"same shape",
"small size"
] |
D
|
Relevant Documents:
Document 0:::
A cell type is a classification used to identify cells that share morphological or phenotypical features. A multicellular organism may contain cells of a number of widely differing and specialized cell types, such as muscle cells and skin cells, that differ both in appearance and function yet have identical genomic sequences. Cells may have the same genotype, but belong to different cell types due to the differential regulation of the genes they contain. Classification of a specific cell type is often done through the use of microscopy or of cell-surface markers (such as those from the cluster of differentiation family that are commonly used for this purpose in immunology). Recent developments in single cell RNA sequencing have facilitated classification of cell types based on shared gene expression patterns. This has led to the discovery of many new cell types in e.g. mouse cortex, hippocampus, dorsal root ganglion and spinal cord.
Animals have evolved a greater diversity of cell types in a multicellular body (100–150 different cell types), compared
with 10–20 in plants, fungi, and protists. The exact number of cell types is, however, undefined, and the Cell Ontology, as of 2021, lists over 2,300 different cell types.
Multicellular organisms
All higher multicellular organisms contain cells specialised for different functions. Most distinct cell types arise from a single totipotent cell that differentiates into hundreds of different cell types during the course of development. Differentiation of cells is driven by different environmental cues (such as cell–cell interaction) and intrinsic differences (such as those caused by the uneven distribution of molecules during division). Multicellular organisms are composed of cells that fall into two fundamental types: germ cells and somatic cells. During development, somatic cells will become more specialized and form the three primary germ layers: ectoderm, mesoderm, and endoderm. After formation of the three germ layers, cells will continue to special
Document 1:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, the double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 2:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks, such as replication, DNA repair, protein synthesis, and motility.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i
Document 3:::
This lecture, named in memory of Keith R. Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year.
Lecturers
Source: ASCB
See also
List of biology awards
Document 4:::
In biology, cell theory is a scientific theory first formulated in the mid-nineteenth century, that organisms are made up of cells, that they are the basic structural/organizational unit of all organisms, and that all cells come from pre-existing cells. Cells are the basic unit of structure in all organisms and also the basic unit of reproduction.
The theory was once universally accepted, but now some biologists consider non-cellular entities such as viruses to be living organisms, and thus disagree with the first tenet. As of 2021: "expert opinion remains divided roughly a third each between yes, no and don’t know". As there is no universally accepted definition of life, discussion still continues.
History
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time, as it was believed no one else had seen these. To further develop the theory, Matthias Schleiden and Theodor Schwann studied cells of both animals and plants. What they discovered were significant differences between the two types of cells. This put forth the idea that cells were fundamental not only to plants, but to animals as well.
Microscopes
The discovery of the cell was made possible through the invention of the microscope. In the first century BC, Romans were able to make glass. They discovered that objects appeared to be larger under the glass. The expanded use of lenses in eyeglasses in the 13th century probably led to wider spread use of simple microscopes (magnifying glasses) with limited magnification. Compound microscopes, which combine an objective lens with an eyepiece to view a real image achieving much higher magnification, first appeared in Europe around 1620. In 1665, Robert Hooke used a microscope
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do all cells have in common?
A. same function
B. life span
C. same shape
D. small size
Answer:
|
|
sciq-605
|
multiple_choice
|
What type of speciation occurs when groups from the same species are geographically isolated for long periods?
|
[
"symbiotic",
"allopatric",
"prokaryotic",
"asexual"
] |
B
|
Relevant Documents:
Document 0:::
Reinforcement is a process within speciation where natural selection increases the reproductive isolation between two populations of species by reducing the production of hybrids. Evidence for speciation by reinforcement has been gathered since the 1990s, and along with data from comparative studies and laboratory experiments, has overcome many of the objections to the theory. Differences in behavior or biology that inhibit formation of hybrid zygotes are termed prezygotic isolation. Reinforcement can be shown to be occurring (or to have occurred in the past) by measuring the strength of prezygotic isolation in a sympatric population in comparison to an allopatric population of the same species. Comparative studies of this allow for determining large-scale patterns in nature across various taxa. Mating patterns in hybrid zones can also be used to detect reinforcement. Reproductive character displacement is seen as a result of reinforcement, so many of the cases in nature express this pattern in sympatry. Reinforcement's prevalence is unknown, but the patterns of reproductive character displacement are found across numerous taxa (vertebrates, invertebrates, plants, and fungi), and is considered to be a common occurrence in nature. Studies of reinforcement in nature often prove difficult, as alternative explanations for the detected patterns can be asserted. Nevertheless, empirical evidence exists for reinforcement occurring across various taxa and its role in precipitating speciation is conclusive.
Evidence from nature
Amphibians
The two frog species Litoria ewingi and L. verreauxii live in southern Australia with their two ranges overlapping. The species have very similar calls in allopatry, but express clinal variation in sympatry, with notable distinctness in calls that generate female preference discrimination. The zone of overlap sometimes forms hybrids and is thought to originate by secondary contact of once fully allopatric populations.
Allopatric populat
Document 1:::
When speciation is not driven by (or strongly correlated with) divergent natural selection, it can be said to be nonecological, so as to distinguish it from the typical definition of ecological speciation: "It is useful to consider ecological speciation as its own form of species formation because it focuses on an explicit mechanism of speciation: namely divergent natural selection. There are numerous ways other than via divergent natural selection in which populations might become genetically differentiated and reproductively isolated." It is likely that many instances of nonecological speciation are allopatric, especially when the organisms in question are poor dispersers (e.g., land snails, salamanders), however sympatric nonecological speciation may also be possible, especially when accompanied by an "instant" (at least in evolutionary time) loss of reproductive compatibility, as when polyploidization happens. Other potential mechanisms for nonecological speciation include mutation-order speciation and changes in chirality in gastropods.
Nonecological speciation might not be accompanied by strong morphological differentiation, so might give rise to cryptic species, however there are some species that are difficult for humans to differentiate that are strongly differentiated with respect to their resource use, and so are likely a result of ecological speciation (e.g., host shifts in parasites or phytophagous insects). When species recognition/sexual selection plays a strong role in maintaining species boundaries, the species generated by nonecological speciation might be straightforward for humans to differentiate, as in some odonates.
See also
Nonadaptive radiation
Document 2:::
Ecological speciation is a form of speciation arising from reproductive isolation that occurs due to an ecological factor that reduces or eliminates gene flow between two populations of a species. Ecological factors can include changes in the environmental conditions in which a species experiences, such as behavioral changes involving predation, predator avoidance, pollinator attraction, and foraging; as well as changes in mate choice due to sexual selection or communication systems. Ecologically-driven reproductive isolation under divergent natural selection leads to the formation of new species. This has been documented in many cases in nature and has been a major focus of research on speciation for the past few decades.
Ecological speciation has been defined in various ways to identify it as distinct from nonecological forms of speciation. The evolutionary biologist Dolph Schluter defines it as "the evolution of reproductive isolation between populations or subsets of a single population by adaptation to different environments or ecological niches", while others believe natural selection is the driving force. The key difference between ecological speciation and other kinds of speciation is that it is triggered by divergent natural selection among different habitats, as opposed to other kinds of speciation processes like random genetic drift, the fixation of incompatible mutations in populations experiencing similar selective pressures, or various forms of sexual selection not involving selection on ecologically relevant traits. Ecological speciation can occur either in allopatry, sympatry, or parapatry—the only requirement being that speciation occurs as a result of adaptation to different ecological or micro-ecological conditions.
Ecological speciation can occur pre-zygotically (barriers to reproduction that occur before the formation of a zygote) or post-zygotically (barriers to reproduction that occur after the formation of a zygote). Examples of pre-zygotic
Document 3:::
In biology, two related species or populations are considered sympatric when they exist in the same geographic area and thus frequently encounter one another. An initially interbreeding population that splits into two or more distinct species sharing a common range exemplifies sympatric speciation. Such speciation may be a product of reproductive isolation – which prevents hybrid offspring from being viable or able to reproduce, thereby reducing gene flow – that results in genetic divergence. Sympatric speciation may, but need not, arise through secondary contact, which refers to speciation or divergence in allopatry followed by range expansions leading to an area of sympatry. Sympatric species or taxa in secondary contact may or may not interbreed.
Types of populations
Four main types of population pairs exist in nature. Sympatric populations (or species) contrast with parapatric populations, which contact one another in adjacent but not shared ranges and do not interbreed; peripatric species, which are separated only by areas in which neither organism occurs; and allopatric species, which occur in entirely distinct ranges that are neither adjacent nor overlapping. Allopatric populations isolated from one another by geographical factors (e.g., mountain ranges or bodies of water) may experience genetic—and, ultimately, phenotypic—changes in response to their varying environments. These may drive allopatric speciation, which is arguably the dominant mode of speciation.
Evolving definitions and controversy
The lack of geographic isolation as a definitive barrier between sympatric species has yielded controversy among ecologists, biologists, botanists, and zoologists regarding the validity of the term. As such, researchers have long debated the conditions under which sympatry truly applies, especially with respect to parasitism. Because parasitic organisms often inhabit multiple hosts during a life cycle, evolutionary biologist Ernst Mayr stated that internal parasit
Document 4:::
Allopatric speciation () – also referred to as geographic speciation, vicariant speciation, or its earlier name the dumbbell model – is a mode of speciation that occurs when biological populations become geographically isolated from each other to an extent that prevents or interferes with gene flow.
Various geographic changes can arise such as the movement of continents, and the formation of mountains, islands, bodies of water, or glaciers. Human activity such as agriculture or developments can also change the distribution of species populations. These factors can substantially alter a region's geography, resulting in the separation of a species population into isolated subpopulations. The vicariant populations then undergo genetic changes as they become subjected to different selective pressures, experience genetic drift, and accumulate different mutations in the separated populations' gene pools. The barriers prevent the exchange of genetic information between the two populations leading to reproductive isolation. If the two populations come into contact they will be unable to reproduce—effectively speciating. Other isolating factors such as population dispersal leading to emigration can cause speciation (for instance, the dispersal and isolation of a species on an oceanic island) and is considered a special case of allopatric speciation called peripatric speciation.
Allopatric speciation is typically subdivided into two major models: vicariance and peripatric. Both models differ from one another by virtue of their population sizes and geographic isolating mechanisms. The terms allopatry and vicariance are often used in biogeography to describe the relationship between organisms whose ranges do not significantly overlap but are immediately adjacent to each other—they do not occur together or only occur within a narrow zone of contact. Historically, the language used to refer to modes of speciation directly reflected biogeographical distributions. As such, allopa
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of speciation occurs when groups from the same species are geographically isolated for long periods?
A. symbiotic
B. allopatric
C. prokaryotic
D. asexual
Answer:
|
|
ai2_arc-913
|
multiple_choice
|
Which question can be answered using an investigation done by a science class?
|
[
"What is the climate on Jupiter?",
"Do plants grow differently with and without light?",
"Where is the warmest place on Earth that has life?",
"How far do monarch butterflies migrate?"
] |
B
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 1:::
Planetary oceanography, also called astro-oceanography or exo-oceanography, is the study of oceans on planets and moons other than Earth. Unlike other planetary sciences such as astrobiology, astrochemistry and planetary geology, it only began after the discovery of underground oceans in Saturn's moon Titan and Jupiter's moon Europa. This field remains speculative until further missions reach the oceans beneath the rock or ice layer of the moons. There are many theories about oceans or even ocean worlds of celestial bodies in the Solar System, from oceans made of diamond in Neptune to a gigantic ocean of liquid hydrogen that may exist underneath Jupiter's surface.
Early in their geologic histories, Mars and Venus are theorized to have had large water oceans. The Mars ocean hypothesis suggests that nearly a third of the surface of Mars was once covered by water, and a runaway greenhouse effect may have boiled away the global ocean of Venus. Compounds such as salts and ammonia dissolved in water lower its freezing point so that water might exist in large quantities in extraterrestrial environments as brine or convecting ice. Unconfirmed oceans are speculated beneath the surface of many dwarf planets and natural satellites; notably, the ocean of the moon Europa is estimated to have over twice the water volume of Earth's. The Solar System's giant planets are also thought to have liquid atmospheric layers of yet to be confirmed compositions. Oceans may also exist on exoplanets and exomoons, including surface oceans of liquid water within a circumstellar habitable zone. Ocean planets are a hypothetical type of planet with a surface completely covered with liquid.
Extraterrestrial oceans may be composed of water or other elements and compounds. The only confirmed large stable bodies of extraterrestrial surface liquids are the lakes of Titan, which are made of hydrocarbons instead of water. However, there is strong evidence for subsurface water oceans' existence elsewhere in t
Document 2:::
Biology is the scientific study of life. It is a natural science with a broad scope but has several unifying themes that tie it together as a single, coherent field. For instance, all organisms are made up of cells that process hereditary information encoded in genes, which can be transmitted to future generations. Another major theme is evolution, which explains the unity and diversity of life. Energy processing is also important to life as it allows organisms to move, grow, and reproduce. Finally, all organisms are able to regulate their own internal environments.
Biologists are able to study life at multiple levels of organization, from the molecular biology of a cell to the anatomy and physiology of plants and animals, and evolution of populations. Hence, there are multiple subdisciplines within biology, each defined by the nature of their research questions and the tools that they use. Like other scientists, biologists use the scientific method to make observations, pose questions, generate hypotheses, perform experiments, and form conclusions about the world around them.
Life on Earth, which emerged more than 3.7 billion years ago, is immensely diverse. Biologists have sought to study and classify the various forms of life, from prokaryotic organisms such as archaea and bacteria to eukaryotic organisms such as protists, fungi, plants, and animals. These various organisms contribute to the biodiversity of an ecosystem, where they play specialized roles in the cycling of nutrients and energy through their biophysical environment.
History
The earliest of roots of science, which included medicine, can be traced to ancient Egypt and Mesopotamia in around 3000 to 1200 BCE. Their contributions shaped ancient Greek natural philosophy. Ancient Greek philosophers such as Aristotle (384–322 BCE) contributed extensively to the development of biological knowledge. He explored biological causation and the diversity of life. His successor, Theophrastus, began the scienti
Document 3:::
The European Astrobiology Network Association (EANA) coordinates and facilitates research expertise in astrobiology in Europe.
EANA was created in 2001 to coordinate the different European centers in astrobiology and the related fields previously organized in paleontology, geology, atmospheric physics, planetary science and stellar physics.
The association is administered by an Executive Council that is elected every three years and represents the European nations active in the field, such as Austria, Belgium, France, Germany, Italy, Portugal, and Spain.
The EANA Executive Council is composed of a president, two vice-presidents, a treasurer, two secretaries, and councillors. Further information about the current Executive Council can be found at http://www.eana-net.eu/index.php?page=Discover/eananetwork.
The EANA association strongly supports AbGradE – Astrobiology Graduates in Europe, an independent organisation that aims to support early-career scientists and students in astrobiology.
Objectives
The specific objectives of EANA are to:
bring together active European researchers and link their research programs
fund exchange visits between laboratories
optimize the sharing of information and resources facilities for research
promote this field of research to European funding agencies and politicians
promote research on extremophiles of relevance to environmental issues in Europe
interface the Research Network with European bodies (e.g. the European Space Agency and the European Commission)
attract young scientists to participate
promote public interest in astrobiology, and to educate the younger generation
Document 4:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which question can be answered using an investigation done by a science class?
A. What is the climate on Jupiter?
B. Do plants grow differently with and without light?
C. Where is the warmest place on Earth that has life?
D. How far do monarch butterflies migrate?
Answer:
|
|
sciq-3841
|
multiple_choice
|
What orbits an atoms' nucleus?
|
[
"positively charged protons",
"isotopes",
"neutrally charged ions",
"negatively charged electrons"
] |
D
|
Relevant Documents:
Document 0:::
A fixed orbit is the concept, in atomic physics, where an electron is considered to remain in a specific orbit, at a fixed distance from an atom's nucleus, for a particular energy level.
The concept was promoted by quantum physicist Niels Bohr c. 1913.
The idea of the fixed orbit is considered a major component of the Bohr model (or Bohr theory).
Document 1:::
Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. Atomic physics typically refers to the study of atomic structure and the interaction between atoms. It is primarily concerned with the way in which electrons are arranged around the nucleus and
the processes by which these arrangements change. This comprises ions, neutral atoms and, unless otherwise stated, it can be assumed that the term atom includes ions.
The term atomic physics can be associated with nuclear power and nuclear weapons, due to the synonymous use of atomic and nuclear in standard English. Physicists distinguish between atomic physics—which deals with the atom as a system consisting of a nucleus and electrons—and nuclear physics, which studies nuclear reactions and special properties of atomic nuclei.
As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified.
Isolated atoms
Atomic physics primarily considers atoms in isolation. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules (although much of the physics is identical), nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles.
While modelling atoms in isolation may not seem realistic, if one considers atoms in a gas or plasma then the time-scales for atom-atom interactions are huge in comparison to the atomic processes that are generally considered. This means that the individual atoms can be treated as if each were in isolation, as the vast majority of the time they are. By this consideration, atomic physics provides the underlying theory in plasma physics and atmospheric physics, even though
Document 2:::
An atom is a particle that consists of a nucleus of protons and neutrons surrounded by an electromagnetically-bound cloud of electrons. The atom is the basic particle of the chemical elements, and the chemical elements are distinguished from each other by the number of protons that are in their atoms. For example, any atom that contains 11 protons is sodium, and any atom that contains 29 protons is copper. The number of neutrons defines the isotope of the element.
Atoms are extremely small, typically around 100 picometers across. A human hair is about a million carbon atoms wide. This is smaller than the shortest wavelength of visible light, which means humans cannot see atoms with conventional microscopes. Atoms are so small that accurately predicting their behavior using classical physics is not possible due to quantum effects.
More than 99.94% of an atom's mass is in the nucleus. Each proton has a positive electric charge, while each electron has a negative charge, and the neutrons, if any are present, have no electric charge. If the numbers of protons and electrons are equal, as they normally are, then the atom is electrically neutral. If an atom has more electrons than protons, then it has an overall negative charge, and is called a negative ion (or anion). Conversely, if it has more protons than electrons, it has a positive charge, and is called a positive ion (or cation).
The electrons of an atom are attracted to the protons in an atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by the nuclear force. This force is usually stronger than the electromagnetic force that repels the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force. In this case, the nucleus splits and leaves behind different elements. This is a form of nuclear decay.
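The neutral/anion/cation rule in the paragraph above reduces to comparing proton and electron counts. Here is a minimal Python sketch of that rule (the function name and example species are illustrative, not from the text).

```python
# Classify an atom or ion from its proton and electron counts, following
# the rule described above (minimal illustrative sketch).
def classify(protons, electrons):
    net_charge = protons - electrons
    if net_charge == 0:
        return "neutral atom"
    return "cation (+)" if net_charge > 0 else "anion (-)"

print(classify(11, 11))  # sodium atom -> neutral atom
print(classify(11, 10))  # Na+         -> cation (+)
print(classify(17, 18))  # Cl-         -> anion (-)
```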
Atoms can attach to one or more other atoms by chemical bonds to
Document 3:::
Isotopes are distinct nuclear species (or nuclides, as technical term) of the same chemical element. They have the same atomic number (number of protons in their nuclei) and position in the periodic table (and hence belong to the same chemical element), but differ in nucleon numbers (mass numbers) due to different numbers of neutrons in their nuclei. While all isotopes of a given element have almost the same chemical properties, they have different atomic masses and physical properties.
The term isotope is formed from the Greek roots isos (ἴσος "equal") and topos (τόπος "place"), meaning "the same place"; thus, the meaning behind the name is that different isotopes of a single element occupy the same position on the periodic table. It was coined by Scottish doctor and writer Margaret Todd in 1913 in a suggestion to the British chemist Frederick Soddy.
The number of protons within the atom's nucleus is called its atomic number and is equal to the number of electrons in the neutral (non-ionized) atom. Each atomic number identifies a specific element, but not the isotope; an atom of a given element may have a wide range in its number of neutrons. The number of nucleons (both protons and neutrons) in the nucleus is the atom's mass number, and each isotope of a given element has a different mass number.
For example, carbon-12, carbon-13, and carbon-14 are three isotopes of the element carbon with mass numbers 12, 13, and 14, respectively. The atomic number of carbon is 6, which means that every carbon atom has 6 protons so that the neutron numbers of these isotopes are 6, 7, and 8 respectively.
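The neutron numbers quoted here follow from simple arithmetic, N = A − Z (mass number minus atomic number); the short Python sketch below (helper name illustrative) reproduces the carbon example.

```python
# Neutron count from mass number A and atomic number Z: N = A - Z.
def neutron_count(mass_number, atomic_number):
    return mass_number - atomic_number

CARBON_Z = 6
for a in (12, 13, 14):
    print(f"carbon-{a}: {neutron_count(a, CARBON_Z)} neutrons")
# carbon-12: 6 neutrons; carbon-13: 7; carbon-14: 8 -- as stated above.
```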
Isotope vs. nuclide
A nuclide is a species of an atom with a specific number of protons and neutrons in the nucleus, for example, carbon-13 with 6 protons and 7 neutrons. The nuclide concept (referring to individual nuclear species) emphasizes nuclear properties over chemical properties, whereas the isotope concept (grouping all atoms of each element) emphasizes chemical over
Document 4:::
Photoinduced charge separation is the process of an electron in an atom or molecule, being excited to a higher energy level by the absorption of a photon and then leaving the atom or molecule to free space, or to a nearby electron acceptor.
Rutherford model
An atom consists of a positively-charged nucleus surrounded by bound electrons. The nucleus consists of uncharged neutrons and positively charged protons. Electrons are negatively charged. In the early part of the twentieth century Ernest Rutherford suggested that the electrons orbited the dense central nucleus in a manner analogous to planets orbiting the Sun. The centripetal force required to keep the electrons in orbit was provided by the Coulomb force of the protons in the nucleus acting upon the electrons; just like the gravitational force of the Sun acting on a planet provides the centripetal force necessary to keep the planet in orbit.
This model, although appealing, doesn't hold true in the real world. Classical electrodynamics predicts that the orbiting (and therefore accelerating) electron would continuously emit synchrotron radiation, losing orbital energy; with less energy to sustain its orbit against the Coulomb attraction of the proton, the electron would spiral inward.
Once the electron spiralled into the nucleus the electron would combine with a proton to form a neutron, and the atom would cease to exist. This model is clearly wrong.
Bohr model
In 1913, Niels Bohr refined the Rutherford model by stating that the electrons existed in discrete quantized states called energy levels. This meant that the electrons could only occupy orbits at certain energies. The laws of quantum physics apply here, and they don't comply with the laws of classical newtonian mechanics.
An electron which is stationary and completely free from the atom has an energy of 0 joules (or 0 electronvolts). An electron which is described as being at the "ground state" has a (ne
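The excerpt breaks off while describing the ground state, but the quantized energies it refers to follow, for hydrogen, the standard Bohr formula E_n = −13.6 eV / n² (the formula and the Rydberg value are well-known results assumed here, not taken from the excerpt). A minimal Python sketch:

```python
# Hydrogen energy levels in the Bohr model: E_n = -13.6 eV / n**2
# (standard textbook result; the Rydberg value is assumed, not from the text).
RYDBERG_EV = 13.6057  # Bohr-model ionization energy of hydrogen, in eV

def bohr_energy(n):
    """Energy of level n in eV; 0 eV is a free, stationary electron."""
    return -RYDBERG_EV / n**2

for n in (1, 2, 3):
    print(f"n={n}: {bohr_energy(n):+.3f} eV")

# Photon emitted in the n=3 -> n=2 transition (H-alpha line, ~1.89 eV).
print(f"3->2 photon energy: {bohr_energy(3) - bohr_energy(2):.3f} eV")
```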
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What orbits an atoms' nucleus?
A. positively charged protons
B. isotopes
C. neutrally charged ions
D. negatively charged electrons
Answer:
|
|
sciq-9608
|
multiple_choice
|
All atoms have equal numbers of what two particles?
|
[
"nuclei and neutrons",
"photons and protons",
"electrons and neutrons",
"electrons and protons"
] |
D
|
Relevant Documents:
Document 0:::
Isotopes are distinct nuclear species (or nuclides, as technical term) of the same chemical element. They have the same atomic number (number of protons in their nuclei) and position in the periodic table (and hence belong to the same chemical element), but differ in nucleon numbers (mass numbers) due to different numbers of neutrons in their nuclei. While all isotopes of a given element have almost the same chemical properties, they have different atomic masses and physical properties.
The term isotope is formed from the Greek roots isos (ἴσος "equal") and topos (τόπος "place"), meaning "the same place"; thus, the meaning behind the name is that different isotopes of a single element occupy the same position on the periodic table. It was coined by Scottish doctor and writer Margaret Todd in 1913 in a suggestion to the British chemist Frederick Soddy.
The number of protons within the atom's nucleus is called its atomic number and is equal to the number of electrons in the neutral (non-ionized) atom. Each atomic number identifies a specific element, but not the isotope; an atom of a given element may have a wide range in its number of neutrons. The number of nucleons (both protons and neutrons) in the nucleus is the atom's mass number, and each isotope of a given element has a different mass number.
For example, carbon-12, carbon-13, and carbon-14 are three isotopes of the element carbon with mass numbers 12, 13, and 14, respectively. The atomic number of carbon is 6, which means that every carbon atom has 6 protons so that the neutron numbers of these isotopes are 6, 7, and 8 respectively.
Isotope vs. nuclide
A nuclide is a species of an atom with a specific number of protons and neutrons in the nucleus, for example, carbon-13 with 6 protons and 7 neutrons. The nuclide concept (referring to individual nuclear species) emphasizes nuclear properties over chemical properties, whereas the isotope concept (grouping all atoms of each element) emphasizes chemical over
Document 1:::
Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. Atomic physics typically refers to the study of atomic structure and the interaction between atoms. It is primarily concerned with the way in which electrons are arranged around the nucleus and
the processes by which these arrangements change. This comprises ions, neutral atoms and, unless otherwise stated, it can be assumed that the term atom includes ions.
The term atomic physics can be associated with nuclear power and nuclear weapons, due to the synonymous use of atomic and nuclear in standard English. Physicists distinguish between atomic physics—which deals with the atom as a system consisting of a nucleus and electrons—and nuclear physics, which studies nuclear reactions and special properties of atomic nuclei.
As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified.
Isolated atoms
Atomic physics primarily considers atoms in isolation. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules (although much of the physics is identical), nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles.
While modelling atoms in isolation may not seem realistic, if one considers atoms in a gas or plasma then the time-scales for atom-atom interactions are huge in comparison to the atomic processes that are generally considered. This means that the individual atoms can be treated as if each were in isolation, as the vast majority of the time they are. By this consideration, atomic physics provides the underlying theory in plasma physics and atmospheric physics, even though
Document 2:::
The subatomic scale is the domain of physical size that encompasses objects smaller than an atom. It is the scale at which the atomic constituents, such as the nucleus containing protons and neutrons, and the electrons in their orbitals, become apparent.
The subatomic scale includes the many thousands of times smaller subnuclear scale, which is the scale of physical size at which constituents of the protons and neutrons - particularly quarks - become apparent.
See also
Astronomical scale the opposite end of the spectrum
Subatomic particles
Document 3:::
The elementary charge, usually denoted by e, is a fundamental physical constant, defined as the electric charge carried by a single proton or, equivalently, the magnitude of the negative electric charge carried by a single electron, which has charge −1 e.
In the SI system of units, the value of the elementary charge is exactly defined as e = 1.602176634×10⁻¹⁹ coulombs, or 160.2176634 zeptocoulombs (zC). Since the 2019 redefinition of SI base units, the seven SI base units are defined by seven fundamental physical constants, of which the elementary charge is one.
In the centimetre–gram–second system of units (CGS), the corresponding quantity is approximately 4.8032×10⁻¹⁰ statcoulombs.
Robert A. Millikan and Harvey Fletcher's oil drop experiment first directly measured the magnitude of the elementary charge in 1909, differing from the modern accepted value by just 0.6%. Under assumptions of the then-disputed atomic theory, the elementary charge had also been indirectly inferred to ~3% accuracy from blackbody spectra by Max Planck in 1901 and (through the Faraday constant) at order-of-magnitude accuracy by Johann Loschmidt's measurement of the Avogadro number in 1865.
As a unit
In some natural unit systems, such as the system of atomic units, e functions as the unit of electric charge. The use of elementary charge as a unit was promoted by George Johnstone Stoney in 1874 for the first system of natural units, called Stoney units. Later, he proposed the name electron for this unit. At the time, the particle we now call the electron was not yet discovered and the difference between the particle electron and the unit of charge electron was still blurred. Later, the name electron was assigned to the particle and the unit of charge e lost its name. However, the unit of energy electronvolt (eV) is a remnant of the fact that the elementary charge was once called electron.
In other natural unit systems, the unit of charge is defined as √(ε₀ħc), with the result that e = √(4πα) √(ε₀ħc) ≈ 0.30282212088 √(ε₀ħc), where α is the fine-structure constant, c is the speed of light, ε₀ is the electric constant, and ħ is the reduced Planck constant.
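These relations can be checked numerically. The Python sketch below hard-codes CODATA constant values (an assumption made for self-containment, not data from the text) and recovers both α ≈ 1/137.036 and the 0.302822 ratio quoted above.

```python
import math

# Numerical check of the relations above, with CODATA values hard-coded
# so the sketch is self-contained (assumed values, not from the text).
e = 1.602176634e-19     # elementary charge, C (exact since the 2019 SI)
eps0 = 8.8541878128e-12 # electric constant, F/m
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 299792458.0         # speed of light, m/s (exact)

# Fine-structure constant: alpha = e**2 / (4*pi*eps0*hbar*c) ~ 1/137.036
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha ~ 1/{1/alpha:.3f}")

# e expressed in the natural unit sqrt(eps0*hbar*c): ~ 0.302822
print(f"e = {e / math.sqrt(eps0 * hbar * c):.6f} * sqrt(eps0*hbar*c)")
```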
Document 4:::
An atom is a particle that consists of a nucleus of protons and neutrons surrounded by an electromagnetically-bound cloud of electrons. The atom is the basic particle of the chemical elements, and the chemical elements are distinguished from each other by the number of protons that are in their atoms. For example, any atom that contains 11 protons is sodium, and any atom that contains 29 protons is copper. The number of neutrons defines the isotope of the element.
Atoms are extremely small, typically around 100 picometers across. A human hair is about a million carbon atoms wide. This is smaller than the shortest wavelength of visible light, which means humans cannot see atoms with conventional microscopes. Atoms are so small that accurately predicting their behavior using classical physics is not possible due to quantum effects.
More than 99.94% of an atom's mass is in the nucleus. Each proton has a positive electric charge, while each electron has a negative charge, and the neutrons, if any are present, have no electric charge. If the numbers of protons and electrons are equal, as they normally are, then the atom is electrically neutral. If an atom has more electrons than protons, then it has an overall negative charge, and is called a negative ion (or anion). Conversely, if it has more protons than electrons, it has a positive charge, and is called a positive ion (or cation).
The electrons of an atom are attracted to the protons in an atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by the nuclear force. This force is usually stronger than the electromagnetic force that repels the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force. In this case, the nucleus splits and leaves behind different elements. This is a form of nuclear decay.
Atoms can attach to one or more other atoms by chemical bonds to
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
All atoms have equal numbers of what two particles?
A. nuclei and neutrons
B. photons and protons
C. electrons and neutrons
D. electrons and protons
Answer:
|
|
sciq-1431
|
multiple_choice
|
What process is the primary function of the branching internal tubules called protonephridia?
|
[
"thermoregulation",
"osmoregulation",
"enculturation",
"calcification"
] |
B
|
Relevant Documents:
Document 0:::
The phragmosome is a sheet of cytoplasm forming in highly vacuolated plant cells in preparation for mitosis. In contrast to animal cells, plant cells often contain large central vacuoles occupying up to 90% of the total cell volume and pushing the nucleus against the cell wall. In order for mitosis to occur, the nucleus has to move into the center of the cell. This happens during G2 phase of the cell cycle.
Initially, cytoplasmic strands form that penetrate the central vacuole and provide pathways for nuclear migration. Actin filaments along these cytoplasmic strands pull the nucleus into the center of the cell. These cytoplasmic strands fuse into a transverse sheet of cytoplasm along the plane of future cell division, forming the phragmosome. Phragmosome formation is only clearly visible in dividing plant cells that are highly vacuolated.
Just before mitosis, a dense band of microtubules appears around the phragmosome and the future division plane just below the plasma membrane. This preprophase band marks the equatorial plane of the future mitotic spindle as well as the future fusion sites for the new cell plate with the existing cell wall. It disappears as soon as the nuclear envelope breaks down and the mitotic spindle forms.
When mitosis is completed, the cell plate and new cell wall form starting from the center along the plane occupied by the phragmosome. The cell plate grows outwards until it fuses with the cell wall of the dividing cell at exactly the spots predicted by the preprophase band.
Document 1:::
Ciliogenesis is defined as the building of the cell's antenna (primary cilia) or extracellular fluid mediation mechanism (motile cilium). It includes the assembly and disassembly of the cilia during the cell cycle. Cilia are important organelles of cells and are involved in numerous activities such as cell signaling, processing developmental signals, and directing the flow of fluids such as mucus over and around cells. Due to the importance of these cell processes, defects in ciliogenesis can lead to numerous human diseases related to non-functioning cilia. Ciliogenesis may also play a role in the development of left/right handedness in humans.
Cilia formation
Ciliogenesis occurs through an ordered set of steps. First, the basal bodies from centrioles must migrate to the surface of the cell and attach to the cortex. Along the way, the basal bodies attach to membrane vesicles and the basal body/membrane vesicle complex fuses with the plasma membrane of the cell. Fusion with the plasma membrane is likely what forms the membrane of the cilia. The alignment of the forming cilia is determined by the original positioning and orientation of the basal bodies. Once the alignment is determined, axonemal microtubules extend from the basal body and go beneath the developing ciliary membrane, forming the cilia.
Proteins must be synthesized in the cytoplasm of the cell and cannot be synthesized within cilia. For the cilium to elongate, proteins must be selectively imported from the cytoplasm into the cilium and transported to the tip of the cilium by intraflagellar transport (IFT). Once the cilium is completely formed, it continues to incorporate new tubulin at the tip of the cilia. However, the cilium does not elongate further, because older tubulin is simultaneously degraded. This requires an active mechanism that maintains ciliary length. Impairments in these mechanisms can affect the motility of the cell and cell signaling between cells.
Ciliogenesis types
Mo
Document 2:::
In cell biology, the cleavage furrow is the indentation of the cell's surface that begins the progression of cleavage, by which animal and some algal cells undergo cytokinesis, the final splitting of the membrane, in the process of cell division. The same proteins responsible for muscle contraction, actin and myosin, begin the process of forming the cleavage furrow, creating an actomyosin ring. Other cytoskeletal proteins and actin binding proteins are involved in the procedure.
Mechanism
Plant cells do not perform cytokinesis through this exact method, but the two procedures are not totally different. Animal cells form an actin-myosin contractile ring within the equatorial region of the cell membrane that constricts to form the cleavage furrow. In plant cells, Golgi vesicle secretions form a cell plate or septum on the equatorial plane of the cell wall by the action of microtubules of the phragmoplast. The cleavage furrow in animal cells and the phragmoplast in plant cells are complex structures made up of microtubules and microfilaments that aid in the final separation of the cells into two identical daughter cells.
Cell cycle
The cell cycle begins with interphase when the DNA replicates, the cell grows and prepares to enter mitosis. Mitosis includes four phases: prophase, metaphase, anaphase, and telophase. Prophase is the initial phase when spindle fibers appear that function to move the chromosomes toward opposite poles. This spindle apparatus consists of microtubules, microfilaments and a complex network of various proteins. During metaphase, the chromosomes line up using the spindle apparatus in the middle of the cell along the equatorial plate. The chromosomes move to opposite poles during anaphase and remain attached to the spindle fibers by their centromeres. Animal cell cleavage furrow formation is caused by a ring of actin microfilaments called the contractile ring, which forms during early anaphase. Myosin is present in the region of the contracti
Document 3:::
A protonema (plural: protonemata) is a thread-like chain of cells that forms the earliest stage of development of the gametophyte (the haploid phase) in the life cycle of mosses. When a moss first grows from a spore, it starts as a germ tube, which lengthens and branches into a filamentous complex known as a protonema, which develops into a leafy gametophore, the adult form of a gametophyte in bryophytes.
Moss spores germinate to form an alga-like filamentous structure called the protonema. It represents the juvenile gametophyte. While the protonema is growing by apical cell division, at some stage, under the influence of the phytohormone cytokinin, buds are induced which grow by three-faced apical cells. These give rise to gametophores, stems and leaf like structures. Bryophytes do not have true leaves (megaphylls). Protonemata are characteristic of all mosses and some liverworts but are absent from hornworts.
Protonemata of mosses are composed of two cell types: chloronemata, which form upon germination, and caulonemata, which later differentiate from chloronemata and on which buds are formed, which then differentiate to gametophores.
Document 4:::
In biology, solenocytes are elongated, flagellated cells commonly found in lower invertebrates, such as flatworms (phylum Platyhelminthes), as well as in chordates (sub-phylum Cephalochordata) and several other animal species. In terms of function, solenocytes play a significant role in the excretory systems of their host organism(s). For example, the lancelets, also referred to as amphioxus (genus Branchiostoma), utilize solenocytic protonephridia to perform excretion. In addition to excretion, these cells contribute to ion regulation and osmoregulation. With this in mind, solenocytes form subtypes of protonephridium and are often compared to another specialized excretory cell type, i.e., flame cells. Solenocytes have flagella, while flame cells are generally ciliated.
Cellular structure and configuration
Solenocytes are mesoderm-derived and morphologically diverse cells containing a cytoplasmic cap or enclosed cell body with a nucleus residing in its core. A long tubule is attached to the cell body, and within its intracellular lumen lies either one or two long flagella. The continuously moving vibratile flagella extend from a protein structure, referred to as the basal body, found at the base of the flagellar structure. Extending through the length of the tubule, the flagella are able to protrude into the protonephridium lumen rather designedly (see Figure 1).
The tubule wall structure is composed of thin, pillar-like rods perforated by tiny openings. These pore spaces are likely the site of interstitial fluid filtration.
A nephridium contains approximately 500 solenocytes, each of which is roughly 50 microns in length (this measure includes the nucleated cell body and tubule). The excretory organ of Amphioxus (genus Branchiostoma) belcheri contains clusters of solenocytes (the majority of which are situated along the ligamentum denticulatum coelomic surface). These clusters are composed at patterned intervals, generating groups amongst the renal tubules of
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What process is the primary function of the branching internal tubules called protonephridia?
A. thermoregulation
B. osmoregulation
C. enculturation
D. calcification
Answer:
|
|
sciq-7005
|
multiple_choice
|
Combining nonpolar olive oil and polar vinegar yields what type of mixture?
|
[
"omnigeneous",
"amorphous",
"homogeneous",
"heterogeneous"
] |
D
|
Relavent Documents:
Document 0:::
A simple lipid is a fatty acid ester of different alcohols and carries no other substance. These lipids belong to a heterogeneous class of predominantly nonpolar compounds, mostly insoluble in water, but soluble in nonpolar organic solvents such as chloroform and benzene.
Simple lipids: esters of fatty acids with various alcohols.
a. Fats: esters of fatty acids with glycerol. Oils are fats in the liquid state. Fats are also called triglycerides because all the three hydroxyl groups of glycerol are esterified.
b. Waxes: Solid esters of long-chain fatty acids such as palmitic acid with aliphatic or alicyclic higher molecular weight monohydric alcohols. Waxes are water-insoluble due to the weakly polar nature of the ester group.
See also
Lipid
Document 1:::
A heteroazeotrope is an azeotrope where the vapour phase coexists with two liquid phases.
(Figure: sketch of a T-x/y equilibrium curve of a typical heteroazeotropic mixture.)
Examples of heteroazeotropes
Benzene - Water NBP 69.2 °C
Dichloromethane - Water NBP 38.5 °C
n-Butanol - Water NBP 93.5 °C
Toluene - Water NBP 82 °C
Continuous heteroazeotropic distillation
Heterogeneous distillation means that during the distillation the mixture splits into two immiscible liquid phases.
In this case two liquid phases can be present on the plates, and the overhead vapour condensate splits into two liquid phases, which can be separated in a decanter.
The simplest case of continuous heteroazeotropic distillation is the separation of a binary heterogeneous azeotropic mixture. In this case the system contains two columns and a decanter. The fresh feed (A-B) is added to the first column. (The feed may also be added directly to the decanter or to the second column, depending on the composition of the mixture.) From the decanter the A-rich phase is returned as reflux to the first column, while the B-rich phase is returned as reflux to the second column. This means the first column produces "A" and the second column produces "B" as a bottoms product. In industry the butanol-water mixture is separated with this technique.
In the previous case the binary system already forms a heterogeneous azeotrope. The other application of heteroazeotropic distillation is the separation of a binary system (A-B) that forms a homogeneous azeotrope. In this case an entrainer or solvent is added to the mixture to form a heteroazeotrope with one or both of the components, thereby helping the separation of the original A-B mixture.
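The decanter split described above can be illustrated with a simple lever-rule mass balance. The following Python sketch uses invented n-butanol/water compositions purely for illustration; decanter_split is a hypothetical helper, not part of any standard library:

def decanter_split(z, x_a_rich, x_b_rich):
    """Lever-rule mass balance for a two-liquid-phase decanter.
    z: overall mole fraction of A in the condensate
    x_a_rich: mole fraction of A in the A-rich liquid phase
    x_b_rich: mole fraction of A in the B-rich liquid phase
    Returns the fraction of the condensate reporting to the A-rich phase."""
    return (z - x_b_rich) / (x_a_rich - x_b_rich)

# Invented n-butanol (A) / water (B) figures, for illustration only.
frac = decanter_split(z=0.25, x_a_rich=0.57, x_b_rich=0.02)
print(f"{frac:.1%} of the condensate refluxes to the first (butanol) column")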
Batch heteroazeotropic distillation
Batch heteroazeotropic distillation is an efficient method for the separation of azeotropic and low relative volatility (low α) mixtures. A third component (entrainer, E) is added to the binary A-B mixture, which makes the separation of A and B poss
Document 2:::
In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution.
The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible").
The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first.
The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy.
Under certain conditions, the concentration of the solute can exceed its usual solubility limit. The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears.
The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property de
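A minimal numerical illustration of these saturation states, as a Python sketch (the function and the solubility figure are invented for the example):

def classify_solution(concentration, solubility_limit):
    """Compare a solute concentration (g per 100 mL) with its solubility limit."""
    if concentration < solubility_limit:
        return "unsaturated: more solute can still dissolve"
    if concentration == solubility_limit:
        return "saturated: at solubility equilibrium"
    return "supersaturated: metastable; excess solute precipitates on nucleation"

# Invented figure for a generic salt; real limits depend on temperature and pressure.
print(classify_solution(concentration=40.0, solubility_limit=36.0))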
Document 3:::
In chemistry, a mixture is a material made up of two or more different chemical substances which are not chemically bonded. A mixture is the physical combination of two or more substances in which the identities are retained and are mixed in the form of solutions, suspensions and colloids.
Mixtures are one product of mechanically blending or mixing chemical substances such as elements and compounds, without chemical bonding or other chemical change, so that each ingredient substance retains its own chemical properties and makeup. Despite the fact that there are no chemical changes to its constituents, the physical properties of a mixture, such as its melting point, may differ from those of the components. Some mixtures can be separated into their components by using physical (mechanical or thermal) means. Azeotropes are one kind of mixture that usually poses considerable difficulties regarding the separation processes required to obtain their constituents (physical or chemical processes or, even a blend of them).
Characteristics of mixtures
All mixtures can be characterized as being separable by mechanical means (e.g. purification, distillation, electrolysis, chromatography, heat, filtration, gravitational sorting, centrifugation). Mixtures differ from chemical compounds in the following ways:
the substances in a mixture can be separated using physical methods such as filtration, freezing, and distillation.
there is little or no energy change when a mixture forms (see Enthalpy of mixing).
the substances in a mixture keep their separate properties.
In the example of sand and water, neither of the two substances changes in any way when they are mixed. Although the sand is in the water, it still keeps the same properties that it had when it was outside the water.
mixtures have variable compositions, while compounds have a fixed, definite formula.
when mixed, individual substances keep their properties in a mixture, while if they form a compound their properties
Document 4:::
Binary liquid is a type of chemical combination, which creates a special reaction or feature as a result of mixing two liquid chemicals, that are normally inert or have no function by themselves. A number of chemical products are produced as a result of mixing two chemicals as a binary liquid, such as plastic foams and some explosives.
See also
Binary chemical weapon
Thermophoresis
Percus-Yevick equation
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Combining nonpolar olive oil and polar vinegar yields what type of mixture?
A. omnigeneous
B. amorphous
C. homogeneous
D. heterogeneous
Answer:
|
|
sciq-603
|
multiple_choice
|
What tissue do clubmosses have that mosses do not?
|
[
"vascular tissue",
"dioxide tissue",
"nuclei",
"chorophyll"
] |
A
|
Relavent Documents:
Document 0:::
Vascular plants (), also called tracheophytes () or collectively Tracheophyta (), form a large group of land plants ( accepted known species) that have lignified tissues (the xylem) for conducting water and minerals throughout the plant. They also have a specialized non-lignified tissue (the phloem) to conduct products of photosynthesis. Vascular plants include the clubmosses, horsetails, ferns, gymnosperms (including conifers), and angiosperms (flowering plants). Scientific names for the group include Tracheophyta, Tracheobionta and Equisetopsida sensu lato. Some early land plants (the rhyniophytes) had less developed vascular tissue; the term eutracheophyte has been used for all other vascular plants, including all living ones.
Historically, vascular plants were known as "higher plants", as it was believed that they were further evolved than other plants due to being more complex organisms. However, this is an antiquated remnant of the obsolete scala naturae, and the term is generally considered to be unscientific.
Characteristics
Botanists define vascular plants by three primary characteristics:
Vascular plants have vascular tissues which distribute resources through the plant. Two kinds of vascular tissue occur in plants: xylem and phloem. Phloem and xylem are closely associated with one another and are typically located immediately adjacent to each other in the plant. The combination of one xylem and one phloem strand adjacent to each other is known as a vascular bundle. The evolution of vascular tissue in plants allowed them to evolve to larger sizes than non-vascular plants, which lack these specialized conducting tissues and are thereby restricted to relatively small sizes.
In vascular plants, the principal generation or phase is the sporophyte, which produces spores and is diploid (having two sets of chromosomes per cell). (By contrast, the principal generation phase in non-vascular plants is the gametophyte, which produces gametes and is haploid - with
Document 1:::
Tree moss is a common name for several organisms and may refer to:
Climacium, a genus of mosses which resemble miniature trees
Climacium dendroides, a common species of Climacium
Evernia, a genus of lichens which grow on trees
Usnea, a genus of lichens which grow on trees
Document 2:::
The International Moss Stock Center (IMSC) is a biorepository which is specialized in collecting, preserving and distributing moss plants of a high value of scientific research. The IMSC is located at the Faculty of Biology, Department of Plant Biotechnology, at the Albert-Ludwigs-University of Freiburg, Germany.
Moss collection
The moss collection of the IMSC currently includes various ecotypes of Physcomitrella patens, Physcomitrium and Funaria as well as several transgenic and mutant lines of Physcomitrella patens, including knockout mosses.
Storage conditions
The long-term storage of moss samples in the IMSC is carried out via cryopreservation in the gas phase of liquid nitrogen at temperatures below −135 °C in special freezer containers.
It has been shown for Physcomitrella patens that the regeneration rate after cryopreservation is 100%.
Trackable accession numbers, which may be used for citation purposes in publications, are automatically assigned to all samples.
Financial support
The IMSC is supported financially by the Chair Plant Biotechnology of Prof. Ralf Reski and the Centre for Biological Signalling Studies (bioss).
Document 3:::
Occurrence
40% of mosses are monoicous.
Bryophyte sexuality
Bryophytes have life cycles that are gametophyte dominated. The longer lived, more prominent autotrophic plant is the gametophyte. The sporophyte in mosses and liverworts consists of an unbranched stalk (a seta) bearing a single
Document 4:::
There are at least 23 species of clubmosses and 153 species of mosses found in the state of Montana in the United States. The Montana Natural Heritage Program has identified a number of clubmoss and moss species as species of concern.
Clubmosses
Clubmosses
Class: Lycopodiopsida
Order: Lycopodiales Family: Lycopodiaceae
Alpine clubmoss, Diphasiastrum alpinum
Chinese clubmoss, Huperzia chinensis
Common clubmoss, Lycopodium clavatum
Northern bog clubmoss, Lycopodiella inundata
One-cone clubmoss, Lycopodium lagopus
Pacific clubmoss, Huperzia haleakalae
Sitka clubmoss, Diphasiastrum sitchense
Stiff clubmoss, Spinulum annotinum
Trailing clubmoss, Diphasiastrum complanatum
Tree groundpine, Dendrolycopodium dendroideum
Western clubmoss, Huperzia occidentalis
Quillworts
Class: Isoetopsida
Order: Isoetales, Family: Isoetaceae
Bolander's quillwort, Isoetes bolanderi
Howell's quillwort, Isoetes howellii
Nuttall's quillwort, Isoetes nuttallii
Spiny-spored quillwort, Isoetes echinospora
Western quillwort, Isoetes occidentalis
Spike-mosses
Class: Isoetopsida
Order: Selaginellales, Family: Selaginellaceae
Lesser spikemoss, Selaginella densa
Low spikemoss, Selaginella selaginoides
Wallace's spikemoss, Selaginella wallacei
Watson's spikemoss, Selaginella watsonii
Mosses
Granite mosses
Class: Andreaeopsida Order: Andreaeales, Family: Andreaeaceae
Blytt's andreaea moss, Andreaea blyttii
Peat mosses
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What tissue do clubmosses have that mosses do not?
A. vascular tissue
B. dioxide tissue
C. nuclei
D. chorophyll
Answer:
|
|
sciq-6120
|
multiple_choice
|
Sutures, gomphoses, and syndesmoses are types of what, which are found where adjacent bones are strongly united by connective tissue?
|
[
"cartilage",
"ligaments",
"fibrous joints",
"metallic joints"
] |
C
|
Relavent Documents:
Document 0:::
Outline
h1.00: Cytology
h2.00: General histology
H2.00.01.0.00001: Stem cells
H2.00.02.0.00001: Epithelial tissue
H2.00.02.0.01001: Epithelial cell
H2.00.02.0.02001: Surface epithelium
H2.00.02.0.03001: Glandular epithelium
H2.00.03.0.00001: Connective and supportive tissues
H2.00.03.0.01001: Connective tissue cells
H2.00.03.0.02001: Extracellular matrix
H2.00.03.0.03001: Fibres of connective tissues
H2.00.03.1.00001: Connective tissue proper
H2.00.03.1.01001: Ligaments
H2.00.03.2.00001: Mucoid connective tissue; Gelatinous connective tissue
H2.00.03.3.00001: Reticular tissue
H2.00.03.4.00001: Adipose tissue
H2.00.03.5.00001: Cartilage tissue
H2.00.03.6.00001: Chondroid tissue
H2.00.03.7.00001: Bone tissue; Osseous tissue
H2.00.04.0.00001: Haemotolymphoid complex
H2.00.04.1.00001: Blood cells
H2.00.04.1.01001: Erythrocyte; Red blood cell
H2.00.04.1.02001: Leucocyte; White blood cell
H2.00.04.1.03001: Platelet; Thrombocyte
H2.00.04.2.00001: Plasma
H2.00.04.3.00001: Blood cell production
H2.00.04.4.00001: Postnatal sites of haematopoiesis
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
Document 1:::
The collateral ligaments of metatarsophalangeal joints are strong, rounded cords, placed one on either side of each joint, and attached, by one end, to the posterior tubercle on the side of the head of the metatarsal bone, and, by the other, to the contiguous extremity of the phalanx.
The place of dorsal ligaments is supplied by the extensor tendons on the dorsal surfaces of the joints.
Document 2:::
In human anatomy, the body of femur (or shaft of femur) is the almost cylindrical, long part of the femur. It is a little broader above than in the center, broadest and somewhat flattened from before backward below. It is slightly arched, so as to be convex in front, and concave behind, where it is strengthened by a prominent longitudinal ridge, the linea aspera.
It presents for examination three borders, separating three surfaces.
Of the borders, one, the linea aspera, is posterior, one is medial, and the other, lateral.
Borders
The borders of the femur are the linea aspera, a medial border, and a lateral border.
Linea aspera border
The linea aspera is a prominent longitudinal ridge or crest, on the middle third of the bone, presenting a medial and a lateral lip, and a narrow rough, intermediate line.
Above, the linea aspera is prolonged by three ridges.
The lateral ridge termed the gluteal tuberosity is very rough, and runs almost vertically upward to the base of the greater trochanter. It gives attachment to part of the gluteus maximus: its upper part is often elongated into a roughened crest, on which a more or less well-marked, rounded tubercle, the third trochanter, is occasionally developed.
The intermediate ridge or pectineal line is continued to the base of the lesser trochanter and gives attachment to the pectineus; the medial ridge is lost in the intertrochanteric line; between these two a portion of the iliacus is inserted.
Below, the linea aspera is prolonged into two ridges, enclosing between them a triangular area, the popliteal surface, upon which the popliteal artery rests.
Of these two ridges, the lateral is the more prominent, and descends to the summit of the lateral condyle.
The medial is less marked, especially at its upper part, where it is crossed by the femoral artery.
It ends below at the summit of the medial condyle, in a small tubercle, the adductor tubercle, which affords insertion to the tendon of the adductor magnus.
From t
Document 3:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
Document 4:::
Dense regular connective tissue (DRCT) provides connection between different tissues in the human body. The collagen fibers in dense regular connective tissue are bundled in a parallel fashion. DRCT is divided into white fibrous connective tissue and yellow fibrous connective tissue, both of which occur in two forms: cord arrangement and sheath arrangement.
In cord arrangement, bundles of collagen and matrix are distributed in regular alternate patterns. In sheath arrangement, collagen bundles and matrix are distributed in irregular patterns, sometimes in the form of a network. It is similar to areolar tissue, but in DRCT elastic fibers are completely absent.
Structures formed
An example of their use is in tendons, which connect muscle to bone and derive their strength from the regular, longitudinal arrangement of bundles of collagen fibers.
Ligaments bind bone to bone and are similar in structure to tendons.
Aponeuroses are layers of flat, broad tendons that join muscles and the body parts the muscles act upon, whether it be bone or muscle.
Functions
Dense regular connective tissue has great tensile strength that resists pulling forces especially well in one direction.
DRCT has a very poor blood supply, which is why damaged tendons and ligaments are slow to heal.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Sutures, gomphoses, and syndesmoses are types of what, which are found where adjacent bones are strongly united by connective tissue?
A. cartilage
B. ligaments
C. fibrous joints
D. metallic joints
Answer:
|
|
sciq-5056
|
multiple_choice
|
A magnet can exert force on objects without touching them, as long as they are within what?
|
[
"gravitational field",
"magnetic field",
"molecular field",
"audio field"
] |
B
|
Relavent Documents:
Document 0:::
In electromagnetism, the magnetic moment is the magnetic strength and orientation of a magnet or other object that produces a magnetic field, expressed as a vector. Examples of objects that have magnetic moments include loops of electric current (such as electromagnets), permanent magnets, elementary particles (such as electrons), composite particles (such as protons and neutrons), various molecules, and many astronomical objects (such as many planets, some moons, stars, etc.).
More precisely, the term magnetic moment normally refers to a system's magnetic dipole moment, the component of the magnetic moment that can be represented by an equivalent magnetic dipole: a magnetic north and south pole separated by a very small distance. The magnetic dipole component is sufficient for small enough magnets or for large enough distances. Higher-order terms (such as the magnetic quadrupole moment) may be needed in addition to the dipole moment for extended objects.
The magnetic dipole moment of an object determines the magnitude of torque that the object experiences in a given magnetic field. Objects with larger magnetic moments experience larger torques when the same magnetic field is applied. The strength (and direction) of this torque depends not only on the magnitude of the magnetic moment but also on its orientation relative to the direction of the magnetic field. The magnetic moment may therefore be considered to be a vector. The direction of the magnetic moment points from the south to north pole of the magnet (inside the magnet).
The magnetic field of a magnetic dipole is proportional to its magnetic dipole moment. The dipole component of an object's magnetic field is symmetric about the direction of its magnetic dipole moment, and decreases as the inverse cube of the distance from the object.
Definition, units, and measurement
Definition
The magnetic moment can be defined as a vector relating the aligning torque on the object from an externally applied magnetic
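The aligning torque mentioned here is the cross product τ = m × B. A minimal Python sketch with illustrative values:

import numpy as np

m = np.array([0.0, 0.0, 1.5])   # magnetic dipole moment, A*m^2, pointing along +z
B = np.array([0.2, 0.0, 0.0])   # applied magnetic field, tesla, pointing along +x

tau = np.cross(m, B)             # torque tau = m x B, in newton-metres
print(tau)                       # [0.  0.3 0. ] -- the torque tends to align m with B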
Document 1:::
Biomagnetism is the phenomenon of magnetic fields produced by living organisms; it is a subset of bioelectromagnetism. In contrast, organisms' use of magnetism in navigation is magnetoception and the study of the magnetic fields' effects on organisms is magnetobiology. (The word biomagnetism has also been used loosely to include magnetobiology, further encompassing almost any combination of the words magnetism, cosmology, and biology, such as "magnetoastrobiology".)
The origin of the word biomagnetism is unclear, but seems to have appeared several hundred years ago, linked to the expression "animal magnetism". The present scientific definition took form in the 1970s, when an increasing number of researchers began to measure the magnetic fields produced by the human body. The first valid measurement was actually made in 1963, but the field of research began to expand only after a low-noise technique was developed in 1970. Today the community of biomagnetic researchers does not have a formal organization, but international conferences are held every two years, with about 600 attendees. Most conference activity centers on the MEG (magnetoencephalogram), the measurement of the magnetic field of the brain.
Prominent researchers
David Cohen
John Wikswo
Samuel Williamson
See also
Bioelectrochemistry
Human magnetism
Magnetite
Magnetocardiography
Magnetoception - sensing of magnetic fields by organisms
Magnetoelectrochemistry
Magnetoencephalography
Magnetogastrography
Magnetomyography
SQUID
Document 2:::
Magnetobiology is the study of the biological effects of mainly weak static and low-frequency magnetic fields, which do not cause heating of tissues. Magnetobiological effects have unique features that clearly distinguish them from thermal effects; often they are observed for alternating magnetic fields only within particular frequency and amplitude ranges. They also depend on simultaneously present static magnetic or electric fields and on their polarization.
Magnetobiology is a subset of bioelectromagnetics. Bioelectromagnetism and biomagnetism are the study of the production of electromagnetic and magnetic fields by biological organisms. The sensing of magnetic fields by organisms is known as magnetoreception.
The biological effects of weak low-frequency magnetic fields (less than about 0.1 millitesla, or 1 gauss, and below about 100 Hz, respectively) constitute a physics problem. The effects look paradoxical, because the energy quantum of these electromagnetic fields is many orders of magnitude smaller than the energy scale of an elementary chemical act. On the other hand, the field intensity is not enough to cause any appreciable heating of biological tissues or to irritate nerves through the induced electric currents.
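The scale mismatch in this argument is easy to check numerically. A minimal Python sketch comparing the energy quantum hf of a 100 Hz field with the room-temperature thermal energy kT (which is itself far below typical chemical bond energies):

H = 6.626e-34    # Planck constant, J*s
KB = 1.381e-23   # Boltzmann constant, J/K

f = 100.0                       # field frequency, Hz
photon_energy = H * f           # energy quantum of a 100 Hz field, joules
thermal_energy = KB * 300.0     # thermal energy scale at room temperature, joules

print(f"hf = {photon_energy:.2e} J, kT = {thermal_energy:.2e} J")
print(f"kT / hf ~ {thermal_energy / photon_energy:.1e}")   # about 6e10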
Effects
An example of a magnetobiological effect is the magnetic navigation of migratory animals by means of magnetoreception.
Many animal orders, such as certain birds, marine turtles, reptiles, amphibians and salmonid fishes, are able to detect small variations of the geomagnetic field and its magnetic inclination to find their seasonal habitats. They are said to use an "inclination compass". Certain crustaceans, spiny lobsters, bony fish, insects and mammals have been found to use a "polarity compass", whereas in snails and cartilaginous fish the type of compass is as yet unknown. Little is known about other vertebrates and arthropods. Their perception can be on the order of tens of nanoteslas.
Magnetic intensity as a component of the navigational ‘map’ of pigeons
Document 3:::
A magnetometer is a device that measures magnetic field or magnetic dipole moment. Different types of magnetometers measure the direction, strength, or relative change of a magnetic field at a particular location. A compass is one such device, one that measures the direction of an ambient magnetic field, in this case, the Earth's magnetic field. Other magnetometers measure the magnetic dipole moment of a magnetic material such as a ferromagnet, for example by recording the effect of this magnetic dipole on the induced current in a coil.
The first magnetometer capable of measuring the absolute magnetic intensity at a point in space was invented by Carl Friedrich Gauss in 1833 and notable developments in the 19th century included the Hall effect, which is still widely used.
Magnetometers are widely used for measuring the Earth's magnetic field, in geophysical surveys, to detect magnetic anomalies of various types, and to determine the dipole moment of magnetic materials. In an aircraft's attitude and heading reference system, they are commonly used as a heading reference. Magnetometers are also used by the military as a triggering mechanism in magnetic mines to detect submarines. Consequently, some countries, such as the United States, Canada and Australia, classify the more sensitive magnetometers as military technology, and control their distribution.
Magnetometers can be used as metal detectors: they can detect only magnetic (ferrous) metals, but can detect such metals at a much greater distance than conventional metal detectors, which rely on conductivity. Magnetometers are capable of detecting large objects, such as cars, at over , while a conventional metal detector's range is rarely more than .
In recent years, magnetometers have been miniaturized to the extent that they can be incorporated in integrated circuits at very low cost and are finding increasing use as miniaturized compasses (MEMS magnetic field sensor).
Introduction
Magnetic fields
Magnetic fi
Document 4:::
A magnetic circuit is made up of one or more closed loop paths containing a magnetic flux. The flux is usually generated by permanent magnets or electromagnets and confined to the path by magnetic cores consisting of ferromagnetic materials like iron, although there may be air gaps or other materials in the path. Magnetic circuits are employed to efficiently channel magnetic fields in many devices such as electric motors, generators, transformers, relays, lifting electromagnets, SQUIDs, galvanometers, and magnetic recording heads.
The relation between magnetic flux, magnetomotive force, and magnetic reluctance in an unsaturated magnetic circuit can be described by Hopkinson's law, which bears a superficial resemblance to Ohm's law in electrical circuits, resulting in a one-to-one correspondence between properties of a magnetic circuit and an analogous electric circuit. Using this concept the magnetic fields of complex devices such as transformers can be quickly solved using the methods and techniques developed for electrical circuits.
Some examples of magnetic circuits are:
horseshoe magnet with iron keeper (low-reluctance circuit)
horseshoe magnet with no keeper (high-reluctance circuit)
electric motor (variable-reluctance circuit)
some types of pickup cartridge (variable-reluctance circuits)
Magnetomotive force (MMF)
Similar to the way that electromotive force (EMF) drives a current of electrical charge in electrical circuits, magnetomotive force (MMF) 'drives' magnetic flux through magnetic circuits. The term 'magnetomotive force', though, is a misnomer since it is not a force, nor is anything moving. It is perhaps better to call it simply MMF. In analogy to the definition of EMF, the magnetomotive force $\mathcal{F}$ around a closed loop is defined as:

$\mathcal{F} = \oint \vec{H} \cdot \mathrm{d}\vec{l}$
The MMF represents the potential that a hypothetical magnetic charge would gain by completing the loop. The magnetic flux that is driven is not a current of magnetic charge; it merely has the same relationshi
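Because Hopkinson's law mirrors Ohm's law, simple magnetic circuits can be solved the same way. A minimal Python sketch for a single-loop core, with invented dimensions and coil values:

from math import pi

MU0 = 4 * pi * 1e-7                      # permeability of free space, H/m

def reluctance(length, area, mu_r):
    """Magnetic reluctance R = l / (mu0 * mu_r * A), the analogue of resistance."""
    return length / (MU0 * mu_r * area)

# Invented iron core: 0.3 m mean path, 4 cm^2 cross-section, relative permeability 2000.
R = reluctance(length=0.3, area=4e-4, mu_r=2000.0)
mmf = 500 * 0.2                          # 500-turn coil at 0.2 A: F = N * I ampere-turns
flux = mmf / R                           # Hopkinson's law: Phi = F / R, in webers
print(f"flux = {flux:.2e} Wb")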
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A magnet can exert force on objects without touching them, as long as they are within what?
A. gravitational field
B. magnetic field
C. molecular field
D. audio field
Answer:
|
|
sciq-6110
|
multiple_choice
|
What occurs when there is a sudden and large falling of rocks down a slope?
|
[
"avalanche",
"earthquake",
"landslide",
"tsunami"
] |
C
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Tollmann's bolide hypothesis is a hypothesis presented by Austrian palaeontologist Edith Kristan-Tollmann and geologist Alexander Tollmann in 1994. The hypothesis postulates that one or several bolides (asteroids or comets) struck the Earth around 7640 ± 200 years BCE, and a much smaller one approximately 3150 ± 200 BCE. The hypothesis tries to explain early Holocene extinctions and possibly legends of the Universal Deluge.
The claimed evidence for the event includes stratigraphic studies of tektites, dendrochronology, and ice cores (from Camp Century, Greenland) containing hydrochloric acid and sulfuric acid (indicating an energetic ocean strike) as well as nitric acids (caused by extreme heating of air).
Christopher Knight and Robert Lomas in their book, Uriel's Machine, argue that the 7640 BCE evidence is consistent with the dates of formation of a number of extant salt flats and lakes in dry areas of North America and Asia. They argue that these lakes are the remains of multiple-kilometer-high waves that penetrated deeply into continents as the result of oceanic strikes that they proposed occurred. Research by Quaternary geologists, palynologists, and others has been unable to confirm the validity of the hypothesis and proposes more frequently occurring geological processes for some of the data used for the hypothesis. The dating of ice cores and Australasian tektites has shown long time span differences between the proposed impact times and the impact ejecta products.
Scientific evaluation
Quaternary geologists, paleoclimatologists, and planetary geologists specialising in meteorite and comet impacts have rejected Tollmann's bolide hypothesis. They reject this hypothesis because:
The evidence offered to support the hypothesis can more readily be explained by more mundane and less dramatic geologic processes
Many of the events alleged to be associated with this impact occurred at the wrong time (i.e., many of the events occurred hundreds to thousands of y
Document 2:::
Sand boils or sand volcanoes occur when water under pressure wells up through a bed of sand. The water looks like it is boiling up from the bed of sand, hence the name.
Sand volcano
A sand volcano or sand blow is a cone of sand formed by the ejection of sand onto a surface from a central point. The sand builds up as a cone with slopes at the sand's angle of repose. A crater is commonly seen at the summit. The cone looks like a small volcanic cone and can range in size from millimetres to metres in diameter.
The process is often associated with soil liquefaction and the ejection of fluidized sand that can occur in water-saturated sediments during an earthquake. The New Madrid Seismic Zone exhibited many such features during the 1811–12 New Madrid earthquakes. Adjacent sand blows aligned in a row along a linear fracture within fine-grained surface sediments are just as common, and can still be seen in the New Madrid area.
In the past few years, much effort has gone into the mapping of liquefaction features to study ancient earthquakes. The basic idea is to map zones that are susceptible to the process and then go in for a closer look. The presence or absence of soil liquefaction features is strong evidence of past earthquake activity, or lack thereof.
These are to be contrasted with mud volcanoes, which occur in areas of geyser or subsurface gas venting.
Flood protection structures
Sand boils can be a mechanism contributing to liquefaction and levee failure during floods. This effect is caused by a difference in pressure on two sides of a levee or dike, most likely during a flood. This process can result in internal erosion, whereby the removal of soil particles results in a pipe through the embankment. The creation of the pipe will quickly pick up pace and will eventually result in failure of the embankment.
A sand boil is difficult to stop. The most effective method is to create a body of water above the boil, producing enough pressure to slow the flow of
Document 3:::
Newmark's sliding block analysis method is an engineering method that calculates permanent displacements of soil slopes (also embankments and dams) during seismic loading. Newmark analysis does not calculate the actual displacement; rather, it yields an index value that can be used to indicate the structure's likelihood of failure during a seismic event. It is also simply called Newmark's analysis or the sliding block method of slope stability analysis.
History
The method is an extension of Newmark's direct integration method originally proposed by Nathan M. Newmark in 1943. It was applied to the sliding block problem in a lecture delivered by him in 1965 in the British Geotechnical Association's 5th Rankine Lecture in London and published later in the Association's scientific journal Geotechnique. The extension owes a great deal to Nicholas Ambraseys, whose doctoral thesis on the seismic stability of earth dams at Imperial College London in 1958 formed the basis of the method. At his Rankine Lecture, Newmark himself acknowledged Ambraseys' contribution to this method through various discussions between the two researchers while the latter was a visiting professor at the University of Illinois.
Method
According to Kramer, the Newmark method is an improvement over the traditional pseudo-static method, which considered seismic slope failure only at the limiting condition (i.e. when the factor of safety, FOS, becomes equal to 1), providing information about the collapse state but none about the induced deformations. The new method points out that when the FOS becomes less than 1, "failure" does not necessarily occur, as the time for which this happens is very short. However, each time the FOS falls below unity, some permanent deformations occur, which accumulate whenever FOS < 1. The method further suggests that a failing mass from the slope may be considered as a block of mass sliding (and therefore a sliding block) on an inclined surface only when the i
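This accumulation of displacement during the intervals when the yield level is exceeded is what the sliding-block integration captures. Below is a minimal Python sketch of the rigid-block idea, assuming a synthetic half-sine ground-motion pulse and an invented yield acceleration; it illustrates the principle, not the full method:

import numpy as np

def newmark_displacement(accel, dt, a_yield):
    """Rigid sliding-block integration: the block gains velocity relative to the
    slope while ground acceleration exceeds the yield acceleration, then
    decelerates and stops; displacement accumulates over each sliding episode."""
    vel, disp = 0.0, 0.0
    for a in accel:
        if a > a_yield or vel > 0.0:
            vel = max(vel + (a - a_yield) * dt, 0.0)  # no backward sliding
            disp += vel * dt
    return disp

# Invented half-sine ground-motion pulse (0.4 g peak) against a 0.15 g yield level.
g, dt = 9.81, 0.005
t = np.arange(0.0, 0.5, dt)
accel = (0.4 * g * np.sin(2 * np.pi * t / 0.5)).clip(min=0.0)
d = newmark_displacement(accel, dt, a_yield=0.15 * g)
print(f"permanent displacement ~ {d:.3f} m")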
Document 4:::
The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata. That is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events.
Correlating the rock record
At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods for the particular geographic region or regions. The geologic record is in no one place entirely complete, for where the geologic forces of one age provide a low-lying region that accumulates deposits much like a layer cake, those of the next may have uplifted the region, and the same area is instead one that is weathering and being torn down by chemistry, wind, temperature, and water. This is to say that in a given location, the geologic record can be, and quite often is, interrupted as the ancient local environment was converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go deep, thoroughly support the law of superposition.
However using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What occurs when there is a sudden and large falling of rocks down a slope?
A. avalanche
B. earthquake
C. landslide
D. tsunami
Answer:
|
|
sciq-9995
|
multiple_choice
|
Foresters commonly inoculate pine seedlings with a type of what to promote growth?
|
[
"proteins",
"soil",
"yeast",
"fungi"
] |
D
|
Relavent Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the students' chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college careers at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 1:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
Wood science, commonly referred to as wood sciences, is a scientific discipline that predominantly investigates elements associated with the formation, composition and macro- and microstructure of wood. It additionally delves into the biological, chemical, physical, and mechanical properties and characteristics of wood, as a natural lignocellulosic material.
A deep understanding of wood plays a pivotal role in various endeavors, such as the processing of wood, the production of wood-based materials like particleboard, fiberboard, OSB, plywood and other materials, as well as the utilization of wood and wood-based materials in construction and a wide array of products, including pulpwood, furniture, engineered wood products such as glued laminated timber, CLT, LVL, PSL, as well as pellets, briquettes, and numerous other products.
History
Initial comprehensive investigations in the field of wood science emerged at the start of the 20th century. Contemporary wood research commenced in 1910, when the Forest Products Laboratory (FPL) was established in Madison, Wisconsin, USA. The Forest Products Laboratory played a fundamental role in wood science by providing scientific research on wood and wood products in partnership with academia, industry, local and other institutions in North and South America and worldwide.
In the following years, many wood research institutes came into existence across almost all industrialized nations. A general overview of these institutes and laboratories is shown below:
1913: Institute of Wood and Pulp Chemistry Eberswalde (today's Eberswalde University for Sustainable Development), Germany
1913: Forest Products Laboratory Montreal, Canada
1918: Forest Products Laboratory Vancouver, Canada
1919: Forest Products Laboratory Melbourne, Australia
1923: Forest Products Research Laboratory, Princes Risborough, Great Britain
1929: Institute for Wood Science and Technology, Leningrad (now St. Petersburg), USSR
1933: Centre Technique
Document 4:::
Xenohormesis is a hypothesis that posits that certain molecules, such as plant polyphenols, which indicate stress in the plant, can benefit another organism (a heterotroph) that consumes them. In simpler terms, xenohormesis is interspecies hormesis. The expected benefits include improved lifespan and fitness, achieved by activating the animal's cellular stress response.
This may be a useful trait to evolve, as it provides cues about the state of the environment. If the plants an animal is eating have increased polyphenol content, the plants are under stress, which may signal coming famine. Using these chemical cues, heterotrophs can preemptively prepare and defend themselves before conditions worsen. A possible example is resveratrol, famously found in red wine, which modulates over two dozen receptors and enzymes in mammals.
Xenohormesis could also explain several phenomena seen on the ethno-pharmaceutical (traditional medicine) side of things, such as in the case of cinnamon, which has been shown in several studies to help treat type 2 diabetes but has not been confirmed in meta-analyses. This may be because the cinnamon used in one study differed from that used in another in its xenohormetic properties.
There are several possible explanations for why this works. First, it could be a coincidence, especially in cases where partly toxic products cause a positive stress in the organism. Second, it could be a shared evolutionary attribute, as animals and plants share a huge amount of homology between their pathways. Third, there may be evolutionary pressure to evolve better responses to these molecules. The last explanation is proposed mainly by Howitz and his team.
There is also the possibility that our focus on maximizing crop output is losing many of the xenohormetic advantages. Although ideal conditions cause a plant to increase its crop output, it can also be argued that the plant is losing stress, and therefore the hormesis. The honeybee colony colla
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Foresters commonly inoculate pine seedlings with a type of what to promote growth?
A. proteins
B. soil
C. yeast
D. fungi
Answer:
|
|
sciq-427
|
multiple_choice
|
Felsic, intermediate, mafic, and ultramafic are types of composition of what rock group?
|
[
"Sedimentary",
"asteroids",
"igneous",
"metamorphic"
] |
C
|
Relavent Documents:
Document 0:::
In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects.
Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting.
Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete.
Study
Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the
Document 1:::
The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata. That is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events.
Correlating the rock record
At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods for the particular geographic region or regions. The geologic record is in no one place entirely complete, for where the geologic forces of one age provide a low-lying region that accumulates deposits much like a layer cake, those of the next may have uplifted the region, and the same area is instead one that is weathering and being torn down by chemistry, wind, temperature, and water. This is to say that in a given location, the geologic record can be, and quite often is, interrupted as the ancient local environment was converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go deep, thoroughly support the law of superposition.
However using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
Ringwoodite is a high-pressure phase of Mg2SiO4 (magnesium silicate) formed at the high temperatures and pressures of the Earth's mantle transition zone. It may also contain iron and hydrogen. It is polymorphous with forsterite, the magnesium end-member of olivine (a magnesium iron silicate).
Ringwoodite is notable for being able to contain hydroxide ions (oxygen and hydrogen atoms bound together) within its structure. In this case, two hydroxide ions usually take the place of a magnesium ion and two oxide ions; the substitution is charge-balanced, because the two protons that convert the two O2- ions into OH- supply the +2 charge lost with the magnesium ion.
Combined with evidence of its occurrence deep in the Earth's mantle, this suggests that there is from one to three times the world ocean's equivalent of water in the mantle transition zone from 410 to 660 km deep.
This mineral was first identified in the Tenham meteorite in 1969, and is inferred to be present in large quantities in the Earth's mantle.
Olivine, wadsleyite, and ringwoodite are polymorphs found in the upper mantle of the Earth. At depths greater than about 660 km, other minerals, including some with the perovskite structure, are stable. The properties of these minerals determine many of the properties of the mantle.
Ringwoodite was named after the Australian earth scientist Ted Ringwood (1930–1993), who studied polymorphic phase transitions in the common mantle minerals olivine and pyroxene at pressures equivalent to depths as great as about 600 km.
Characteristics
Ringwoodite is polymorphous with forsterite, Mg2SiO4, and has a spinel structure. Spinel group minerals crystallize in the isometric system with an octahedral habit. Olivine is most abundant in the upper mantle, above about 410 km; the olivine polymorphs wadsleyite and ringwoodite are thought to dominate the transition zone of the mantle, a zone present from about 410 to 660 km depth.
Ringwoodite is thought to be the most abundant mineral phase in the lower part of Earth's transition zone. The physical and chemical properties of this mineral partly determine the properties of the mantle at those depths. The pressure r
Document 4:::
Mineral tests are methods that help identify the mineral type. They are used widely in mineralogy, hydrocarbon exploration and general mapping. There are over 4,000 known minerals, each with different sub-classes. Elements make minerals and minerals make rocks, so testing minerals in the lab and in the field is essential to understanding the history of a rock, which yields data on zonation, metamorphic history, the processes involved and the other minerals present.
The following tests are used on hand specimens and on thin sections viewed through a polarizing microscope.
Color
The color of the mineral. This is not mineral-specific; for example, quartz can be almost any color or shape and occurs within many rock types.
Streak
The color of the mineral's powder, found by rubbing the mineral across a rough surface such as unglazed porcelain or concrete. This is more diagnostic than surface color but still not always mineral-specific.
Lustre
This is the way light reflects from the mineral's surface. A mineral can be metallic (shiny) or non-metallic (not shiny).
Transparency
The way light travels through the mineral. A mineral can be transparent (clear), translucent (cloudy) or opaque (transmitting no light).
Specific gravity
The ratio of the weight of the mineral to the weight of an equal volume of water; a worked example follows this list.
Mineral habit
The shape of the crystal and the manner in which it grows.
Magnetism
Magnetic or nonmagnetic. This can be tested by using a magnet or a compass. Not all iron-bearing minerals are magnetic (for example, pyrite is not).
Cleavage
The number, orientation and quality of the planes along which the mineral preferentially breaks.
UV fluorescence
Many minerals glow when put under a UV light.
Radioactivity
Is the mineral radioactive or non-radioactive? This is measured by a Geiger counter.
Taste
This is not recommended. Is the mineral salty, bitter or does it have no taste?
Bite Test
This is not recommended. It involves biting a mineral to see whether it is generally soft or hard. This was used in early gold exploration to tell the difference between pyrite (fool's gold, hard) and gold (soft).
Hardness
The Mohs Hardn
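A minimal sketch of the specific gravity calculation described in the list above; the function name and specimen numbers are illustrative assumptions, not from the source:

```python
def specific_gravity(mineral_weight, water_weight_equal_volume):
    """Specific gravity: weight of the mineral divided by the weight
    of an equal volume of water (dimensionless)."""
    return mineral_weight / water_weight_equal_volume

# Illustrative numbers: a specimen weighing 26.5 g whose equal volume
# of water weighs 10.0 g has specific gravity 2.65 (typical of quartz).
print(specific_gravity(26.5, 10.0))  # 2.65
```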
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Felsic, intermediate, mafic, and ultramafic are types of composition of what rock group?
A. Sedimentary
B. asteroids
C. igneous
D. metamorphic
Answer:
|
|
sciq-6947
|
multiple_choice
|
What kind of rock forms when material such as gravel, sand, silt or clay is compacted and cemented together?
|
[
"ingenious",
"limestone",
"craters",
"sedimentary"
] |
D
|
Relavent Documents:
Document 0:::
In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects.
Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting.
Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete.
Study
Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the
Document 1:::
The Géotechnique lecture is a biennial lecture on the topic of soil mechanics, organised by the British Geotechnical Association and named after its major scientific journal, Géotechnique.
This should not be confused with the annual BGA Rankine Lecture.
List of Géotechnique Lecturers
See also
Named lectures
Rankine Lecture
Terzaghi Lecture
External links
ICE Géotechnique journal
British Geotechnical Association
Document 2:::
The Q-slope method for rock slope engineering and rock mass classification is developed by Barton and Bar. It expresses the quality of the rock mass for slope stability using the Q-slope value, from which long-term stable, reinforcement-free slope angles can be derived.
The Q-slope value can be determined with:

Q-slope = (RQD / Jn) × (Jr / Ja)O × (Jwice / SRFslope)

where the subscript O denotes the orientation adjustment described below.
Q-slope utilizes similar parameters to the Q-system, which has been used for over 40 years in the design of ground support for tunnels and underground excavations. The first four parameters, RQD (rock quality designation), Jn (joint set number), Jr (joint roughness number) and Ja (joint alteration number), are the same as in the Q-system. However, the frictional resistance pair Jr and Ja can apply, when needed, to the individual sides of a potentially unstable wedge. Simply applied orientation factors (O), like (Jr/Ja)1 × 0.7 for set J1 and (Jr/Ja)2 × 0.9 for set J2, provide estimates of overall whole-wedge frictional resistance reduction, if appropriate. The Q-system term Jw is replaced with Jwice, which takes into account a wider range of environmental conditions appropriate to rock slopes that are exposed to the environment indefinitely. These conditions include the extremes of erosive intense rainfall and ice wedging, as may occur seasonally at opposite ends of the rock-type and regional spectrum. There are also slope-relevant SRF (strength reduction factor) categories.
Multiplication of these terms results in the Q-slope value, which can range between 0.001 (exceptionally poor) to 1000 (exceptionally good) for different rock masses.
A simple formula for the steepest slope angle (β), in degrees, not requiring reinforcement or support is given by:

β = 20 log10(Q-slope) + 65°
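To make the two relations above concrete, here is a minimal sketch in Python; the function names and the worked input values are illustrative assumptions, not taken from the Q-slope publications, and a single governing joint set is assumed:

```python
import math

def q_slope(rqd, jn, jr, ja, o_factor, jwice, srf):
    """Q-slope value: (RQD/Jn) * (Jr/Ja) * O-factor * (Jwice/SRF),
    shown here for one governing joint set."""
    return (rqd / jn) * (jr / ja) * o_factor * (jwice / srf)

def steepest_unsupported_angle(q):
    """Steepest long-term stable, reinforcement-free slope angle in
    degrees: beta = 20 * log10(Q-slope) + 65."""
    return 20.0 * math.log10(q) + 65.0

# Illustrative input values only:
q = q_slope(rqd=60, jn=9, jr=1.5, ja=2.0, o_factor=0.75, jwice=0.9, srf=1.0)
print(round(q, 2), round(steepest_unsupported_angle(q), 1))  # 3.38 75.6
```

Note how the result falls within the 35 to 90 degree range of the case studies mentioned below.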
Q-slope is intended for use in reinforcement-free site access road cuts, roads or railway cuttings, or individual benches in open cast mines. It is based on over 500 case studies in slopes ranging from 35 to 90 degrees in fresh hard rock slopes as well as weak, weathered and saprolitic rock slopes. Q-slope has also been a
Document 3:::
Quick clay, also known as Leda clay and Champlain Sea clay in Canada, is any of several distinctively sensitive glaciomarine clays found in Canada, Norway, Russia, Sweden, Finland, the United States and other locations around the world. The clay is so unstable that when a mass of quick clay is subjected to sufficient stress, its material behavior may drastically change from that of a particulate material to that of a watery fluid. Landslides occur because of sudden soil liquefaction triggered by external loads such as earthquake-induced vibrations or massive rainfall.
Quick clay main deposits
Quick clay is found only in regions close to the North Pole, such as Russia, Canada, Norway, Sweden and Finland, and in Alaska in the United States, since these areas were glaciated during the Pleistocene epoch. In Canada, the clay is associated primarily with the Pleistocene-era Champlain Sea in the modern Ottawa Valley, the St. Lawrence Valley, and the Saguenay River regions.
Quick clay has been the underlying cause of many deadly landslides. In Canada alone, it has been associated with more than 250 mapped landslides. Some of these are ancient, and may have been triggered by earthquakes.
Clay colloids stability
Quick clay has a remolded strength which is much less than its strength upon initial loading. This is caused by its highly unstable clay particle structure.
Quick clay is originally deposited in a marine environment. Clay mineral particles are always negatively charged because of the presence of permanent negative charges and pH-dependent charges at their surface. Because of the need to respect electro-neutrality and a net zero electrical charge balance, these negative electrical charges are always compensated by the positive charges borne by cations (such as Na+) adsorbed onto the surface of the clay, or present in the clay pore water. Exchangeable cations are present in the clay minerals interlayers and on the external basal planes of clay platelets. Ca
Document 4:::
Materials science has shaped the development of civilizations since the dawn of mankind. Better materials for tools and weapons have allowed mankind to spread and conquer, and advancements in material processing like steel and aluminum production continue to impact society today. Historians have regarded materials as such an important aspect of civilizations that entire periods of time have been defined by the predominant material used (Stone Age, Bronze Age, Iron Age). For most of recorded history, control of materials had been through alchemy or empirical means at best. The study and development of chemistry and physics assisted the study of materials, and eventually the interdisciplinary study of materials science emerged from the fusion of these studies. The history of materials science is the study of how different materials were used and developed through the history of Earth and how those materials affected the culture of the peoples of the Earth. The term "Silicon Age" is sometimes used to refer to the modern period of history during the late 20th to early 21st centuries.
Prehistory
In many cases, different cultures leave their materials as the only records, which anthropologists can use to define the existence of such cultures. The progressive use of more sophisticated materials allows archeologists to characterize and distinguish between peoples. This is partially due to the major material of use in a culture and to its associated benefits and drawbacks. Stone-Age cultures were limited by the rocks they could find locally and by those they could acquire by trading. The use of flint around 300,000 BCE is sometimes considered the beginning of the use of ceramics. The use of polished stone axes marks a significant advance, because a much wider variety of rocks could serve as tools.
The innovation of smelting and casting metals in the Bronze Age started to change the way that cultures developed and interacted with each other. Starting around 5,500 BCE,
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What kind of rock forms when material such as gravel, sand, silt or clay is compacted and cemented together?
A. ingenious
B. limestone
C. craters
D. sedimentary
Answer:
|
|
sciq-1545
|
multiple_choice
|
What type of drugs can increase water loss by interfering with the recapture of solutes and water from the forming urine?
|
[
"diuretics",
"hallucinogens",
"disassociates",
"sedatives"
] |
A
|
Relavent Documents:
Document 0:::
In pharmacology the elimination or excretion of a drug is understood to be any one of a number of processes by which a drug is eliminated (that is, cleared and excreted) from an organism either in an unaltered form (unbound molecules) or modified as a metabolite. The kidney is the main excretory organ although others exist such as the liver, the skin, the lungs or glandular structures, such as the salivary glands and the lacrimal glands. These organs or structures use specific routes to expel a drug from the body, these are termed elimination pathways:
Urine
Tears
Perspiration
Saliva
Respiration
Milk
Faeces
Bile
Drugs are excreted from the kidney by glomerular filtration and by active tubular secretion following the same steps and mechanisms as the products of intermediate metabolism. Therefore, drugs that are filtered by the glomerulus are also subject to the process of passive tubular reabsorption. Glomerular filtration will only remove those drugs or metabolites that are not bound to proteins present in blood plasma (the free fraction), and many other types of drugs (such as the organic acids) are actively secreted. In the proximal and distal convoluted tubules, non-ionised acids and weak bases are reabsorbed both actively and passively. Weak acids are excreted when the tubular fluid becomes too alkaline, which reduces their passive reabsorption. The opposite occurs with weak bases. Poisoning treatments exploit this effect to increase elimination: alkalizing the urine causes forced diuresis, which promotes excretion of a weak acid rather than its reabsorption. Because the acid is ionised, it cannot pass through the plasma membrane back into the bloodstream and is instead excreted with the urine. Acidifying the urine has the same effect for weakly basic drugs.
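As a rough quantitative sketch of the ion-trapping effect just described, the standard Henderson-Hasselbalch relation (not named in the text above) gives the ionised fraction of a weak acid as a function of urine pH; the pKa value below is purely illustrative:

```python
def ionised_fraction_weak_acid(pka, ph):
    """Fraction of a weak acid in its ionised (membrane-impermeant) form,
    from the relation pH = pKa + log10([A-]/[HA])."""
    ratio = 10 ** (ph - pka)   # [A-]/[HA]
    return ratio / (1 + ratio)

# Illustrative weak acid with pKa 3.5: at urine pH 5.5 about 99% is
# ionised; alkalizing the urine to pH 8.0 raises this to ~99.997%,
# cutting passive reabsorption and speeding excretion.
print(ionised_fraction_weak_acid(3.5, 5.5))  # ~0.990
print(ionised_fraction_weak_acid(3.5, 8.0))  # ~0.99997
```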
On other occasions drugs combine with bile juices and enter the intestines. In the intestines the drug will join with the unabsorbed fraction of the administered dose and be eliminated with the faeces
Document 1:::
Biological half-life (elimination half-life, pharmacological half-life) is the time taken for concentration of a biological substance (such as a medication) to decrease from its maximum concentration (Cmax) to half of Cmax in the blood plasma. It is denoted by the abbreviation .
This is used to measure the removal of substances such as metabolites, drugs, and signalling molecules from the body. Typically, the biological half-life refers to the body's natural detoxification (cleansing) via liver metabolism and via excretion of the measured substance through the kidneys and intestines. This concept is used when the rate of removal is roughly exponential.
In a medical context, half-life explicitly describes the time it takes for the blood plasma concentration of a substance to halve from its steady-state level (the plasma half-life) when circulating in the full blood of an organism. This measurement is useful in medicine, pharmacology and pharmacokinetics because it helps determine how much of a drug needs to be taken and how frequently it needs to be taken if a certain average amount is needed constantly. By contrast, the stability of a substance in plasma is described as plasma stability. This is essential to ensure accurate analysis of drugs in plasma and for drug discovery.
The relationship between the biological and plasma half-lives of a substance can be complex depending on the substance in question, due to factors including accumulation in tissues, protein binding, active metabolites, and receptor interactions.
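Since the text notes that the half-life concept applies when removal is roughly exponential, here is a minimal sketch of that relationship; the numbers are illustrative only:

```python
def remaining_fraction(t, half_life):
    """Fraction of the peak plasma concentration remaining after time t,
    assuming first-order (exponential) elimination."""
    return 0.5 ** (t / half_life)

# Illustrative: with a biological half-life of 10 days, one quarter of
# the original amount remains after 20 days (two half-lives).
print(remaining_fraction(20, 10))  # 0.25
```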
Examples
Water
The biological half-life of water in a human is about 7 to 14 days. It can be altered by behavior. Drinking large amounts of alcohol will reduce the biological half-life of water in the body. This has been used to decontaminate patients who are internally contaminated with tritiated water. The basis of this decontamination method is to increase the rate at which the water in the body is replaced with new water.
Alcohol
The removal of ethan
Document 2:::
In physiology, dehydration is a lack of total body water, with an accompanying disruption of metabolic processes. It occurs when free water loss exceeds free water intake, usually due to exercise, disease, or high environmental temperature. Mild dehydration can also be caused by immersion diuresis, which may increase risk of decompression sickness in divers.
Most people can tolerate a 3-4% decrease in total body water without difficulty or adverse health effects. A 5-8% decrease can cause fatigue and dizziness. Loss of over 10% of total body water can cause physical and mental deterioration, accompanied by severe thirst. Death occurs at a loss of between 15 and 25% of the body water. Mild dehydration is characterized by thirst and general discomfort and is usually resolved with oral rehydration.
Dehydration can cause hypernatremia (high levels of sodium ions in the blood) and is distinct from hypovolemia (loss of blood volume, particularly blood plasma).
Signs and symptoms
The hallmarks of dehydration include thirst and neurological changes such as headaches, general discomfort, loss of appetite, nausea, decreased urine volume (unless polyuria is the cause of dehydration), confusion, unexplained tiredness, purple fingernails, and seizures. The symptoms of dehydration become increasingly severe with greater total body water loss. A body water loss of 1-2%, considered mild dehydration, is shown to impair cognitive performance. While in people over age 50, the body's thirst sensation diminishes with age, a study found that there was no difference in fluid intake between young and old people. Many older people have symptoms of dehydration. Dehydration contributes to morbidity in the elderly population, especially during conditions that promote insensible free water losses, such as hot weather. A Cochrane review on this subject defined water-loss dehydration as "people with serum osmolality of 295 mOsm/kg or more" and found that the main symptom in the elderly (peop
Document 3:::
Urine is a liquid by-product of metabolism in humans and in many other animals. Urine flows from the kidneys through the ureters to the urinary bladder. Urination results in urine being excreted from the body through the urethra.
Cellular metabolism generates many by-products that are rich in nitrogen and must be cleared from the bloodstream, such as urea, uric acid, and creatinine. These by-products are expelled from the body during urination, which is the primary method for excreting water-soluble chemicals from the body. A urinalysis can detect nitrogenous wastes of the mammalian body.
Urine plays an important role in the earth's nitrogen cycle. In balanced ecosystems, urine fertilizes the soil and thus helps plants to grow. Therefore, urine can be used as a fertilizer. Some animals use it to mark their territories. Historically, aged or fermented urine (known as lant) was also used for gunpowder production, household cleaning, tanning of leather and dyeing of textiles.
Human urine and feces are collectively referred to as human waste or human excreta, and are managed via sanitation systems. Livestock urine and feces also require proper management if the livestock population density is high.
Physiology
Most animals have excretory systems for elimination of soluble toxic wastes. In humans, soluble wastes are excreted primarily by the urinary system and, to a lesser extent in terms of urea, removed by perspiration. The urinary system consists of the kidneys, ureters, urinary bladder, and urethra. The system produces urine by a process of filtration, reabsorption, and tubular secretion. The kidneys extract the soluble wastes from the bloodstream, as well as excess water, sugars, and a variety of other compounds. The resulting urine contains high concentrations of urea and other substances, including toxins. Urine flows from the kidneys through the ureter, bladder, and finally the urethra before passing from the body.
Duration
Research looking at the duration
Document 4:::
Drinking is the act of ingesting water or other liquids into the body through the mouth, proboscis, or elsewhere. Humans drink by swallowing, completed by peristalsis in the esophagus. The physiological processes of drinking vary widely among other animals.
Most animals drink water to maintain bodily hydration, although many can survive on the water gained from their food. Water is required for many physiological processes. Both inadequate and (less commonly) excessive water intake are associated with health problems.
Methods of drinking
In humans
When a liquid enters a human mouth, the swallowing process is completed by peristalsis which delivers the liquid through the esophagus to the stomach; much of the activity is abetted by gravity. The liquid may be poured from the hands or drinkware may be used as vessels. Drinking can also be performed by acts of inhalation, typically when imbibing hot liquids or drinking from a spoon. Infants employ a method of suction wherein the lips are pressed tight around a source, as in breastfeeding: a combination of breath and tongue movement creates a vacuum which draws in liquid.
In other land mammals
By necessity, terrestrial animals in captivity become accustomed to drinking water, but most free-roaming animals stay hydrated through the fluids and moisture in fresh food, and learn to actively seek foods with high fluid content. When conditions impel them to drink from bodies of water, the methods and motions differ greatly among species.
Cats, canines, and ruminants all lower the neck and lap in water with their powerful tongues. Cats and canines lap up water with the tongue in a spoon-like shape. Canines lap water by scooping it into their mouth with a tongue which has taken the shape of a ladle. However, with cats, only the tip of their tongue (which is smooth) touches the water, and then the cat quickly pulls its tongue back into its mouth which soon closes; this results in a column of liquid being pulled into the ca
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of drugs can increase water loss by interfering with the recapture of solutes and water from the forming urine?
A. diuretics
B. hallucinogens
C. disassociates
D. sedatives
Answer:
|
|
sciq-6954
|
multiple_choice
|
Heat and light are forms of what, which refers to the ability to do work?
|
[
"waves",
"fuel",
"energy",
"food"
] |
C
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to develop an interest in these subjects, leading them as secondary school pupils to choose science A levels, which can in turn lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it has received funding from the Department for Children, Schools and Families and the Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) between the two new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the students' chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 3:::
Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary educ
Document 4:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Heat and light are forms of what, which refers to the ability to do work?
A. waves
B. fuel
C. energy
D. food
Answer:
|
|
sciq-7850
|
multiple_choice
|
Tissues of marine bony fishes gain what from their surroundings?
|
[
"toxins",
"mercury",
"absorb salts",
"excess salts"
] |
D
|
Relavent Documents:
Document 0:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
Document 1:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is wide variation in emphasis, ranging across business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC), which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on the structure and functioning (anatomy, physiology, biochemistry) of the human body and their relevance to human health, with Caribbean-specific experience. The syllabus is organized under five main sections: living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, and the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 2:::
Fish anatomy is the study of the form or morphology of fish. It can be contrasted with fish physiology, which is the study of how the component parts of fish function together in the living fish. In practice, fish anatomy and fish physiology complement each other, the former dealing with the structure of a fish, its organs or component parts and how they are put together, such as might be observed on the dissecting table or under the microscope, and the latter dealing with how those components function together in living fish.
The anatomy of fish is often shaped by the physical characteristics of water, the medium in which fish live. Water is much denser than air, holds a relatively small amount of dissolved oxygen, and absorbs more light than air does. The body of a fish is divided into a head, trunk and tail, although the divisions between the three are not always externally visible. The skeleton, which forms the support structure inside the fish, is either made of cartilage (cartilaginous fish) or bone (bony fish). The main skeletal element is the vertebral column, composed of articulating vertebrae which are lightweight yet strong. The ribs attach to the spine and there are no limbs or limb girdles. The main external features of the fish, the fins, are composed of either bony or soft spines called rays which, with the exception of the caudal fins, have no direct connection with the spine. They are supported by the muscles which compose the main part of the trunk.
The heart has two chambers and pumps the blood through the respiratory surfaces of the gills and then around the body in a single circulatory loop. The eyes are adapted for seeing underwater and have only local vision. There is an inner ear but no external or middle ear. Low-frequency vibrations are detected by the lateral line system of sense organs that run along the length of the sides of fish, which responds to nearby movements and to changes in water pressure.
Sharks and rays are basal fish with
Document 3:::
Marine technology is defined by WEGEMT (a European association of 40 universities in 17 countries) as "technologies for the safe use, exploitation, protection of, and intervention in, the marine environment." In this regard, according to WEGEMT, the technologies involved in marine technology are the following: naval architecture, marine engineering, ship design, ship building and ship operations; oil and gas exploration, exploitation, and production; hydrodynamics, navigation, sea surface and sub-surface support, underwater technology and engineering; marine resources (including both renewable and non-renewable marine resources); transport logistics and economics; inland, coastal, short sea and deep sea shipping; protection of the marine environment; leisure and safety.
Education and training
According to the Cape Fear Community College of Wilmington, North Carolina, the curriculum for a marine technology program provides practical skills and academic background that are essential in succeeding in the area of marine scientific support. Through a marine technology program, students aspiring to become marine technologists will become proficient in the knowledge and skills required of scientific support technicians.
The educational preparation includes classroom instructions and practical training aboard ships, such as how to use and maintain electronic navigation devices, physical and chemical measuring instruments, sampling devices, and data acquisition and reduction systems aboard ocean-going and smaller vessels, among other advanced equipment.
As far as marine technician programs are concerned, students learn hands-on to troubleshoot, service and repair four- and two-stroke outboards, stern drives, rigging, fuel and lube systems, and electrical systems, including diesel engines.
Relationship to commerce
Marine technology is related to the marine science and technology industry, also known as maritime commerce. The Executive Office of Housing and Economic Development (EOHED
Document 4:::
Ethnoichthyology is an area in anthropology that examines human knowledge of fish, the uses of fish, and importance of fish in different human societies. It draws on knowledge from many different areas including ichthyology, economics, oceanography, and marine botany.
This area of study seeks to understand the details of the interactions of humans with fish, including both cognitive and behavioural aspects. A knowledge of fish and their life strategies is extremely important to fishermen. In order to conserve fish species, it is also important to be aware of other cultures' knowledge of fish. Ignorance of the effects of human activity on fish populations may endanger fish species. Knowledge of fish can be gained through experience, scientific research, or information passed down through generations. Some factors that affect the amount of knowledge acquired include the value and abundance of the various types of fish, their usefulness in fisheries, and the amount of time one spends observing the fishes' life history patterns.
Etymology
The term was first used in the scientific literature by W.T. Morrill. He justified the origin and use of this term by stating that it arose from the model of "ethnobotany".
Importance in conservation
Ethnoichthyology can be very useful to the study and investigation of environmental changes caused by anthropogenic factors, such as the decline of fish stocks, the disappearance of fish species, and the introduction of non-native species of fish in certain environments. Ethnoichthyological knowledge can be used to create environmental conservation strategies. With a sound knowledge of fish ecology, informed decisions with respect to fishing practices can be made, and destructive environmental practices can be avoided. Ethnoichthyological knowledge can be the difference between conserving a species of fish, or placing a moratorium on fishing.
Newfoundland's cod fishery collapse
The collapse of the cod fishery in Newfoundland and Lab
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Tissues of marine bony fishes gain what from their surroundings?
A. toxins
B. mercury
C. absorb salts
D. excess salts
Answer:
|
|
scienceQA-10685
|
multiple_choice
|
How long is a bike path?
|
[
"5 centimeters",
"5 kilometers",
"5 millimeters",
"5 meters"
] |
B
|
The best estimate for the length of a bike path is 5 kilometers.
5 millimeters, 5 centimeters, and 5 meters are all too short.
|
Relavent Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the students' chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 1:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 2:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course is nevertheless considered very challenging and one of the most difficult AP classes, as shown by AP exam grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover)
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis)
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson)
See also
Glossary of biology
A.P Bio (TV Show)
Document 3:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge is then a subset of this set; the set of
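As a small illustration of the structural requirement on such a family of feasible states: in knowledge space theory the family is standardly required to contain the empty state and the full domain and to be closed under union (this closure property is standard background and is not spelled out in the excerpt above). A minimal Python sketch, with a hypothetical helper name and example states:

```python
from itertools import combinations

def is_knowledge_space(states):
    """Check that a family of feasible knowledge states contains the
    empty state and the full domain and is closed under union."""
    states = {frozenset(s) for s in states}
    domain = frozenset().union(*states)
    if frozenset() not in states or domain not in states:
        return False
    return all(a | b in states for a, b in combinations(states, 2))

# Domain {a, b, c}, where skill "b" has skill "a" as a prerequisite:
feasible = [set(), {"a"}, {"c"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
print(is_knowledge_space(feasible))  # True
```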
Document 4:::
The SAT Subject Test in Biology was the name of a one-hour multiple-choice test on biology given by the College Board. A student chose whether to take the test depending upon the college entrance requirements of the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; from 1995 until January 2005, they were known as SAT IIs. Of all SAT Subject Tests, the Biology E/M test was the only SAT II that allowed the test taker a choice, between the ecological and molecular variants. A common set of 60 questions was taken by all Biology test takers, with a further 20 questions specific to either the E or M test. The test was graded on a scale between 200 and 800. The average score for Molecular was 630, while for Ecological it was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were phased out by the following summer for international students. This was done in response to changes in college admissions caused by the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
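A minimal sketch of the scoring rule just described; the helper name is hypothetical, and the conversion from raw score to the 200-800 scale used a College Board table not reproduced in this text:

```python
def raw_score(num_correct, num_incorrect):
    """Raw score: +1 per correct answer, -1/4 per incorrect answer;
    blank questions contribute 0 and so do not appear here."""
    return num_correct - 0.25 * num_incorrect

# A student answering 60 of the 80 questions correctly, 12 incorrectly,
# and leaving 8 blank earns a raw score of 60 - 3 = 57.
print(raw_score(60, 12))  # 57.0
```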
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How long is a bike path?
A. 5 centimeters
B. 5 kilometers
C. 5 millimeters
D. 5 meters
Answer:
|
sciq-2007
|
multiple_choice
|
What forms when the dna in the nucleus wraps around proteins?
|
[
"ribosomes",
"chromosomes",
"rna",
"genes"
] |
B
|
Relavent Documents:
Document 0:::
Biomolecular structure is the intricate folded, three-dimensional shape that is formed by a molecule of protein, DNA, or RNA, and that is important to its function. The structure of these molecules may be considered at any of several length scales ranging from the level of individual atoms to the relationships among entire protein subunits. This useful distinction among scales is often expressed as a decomposition of molecular structure into four levels: primary, secondary, tertiary, and quaternary. The scaffold for this multiscale organization of the molecule arises at the secondary level, where the fundamental structural elements are the molecule's various hydrogen bonds. This leads to several recognizable domains of protein structure and nucleic acid structure, including such secondary-structure features as alpha helixes and beta sheets for proteins, and hairpin loops, bulges, and internal loops for nucleic acids.
The terms primary, secondary, tertiary, and quaternary structure were introduced by Kaj Ulrik Linderstrøm-Lang in his 1951 Lane Medical Lectures at Stanford University.
Primary structure
The primary structure of a biopolymer is the exact specification of its atomic composition and the chemical bonds connecting those atoms (including stereochemistry). For a typical unbranched, un-crosslinked biopolymer (such as a molecule of a typical intracellular protein, or of DNA or RNA), the primary structure is equivalent to specifying the sequence of its monomeric subunits, such as amino acids or nucleotides.
The primary structure of a protein is reported starting from the amino N-terminus to the carboxyl C-terminus, while the primary structure of DNA or RNA molecule is known as the nucleic acid sequence reported from the 5' end to the 3' end.
The nucleic acid sequence refers to the exact sequence of nucleotides that comprise the whole molecule. Often, the primary structure encodes sequence motifs that are of functional importance. Some examples of such motif
Document 1:::
Eukaryotic chromosome structure refers to the levels of packaging from raw DNA molecules to the chromosomal structures seen during metaphase in mitosis or meiosis. Chromosomes contain long strands of DNA containing genetic information. Compared to prokaryotic chromosomes, eukaryotic chromosomes are much larger and are linear. Eukaryotic chromosomes are also stored in the cell nucleus, while chromosomes of prokaryotic cells are not stored in a nucleus. Eukaryotic chromosomes require a higher level of packaging to condense the DNA molecules into the cell nucleus because of the larger amount of DNA. This level of packaging includes the wrapping of DNA around proteins called histones to form condensed nucleosomes.
History
The double helix was discovered in 1953 by James Watson and Francis Crick. Other researchers had made important but unconnected findings about the composition of DNA; ultimately it was Watson and Crick who put these findings together into a model of DNA. The chemist Alexander Todd determined that the backbone of a DNA molecule contains repeating phosphate and deoxyribose sugar groups. The biochemist Erwin Chargaff found that adenine always pairs with thymine and cytosine always pairs with guanine. High-resolution X-ray images of DNA obtained by Maurice Wilkins and Rosalind Franklin suggested a helical, corkscrew-like shape. Some of the first scientists to recognize the structures now known as chromosomes were Schleiden, Virchow, and Bütschli. The term chromosome was coined by Heinrich Wilhelm Gottfried von Waldeyer-Hartz, referring to the term chromatin, which was introduced by Walther Flemming. Scientists also discovered that plant and animal cells have a central compartment called the nucleus, soon realized that chromosomes are found inside the nucleus, and found that they carry information for many different traits.
Structure
In eukaryotes, such as humans, roughly 3.2 billion nucleotides are spread out
Document 2:::
What Is Life? The Physical Aspect of the Living Cell is a 1944 science book written for the lay reader by physicist Erwin Schrödinger. The book was based on a course of public lectures delivered by Schrödinger in February 1943, under the auspices of the Dublin Institute for Advanced Studies, where he was Director of Theoretical Physics, at Trinity College, Dublin. The lectures attracted an audience of about 400, who were warned "that the subject-matter was a difficult one and that the lectures could not be termed popular, even though the physicist’s most dreaded weapon, mathematical deduction, would hardly be utilized." Schrödinger's lecture focused on one important question: "how can the events in space and time which take place within the spatial boundary of a living organism be accounted for by physics and chemistry?"
In the book, Schrödinger introduced the idea of an "aperiodic crystal" that contained genetic information in its configuration of covalent chemical bonds. In the 1950s, this idea stimulated enthusiasm for discovering the chemical basis of genetic inheritance. Although the existence of some form of hereditary information had been hypothesized since 1869, its role in reproduction and its helical shape were still unknown at the time of Schrödinger's lecture. In retrospect, Schrödinger's aperiodic crystal can be viewed as a well-reasoned theoretical prediction of what biologists should have been looking for during their search for genetic material. In 1953, James D. Watson and Francis Crick jointly proposed the double helix structure of deoxyribonucleic acid (DNA) on the basis of, amongst other theoretical insights, X-ray diffraction experiments conducted by Rosalind Franklin. They both credited Schrödinger's book with presenting an early theoretical description of how the storage of genetic information would work, and each independently acknowledged the book as a source of inspiration for their initial researches.
Background
The book, published i
Document 3:::
Topoisomers or topological isomers are molecules with the same chemical formula and stereochemical bond connectivities but different topologies. Examples of molecules for which there exist topoisomers include DNA, which can form knots, and catenanes. Each topoisomer of a given DNA molecule possesses a different linking number associated with it. DNA topoisomers can be interchanged by enzymes called topoisomerases. Using a topoisomerase along with an intercalator, topoisomers with different linking number may be separated on an agarose gel via gel electrophoresis.
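The excerpt refers to each topoisomer's linking number without defining it. A standard result from DNA topology, the Călugăreanu–White–Fuller theorem (not stated in the excerpt), decomposes the linking number Lk of a closed circular duplex as

\[
  \mathrm{Lk} = \mathrm{Tw} + \mathrm{Wr},
\]

where Tw (twist) counts the winding of the two strands about the helical axis and Wr (writhe) measures the coiling of the axis itself in space. Since Lk is a topological invariant that cannot change without breaking a strand, topoisomers with different linking numbers can only be interconverted by strand-cutting enzymes, consistent with the role of topoisomerases described above.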
See also
Mechanically-interlocked molecular architectures
Catenane
Rotaxanes
Molecular knot
Molecular Borromean rings
Document 4:::
This is a list of topics in molecular biology. See also index of biochemistry articles.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What forms when the DNA in the nucleus wraps around proteins?
A. ribosomes
B. chromosomes
C. rna
D. genes
Answer:
|
|
sciq-11627
|
multiple_choice
|
What helps represent age-sex structure of the population?
|
[
"biome model",
"density graph",
"population pyramid",
"habitat chart"
] |
C
|
Relevant Documents:
Document 0:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 1:::
A population pyramid (age structure diagram) or "age-sex pyramid" is a graphical illustration of the distribution of a population (typically that of a country or region of the world) by age group and sex; it typically takes the shape of a pyramid when the population is growing. Males are usually shown on the left and females on the right, and they may be measured in absolute numbers or as a percentage of the total population. The pyramid can be used to visualize the age of a particular population. It is also used in ecology to determine the overall age distribution of a population, which gives an indication of the reproductive capabilities and likelihood of the continuation of a species. The number of people per unit area of land is called population density.
Structure
A population pyramid often contains continuous stacked-histogram bars, making it a horizontal bar diagram. The population size is shown on the x-axis (horizontal) while the age groups are represented on the y-axis (vertical). The size of each bar can be displayed either as a percentage of the total population or as a raw number. Males are conventionally shown on the left and females on the right. Population pyramids are often viewed as the most effective way to graphically depict the age and distribution of a population, partly because of the very clear image these pyramids provide. A great deal of information about the population broken down by age and sex can be read from a population pyramid, and this can shed light on the extent of development and other aspects of the population.
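As a minimal sketch of this layout (the age groups and counts below are invented purely for illustration), a pyramid can be drawn in Python with matplotlib by negating one sex's values so its bars extend left:

import matplotlib.pyplot as plt

age_groups = ["0-14", "15-29", "30-44", "45-59", "60-74", "75+"]
males = [520, 480, 450, 390, 260, 110]     # hypothetical counts (thousands)
females = [500, 470, 455, 400, 300, 160]   # hypothetical counts (thousands)

fig, ax = plt.subplots()
# Males plotted as negative widths extend left; females extend right,
# matching the left/right convention described above.
ax.barh(age_groups, [-m for m in males], label="Male")
ax.barh(age_groups, females, label="Female")
ax.set_xlabel("Population (thousands)")
ax.set_ylabel("Age group")
ax.legend()
plt.show()

For a polished figure the x-axis tick labels would be relabeled with their absolute values, since the negation is only a plotting trick.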
The measures of central tendency (mean, median, and mode) should be considered when assessing a population pyramid. For example, the average age could be used to determine the type of population in a particular region. A population with an average age of 15 would be very young compared to one with an average age of 55. Population statistics are often mid-year numbers.
A series of population pyramids could give a clear picture of how
Document 2:::
In demography a Lexis diagram (named after economist and social scientist Wilhelm Lexis) is a two-dimensional diagram used to represent events (such as births or deaths) that occur to individuals belonging to different cohorts. Calendar time is usually represented on the horizontal axis, while age is represented on the vertical axis. In some cases, the y-axis is plotted backwards, with age 0 at the top of the page and increasing downwards. Other arrangements of the axes are also seen, and some go back to Lexis himself. As an example the death of an individual in 2009 at age 80 is represented by the point (2009,80); the cohort of all persons born in 1929 is represented by a diagonal line starting at (1929,0) and continuing through (1930,1), (1931, 2), and so on.
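A minimal Python sketch of the two examples in the excerpt (the plotting choices are illustrative, not prescriptive):

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# A death in 2009 at age 80 is the single point (2009, 80).
ax.plot(2009, 80, "ko", label="Death at age 80 in 2009")
# The 1929 birth cohort is the 45-degree line through (1929, 0), (1930, 1), ...
years = range(1929, 2010)
ax.plot(list(years), [y - 1929 for y in years], label="1929 birth cohort")
ax.set_xlabel("Calendar year")
ax.set_ylabel("Age")
ax.legend()
plt.show()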
Document 3:::
This outline of demography presents important concepts related to human demography and population, along with high-level aggregated lists compiled into useful categories.
The subheadings are grouped into the following four categories:
Meta (lit. "highest" level) units, such as universal concepts related to demographics and places.
Macro (lit. "high" level) units, where the whole world is the smallest unit of measurement, such as aggregated summary demographics at the global level, e.g. the United Nations.
Meso (lit. "middle" or "intermediate" level) units, where the smallest unit of measurement covers more than one nation and more than one continent but not all nations or continents, e.g. summary lists at the continental level such as Eurasia, or regions such as Latin America and the Middle East that span two or more continents. Other examples include intercontinental organisations, e.g. the Commonwealth of Nations or the organisation of Arab states.
Micro (lit. "lower" or "smaller") level units, where the country is the smallest unit of measurement, such as globally aggregated lists by individual countries.
Nano (lit. "minor" or "tiny") level units, e.g. lists of things within a single city, fall outside the scope described above and are excluded.
Meta or important concepts
Global human population
World population
Demographics of the world
Fertility and intelligence
Human geography
Geographic mobility
Globalization
Human migration
List of lists on linguistics
Impact of human population
Human impact on the environment
Biological dispersal
Carrying capacity
Doomsday argument
Environmental migrant
Human overpopulation
Malthusian catastrophe
List of countries by carbon dioxide emissions
List of countries by carbon dioxide emissions per capita
List of countries by greenhouse gas emissions
List of countries by greenhouse gas emissions per capita
Overconsumption
Overexploitation
Population eco
Document 4:::
Demography, also known as demographics, is the statistical study of populations, especially human beings.
Demographic analysis examines and measures the dimensions and dynamics of populations; it can cover whole societies or groups defined by criteria such as education, nationality, religion, and ethnicity. Educational institutions usually treat demography as a field of sociology, though there are a number of independent demography departments. These methods have primarily been developed to study human populations, but are extended to a variety of areas where researchers want to know how populations of social actors can change across time through processes of birth, death, and migration. In the context of human biological populations, demographic analysis uses administrative records to develop an independent estimate of the population. Demographic analysis estimates are often considered a reliable standard for judging the accuracy of the census information gathered at any time. In the labor force, demographic analysis is used to estimate sizes and flows of populations of workers; in population ecology the focus is on the birth, death, migration and immigration of individuals in a population of living organisms; in the social sciences it can also cover the movement of firms and institutional forms. Demographic analysis is used in a wide variety of contexts. For example, it is often used in business plans to describe the population connected to the geographic location of the business. Demographic analysis is usually abbreviated as DA. For the 2010 U.S. Census, the U.S. Census Bureau expanded its DA categories. Also as part of the 2010 U.S. Census, DA now includes comparative analysis between independent housing estimates and census address lists at different key time points.
Patient demographics form the core of the data for any medical institution, such as patient and emergency contact information and patient medical record data. They allo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What helps represent age-sex structure of the population?
A. biome model
B. density graph
C. population pyramid
D. habitat chart
Answer:
|