id | question_type | question | choices | answer | explanation | prompt
---|---|---|---|---|---|---
sciq-11013
|
multiple_choice
|
Nearly all life processes depend on what substance, which is involved in biochemical reactions?
|
[
"hydrocarbons",
"air",
"food",
"water"
] |
D
|
Relevant Documents:
Document 0:::
Biochemical engineering, also known as bioprocess engineering, is a field of study with roots stemming from chemical engineering and biological engineering. It mainly deals with the design, construction, and advancement of unit processes that involve biological organisms (such as fermentation) or organic molecules (often enzymes) and has various applications in areas of interest such as biofuels, food, pharmaceuticals, biotechnology, and water treatment processes. The role of a biochemical engineer is to take findings developed by biologists and chemists in a laboratory and translate that to a large-scale manufacturing process.
History
For hundreds of years, humans have made use of the chemical reactions of biological organisms to create goods. In the mid-1800s, Louis Pasteur was one of the first to investigate the role of these organisms when he researched fermentation. His work also contributed to pasteurization, which is still used to this day. By the early 1900s, the use of microorganisms had expanded and was applied to making industrial products. Up to this point, biochemical engineering had not yet developed as a field. It was not until Alexander Fleming discovered penicillin in 1928 that the field of biochemical engineering was established. After this discovery, samples were gathered from around the world to continue research into the characteristics of microbes from places such as soils, gardens, forests, rivers, and streams. Today, biochemical engineers can be found working in a variety of industries, from food to pharmaceuticals. This is due to the increasing need for efficiency and production, which requires knowledge of how biological systems and chemical reactions interact with each other and how they can be used to meet these needs.
Education
Biochemical engineering is not a major offered by most universities and is instead an area of interest under the chemical engineering major in most cases. The following universiti
Document 1:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind: neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases that has aided the overall understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of the combination of living organisms and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – development of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
Document 2:::
Biochemistry is the study of the chemical processes in living organisms. It deals with the structure and function of cellular components such as proteins, carbohydrates, lipids, nucleic acids and other biomolecules.
Articles related to biochemistry include:
0–9
2-amino-5-phosphonovalerate - 3' end - 5' end
Document 3:::
Biochemists are scientists who are trained in biochemistry. They study chemical processes and chemical transformations in living organisms. Biochemists study DNA, proteins and cell parts. The word "biochemist" is a portmanteau of "biological chemist."
Biochemists also research how certain chemical reactions happen in cells and tissues and observe and record the effects of products in food additives and medicines.
Biochemist researchers focus on planning and conducting research experiments, mainly for developing new products, updating existing products and analyzing said products. It is also the responsibility of a biochemist to present their research findings and create grant proposals to obtain funds for future research.
Biochemists study aspects of the immune system and gene expression; isolate, analyze, and synthesize different products; investigate mutations that lead to cancers; and manage laboratory teams and monitor laboratory work. Biochemists must also be able to design and build laboratory equipment and devise new methods of producing accurate results for products.
The most common industry role is the development of biochemical products and processes. Identifying substances' chemical and physical properties in biological systems is of great importance, and can be carried out by doing various types of analysis. Biochemists must also prepare technical reports after collecting, analyzing and summarizing the information and trends found.
In biochemistry, researchers often break down complicated biological systems into their component parts. They study the effects of foods, drugs, allergens and other substances on living tissues; they research molecular biology, the study of life at the molecular level and the study of genes and gene expression; and they study chemical reactions in metabolism, growth, reproduction, and heredity, and apply techniques drawn from biotechnology and genetic engineering to help them in their research. Abou
Document 4:::
The following outline is provided as an overview of and topical guide to biochemistry:
Biochemistry – study of chemical processes in living organisms, including living matter. Biochemistry governs all living organisms and living processes.
Applications of biochemistry
Testing
Ames test – Salmonella bacteria are exposed to a chemical in question (a food additive, for example), and changes in the way the bacteria grow are measured. This test is useful for screening chemicals to see if they mutate the structure of DNA and, by extension, for identifying their potential to cause cancer in humans.
Pregnancy test – one uses a urine sample and the other a blood sample. Both detect the presence of the hormone human chorionic gonadotropin (hCG). This hormone is produced by the placenta shortly after implantation of the embryo into the uterine walls and accumulates.
Breast cancer screening – identification of risk by testing for mutations in two genes—Breast Cancer-1 gene (BRCA1) and the Breast Cancer-2 gene (BRCA2)—allow a woman to schedule increased screening tests at a more frequent rate than the general population.
Prenatal genetic testing – testing the fetus for potential genetic defects, to detect chromosomal abnormalities such as Down syndrome or birth defects such as spina bifida.
PKU test – Phenylketonuria (PKU) is a metabolic disorder in which the individual is missing an enzyme called phenylalanine hydroxylase. Absence of this enzyme allows the buildup of phenylalanine, which can lead to mental retardation.
Genetic engineering – taking a gene from one organism and placing it into another. Biochemists inserted the gene for human insulin into bacteria. The bacteria, through the process of translation, create human insulin.
Cloning – Dolly the sheep was the first mammal ever cloned from adult animal cells. The cloned sheep was, of course, genetically identical to the original adult sheep. This clone was created by taking cells from the udder of a six-year-old
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Nearly all life processes depend on what substance, which is involved in biochemical reactions?
A. hydrocarbons
B. air
C. food
D. water
Answer:
|
|
sciq-10000
|
multiple_choice
|
Why does water infiltrate the ground?
|
[
"because soil and rocks are porous",
"run-off from flooding",
"prolonged drought conditions",
"gravity"
] |
A
|
Relevant Documents:
Document 0:::
FEFLOW (Finite Element subsurface FLOW system) is a computer program for simulating groundwater flow, mass transfer and heat transfer in porous media and fractured media. The program uses finite element analysis to solve the groundwater flow equation of both saturated and unsaturated conditions as well as mass and heat transport, including fluid density effects and chemical kinetics for multi-component reaction systems.
History
The software was first introduced by Hans-Jörg G. Diersch in 1979. He developed the software at the Institute of Mechanics of the German Academy of Sciences Berlin up to 1990. In 1990 he was one of the founders of WASY GmbH of Berlin, Germany (the acronym WASY translates from German as Institute for Water Resources Planning and Systems Research), where FEFLOW has been developed further, continuously improved and extended as a commercial simulation package. In 2007 the shares of WASY GmbH were purchased by DHI. WASY was merged into the DHI Group, and FEFLOW became part of its software portfolio. FEFLOW is being further developed at DHI by an international team. Software distribution and services are worldwide.
Technology
The program is offered in both 32-bit and 64-bit versions for Microsoft Windows and Linux operating systems.
FEFLOW's theoretical basis is fully described in the comprehensive FEFLOW book. It covers a wide range of physical and computational issues in the field of porous/fractured-media modeling. The book starts with a more general theory for all relevant flow and transport phenomena on the basis of the continuum mechanics, systematically develops the basic framework for important classes of problems (e.g., multiphase/multispecies non-isothermal flow and transport phenomena, variably saturated porous media, free-surface groundwater flow, aquifer-averaged equations, discrete feature elements), introduces finite element methods for solving the basic multidimensional balance equations, in detail discusses a
Document 1:::
An earthflow (earth flow) is a downslope viscous flow of fine-grained materials that have been saturated with water and moves under the pull of gravity. It is an intermediate type of mass wasting that is between downhill creep and mudflow. The types of materials that are susceptible to earthflows are clay, fine sand and silt, and fine-grained pyroclastic material.
When the ground materials become saturated with enough water, they will start flowing (soil liquefaction). Its speed can range from being barely noticeable to rapid movement. The velocity of the flow is dictated by water content: the higher the water content is, the higher the velocity will be. Because of the dependency on water content for the velocity of the flow, it can take minutes or years for the materials to move down the slope.
Features and behavior
Earthflows are just one type of mass movement that can occur on a hill slope. They have been recognized as their own type of movement since the early 20th century. Earthflows are one of the most fluid types of mass movement. Like mudflows and debris flows, earthflows occur on heavily saturated slopes. Though earthflows resemble mudflows, they are slower overall and are covered with solid material carried along by flow from within. Earthflows are often made up of fine-grained materials, so slopes consisting of clay and silt are more likely to produce an earthflow.
As earthflows are usually water-dependent, the risk of one occurring is much higher in humid areas especially after a period of heavy rainfall or snowmelt. The high level of precipitation, which saturates the ground and adds water to the slope content, increases the pore-water pressure and reduces the shearing strength of the material. As the slope becomes wet, the earthflow may start as a creep downslope due to the clay or silt having less friction. As the material is increasingly more saturated, the slope will fail, which depends on slope stability. In earthflows, the slop
Document 2:::
Infiltration is the process by which water on the ground surface enters the soil. It is commonly used in both hydrology and soil sciences. The infiltration capacity is defined as the maximum rate of infiltration. It is most often measured in meters per day but can also be measured in other units of distance over time if necessary. The infiltration capacity decreases as the soil moisture content of the soil's surface layers increases. If the precipitation rate exceeds the infiltration rate, runoff will usually occur unless there is some physical barrier.
Infiltrometers, permeameters and rainfall simulators are all devices that can be used to measure infiltration rates.
Infiltration is caused by multiple factors including: gravity, capillary forces, adsorption, and osmosis. Many soil characteristics can also play a role in determining the rate at which infiltration occurs.
Factors that affect infiltration
Precipitation
Precipitation can impact infiltration in many ways. The amount, type, and duration of precipitation all have an impact. Rainfall leads to faster infiltration rates than any other precipitation event, such as snow or sleet. In terms of amount, the more precipitation that occurs, the more infiltration will occur until the ground reaches saturation, at which point the infiltration capacity is reached. The duration of rainfall impacts the infiltration capacity as well. Initially when the precipitation event first starts the infiltration is occurring rapidly as the soil is unsaturated, but as time continues the infiltration rate slows as the soil becomes more saturated. This relationship between rainfall and infiltration capacity also determines how much runoff will occur. If rainfall occurs at a rate faster than the infiltration capacity runoff will occur.
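The runoff rule described above (runoff is generated only when rainfall intensity exceeds the soil's current infiltration capacity, which itself declines as the soil saturates) can be sketched as a minimal model. The function name, units, and the illustrative storm values are assumptions, not measurements:

```python
def partition_rainfall(precip_rate, infiltration_capacity):
    """Split a rainfall rate (mm/h) into infiltrated and runoff parts.

    Water infiltrates at up to the soil's current infiltration
    capacity; any excess becomes infiltration-excess surface runoff.
    """
    infiltrated = min(precip_rate, infiltration_capacity)
    runoff = max(0.0, precip_rate - infiltration_capacity)
    return infiltrated, runoff

# Illustrative 4-hour storm: capacity (mm/h) declines as soil saturates.
rain = [5.0, 20.0, 20.0, 10.0]
capacity = [30.0, 18.0, 12.0, 8.0]
hourly = [partition_rainfall(p, f) for p, f in zip(rain, capacity)]
total_runoff = sum(r for _, r in hourly)  # 0 + 2 + 8 + 2 = 12 mm
```

Note that runoff appears only in the later hours, once the declining capacity drops below the rainfall rate, matching the saturation behaviour described in the text.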
Soil characteristics
The porosity of soils is critical in determining the infiltration capacity. Soils that have smaller pore sizes, such as clay, have lower infiltration capacity and slower infiltration
Document 3:::
In hydrology, bound water is an extremely thin layer of water surrounding mineral surfaces.
Water molecules have a strong electrical polarity, meaning that there is a very strong positive charge on one side of the molecule and a strong negative charge on the other. This causes the water molecules to bond to each other and to other charged surfaces, such as soil minerals. Clay in particular has a high ability to bond with water molecules.
The strong attraction between these surfaces causes an extremely thin water film (a few molecules thick) to form on the mineral surface. These water molecules are much less mobile than the rest of the water in the soil, and have significant effects on soil dielectric permittivity and freezing-thawing.
In molecular biology and food science, bound water refers to the amount of water in body tissues that is bound to macromolecules or organelles. In food science this form of water is practically unavailable for microbiological activity, so it does not cause quality decreases or pathogen increases.
See also
Adsorption
Capillary action
Effective porosity
Surface tension
Document 4:::
Groundwater is the water present beneath Earth's surface in rock and soil pore spaces and in the fractures of rock formations. About 30 percent of all readily available freshwater in the world is groundwater. A unit of rock or an unconsolidated deposit is called an aquifer when it can yield a usable quantity of water. The depth at which soil pore spaces or fractures and voids in rock become completely saturated with water is called the water table. Groundwater is recharged from the surface; it may discharge from the surface naturally at springs and seeps, and can form oases or wetlands. Groundwater is also often withdrawn for agricultural, municipal, and industrial use by constructing and operating extraction wells. The study of the distribution and movement of groundwater is hydrogeology, also called groundwater hydrology.
Typically, groundwater is thought of as water flowing through shallow aquifers, but, in the technical sense, it can also contain soil moisture, permafrost (frozen soil), immobile water in very low permeability bedrock, and deep geothermal or oil formation water. Groundwater is hypothesized to provide lubrication that can possibly influence the movement of faults. It is likely that much of Earth's subsurface contains some water, which may be mixed with other fluids in some instances.
Groundwater is often cheaper, more convenient and less vulnerable to pollution than surface water. Therefore, it is commonly used for public water supplies. For example, groundwater provides the largest source of usable water storage in the United States, and California annually withdraws the largest amount of groundwater of all the states. Underground reservoirs contain far more water than the capacity of all surface reservoirs and lakes in the US, including the Great Lakes. Many municipal water supplies are derived solely from groundwater. Over 2 billion people rely on it as their primary water source worldwide.
Human use of groundwater causes environmental prob
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Why does water infiltrate the ground?
A. because soil and rocks are porous
B. run-off from flooding
C. prolonged drought conditions
D. gravity
Answer:
|
|
sciq-1699
|
multiple_choice
|
What are ionic solutes?
|
[
"electrolytes",
"carbohydrates",
"salts",
"solvent"
] |
A
|
Relevant Documents:
Document 0:::
An ionic liquid (IL) is a salt in the liquid state. In some contexts, the term has been restricted to salts whose melting point is below a specific temperature, such as . While ordinary liquids such as water and gasoline are predominantly made of electrically neutral molecules, ionic liquids are largely made of ions. These substances are variously called liquid electrolytes, ionic melts, ionic fluids, fused salts, liquid salts, or ionic glasses.
Ionic liquids have many potential applications. They are powerful solvents and can be used as electrolytes. Salts that are liquid at near-ambient temperature are important for electric battery applications, and have been considered as sealants due to their very low vapor pressure.
Any salt that melts without decomposing or vaporizing usually yields an ionic liquid. Sodium chloride (NaCl), for example, melts at into a liquid that consists largely of sodium cations () and chloride anions (). Conversely, when an ionic liquid is cooled, it often forms an ionic solid—which may be either crystalline or glassy.
The ionic bond is usually stronger than the Van der Waals forces between the molecules of ordinary liquids. Because of these strong interactions, salts tend to have high lattice energies, manifested in high melting points. Some salts, especially those with organic cations, have low lattice energies and thus are liquid at or below room temperature. Examples include compounds based on the 1-ethyl-3-methylimidazolium (EMIM) cation, such as EMIM:Cl, EMIMAc (acetate anion) and EMIM dicyanamide, as well as 1-butyl-3,5-dimethylpyridinium bromide, which forms a glass on cooling.
Low-temperature ionic liquids can be compared to ionic solutions, liquids that contain both ions and neutral molecules, and in particular to the so-called deep eutectic solvents, mixtures of ionic and non-ionic solid substances which have much lower melting points than the pure compounds. Certain mixtures of nitrate salts can have melt
Document 1:::
The ionic strength of a solution is a measure of the concentration of ions in that solution. Ionic compounds, when dissolved in water, dissociate into ions. The total electrolyte concentration in solution will affect important properties such as the dissociation constant or the solubility of different salts. One of the main characteristics of a solution with dissolved ions is the ionic strength. Ionic strength can be molar (mol/L solution) or molal (mol/kg solvent) and to avoid confusion the units should be stated explicitly. The concept of ionic strength was first introduced by Lewis and Randall in 1921 while describing the activity coefficients of strong electrolytes.
Quantifying ionic strength
The molar ionic strength, I, of a solution is a function of the concentration of all ions present in that solution:

I = 1/2 Σ ci zi²

where the factor of one half is included because both cations and anions are counted, ci is the molar concentration of ion i (M, mol/L), zi is the charge number of that ion, and the sum is taken over all ions in the solution. For a 1:1 electrolyte such as sodium chloride, where each ion is singly charged, the ionic strength is equal to the concentration. For the electrolyte MgSO4, however, each ion is doubly charged, leading to an ionic strength that is four times higher than an equivalent concentration of sodium chloride: I = 1/2 (c × 2² + c × 2²) = 4c.
Generally multivalent ions contribute strongly to the ionic strength.
Calculation example
As a more complex example, the ionic strength of a mixed solution 0.050 M in Na2SO4 and 0.020 M in KCl is:

I = 1/2 [(0.100 × 1²) + (0.050 × 2²) + (0.020 × 1²) + (0.020 × 1²)] = 0.170 M

where 0.100 M is the Na+ concentration contributed by 0.050 M Na2SO4.
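The definition above (half the sum, over all ions, of concentration times squared charge) can be checked numerically. This is a minimal sketch; the helper name and the 0.10 M example concentrations are illustrative:

```python
def ionic_strength(ions):
    """Molar ionic strength I = 1/2 * sum(c_i * z_i**2), with c_i the
    molar concentration (mol/L) and z_i the signed charge number."""
    return 0.5 * sum(c * z ** 2 for c, z in ions)

# 1:1 electrolyte: 0.10 M NaCl -> I equals the concentration, 0.10 M.
i_nacl = ionic_strength([(0.10, +1), (0.10, -1)])

# 2:2 electrolyte: 0.10 M MgSO4 -> four times higher, 0.40 M.
i_mgso4 = ionic_strength([(0.10, +2), (0.10, -2)])

# Mixed solution from the text: 0.050 M Na2SO4 and 0.020 M KCl.
# Na2SO4 contributes Na+ at 0.100 M and SO4(2-) at 0.050 M.
i_mix = ionic_strength([(0.100, +1), (0.050, -2), (0.020, +1), (0.020, -1)])
# i_mix = 0.5 * (0.100 + 0.200 + 0.020 + 0.020) = 0.170 M
```

The MgSO4 case confirms that multivalent ions dominate: a doubly charged ion contributes four times as much per mole as a singly charged one.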
Non-ideal solutions
Because in non-ideal solutions volumes are no longer strictly additive, it is often preferable to work with molality b (mol/kg of H2O) rather than molarity c (mol/L). In that case, molal ionic strength is defined as:

I = 1/2 Σ bi zi²

in which
i = ion identification number
z = charge of ion
b = molality (mol solute per kg solvent)
Importance
The ionic strength plays a central role in the Debye–Hückel theory that describes the strong deviations from id
Document 2:::
The use of ionic liquids in carbon capture is a potential application of ionic liquids as absorbents for use in carbon capture and sequestration. Ionic liquids, which are salts that exist as liquids near room temperature, are polar, nonvolatile materials that have been considered for many applications. The urgency of climate change has spurred research into their use in energy-related applications such as carbon capture and storage.
Carbon capture using absorption
Ionic liquids as solvents
Amines are the most prevalent absorbent in postcombustion carbon capture technology today. In particular, monoethanolamine (MEA) has been used at industrial scale in postcombustion carbon capture, as well as in other CO2 separations, such as "sweetening" of natural gas. However, amines are corrosive, degrade over time, and require large industrial facilities. Ionic liquids, on the other hand, have low vapor pressures. This property results from their strong Coulombic attractive force. Vapor pressure remains low up to the substance's thermal decomposition point (typically >300 °C). In principle, this low vapor pressure simplifies their use and makes them "green" alternatives. Additionally, it reduces the risk of contamination of the CO2 gas stream and of leakage into the environment.
The solubility of CO2 in ionic liquids is governed primarily by the anion, less so by the cation. The hexafluorophosphate (PF6–) and tetrafluoroborate (BF4–) anions have been shown to be especially amenable to CO2 capture.
Ionic liquids have been considered as solvents in a variety of liquid-liquid extraction processes, but never commercialized. In addition, ionic liquids have replaced conventional volatile solvents in industrial processes such as gas absorption and extractive distillation. Ionic liquids are also used as co-solutes for the generation of aqueous biphasic systems and for the purification of biomolecules.
Process
A typical CO2 absorption process consists of a feed gas, an absorptio
Document 3:::
Ionic transfer is the transfer of ions from one liquid phase to another. This is related to the phase transfer catalysts which are a special type of liquid-liquid extraction which is used in synthetic chemistry.
For instance nitrate anions can be transferred between water and nitrobenzene. One way to observe this is to use a cyclic voltammetry experiment where the liquid-liquid interface is the working electrode. This can be done by placing secondary electrodes in each phase and close to interface each phase has a reference electrode. One phase is attached to a potentiostat which is set to zero volts, while the other potentiostat is driven with a triangular wave. This experiment is known as a polarised Interface between Two Immiscible Electrolyte Solutions (ITIES) experiment.
See also
Diffusion potential
Document 4:::
An electrolyte is a medium containing ions that is electrically conducting through the movement of those ions, but not conducting electrons. This includes most soluble salts, acids, and bases dissolved in a polar solvent, such as water. Upon dissolving, the substance separates into cations and anions, which disperse uniformly throughout the solvent. Solid-state electrolytes also exist. In medicine and sometimes in chemistry, the term electrolyte refers to the substance that is dissolved.
Electrically, such a solution is neutral. If an electric potential is applied to such a solution, the cations of the solution are drawn to the electrode that has an abundance of electrons, while the anions are drawn to the electrode that has a deficit of electrons. The movement of anions and cations in opposite directions within the solution amounts to a current. Some gases, such as hydrogen chloride (HCl), under conditions of high temperature or low pressure can also function as electrolytes. Electrolyte solutions can also result from the dissolution of some biological (e.g., DNA, polypeptides) or synthetic polymers (e.g., polystyrene sulfonate), termed "polyelectrolytes", which contain charged functional groups. A substance that dissociates into ions in solution or in the melt acquires the capacity to conduct electricity. Sodium, potassium, chloride, calcium, magnesium, and phosphate in a liquid phase are examples of electrolytes.
In medicine, electrolyte replacement is needed when a person has prolonged vomiting or diarrhea, and as a response to sweating due to strenuous athletic activity. Commercial electrolyte solutions are available, particularly for sick children (such as oral rehydration solution, Suero Oral, or Pedialyte) and athletes (sports drinks). Electrolyte monitoring is important in the treatment of anorexia and bulimia.
In science, electrolytes are one of the main components of electrochemical cells.
In clinical medicine, mentions of electrolytes usually refer m
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are ionic solutes?
A. electrolytes
B. carbohydrates
C. salts
D. solvent
Answer:
|
|
sciq-3724
|
multiple_choice
|
What is the process in which a solid changes directly to a gas without going through the liquid state called?
|
[
"sublimation",
"Diffusion",
"amplification",
"vaporization"
] |
A
|
Relevant Documents:
Document 0:::
Deposition is the phase transition in which gas transforms into solid without passing through the liquid phase. Deposition is a thermodynamic process. The reverse of deposition is sublimation and hence sometimes deposition is called desublimation.
Applications
Examples
One example of deposition is the process by which, in sub-freezing air, water vapour changes directly to ice without first becoming a liquid. This is how frost and hoar frost form on the ground or other surfaces. Another example is when frost forms on a leaf. For deposition to occur, thermal energy must be removed from a gas. When the air becomes cold enough, water vapour in the air surrounding the leaf loses enough thermal energy to change into a solid. Even though the air temperature may be below the dew point, the water vapour may not be able to condense spontaneously if there is no way to remove the latent heat. When the leaf is introduced, the supercooled water vapour immediately begins to condense, but by this point it is already below the freezing point. This causes the water vapour to change directly into a solid.
Another example is the soot that is deposited on the walls of chimneys. Soot molecules rise from the fire in a hot and gaseous state. When they come into contact with the walls they cool, and change to the solid state, without formation of the liquid state. The process is made use of industrially in combustion chemical vapour deposition.
Industrial applications
There is an industrial coatings process, known as evaporative deposition, whereby a solid material is heated to the gaseous state in a low-pressure chamber, the gas molecules travel across the chamber space and then deposit to the solid state on a target surface, forming a smooth and thin layer on the target surface. Again, the molecules do not go through an intermediate liquid state when going from the gas to the solid. See also physical vapor deposition, which is a class of processes used to deposit thin films of various
Document 1:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but cannot usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 2:::
Sublimation is the transition of a substance directly from the solid to the gas state, without passing through the liquid state. Sublimation is an endothermic process that occurs at temperatures and pressures below a substance's triple point in its phase diagram, which corresponds to the lowest pressure at which the substance can exist as a liquid. The reverse process of sublimation is deposition or desublimation, in which a substance passes directly from a gas to a solid phase. Sublimation has also been used as a generic term to describe a solid-to-gas transition (sublimation) followed by a gas-to-solid transition (deposition). While vaporization from liquid to gas occurs as evaporation from the surface if it occurs below the boiling point of the liquid, and as boiling with formation of bubbles in the interior of the liquid if it occurs at the boiling point, there is no such distinction for the solid-to-gas transition which always occurs as sublimation from the surface.
At normal pressures, most chemical compounds and elements possess three different states at different temperatures. In these cases, the transition from the solid to the gaseous state requires an intermediate liquid state. The pressure referred to is the partial pressure of the substance, not the total (e.g. atmospheric) pressure of the entire system. Thus, any solid can sublimate if its vapour pressure is higher than the surrounding partial pressure of the same substance, and in some cases sublimates at an appreciable rate (e.g. water ice just below 0 °C). For some substances, such as carbon and arsenic, sublimation is much easier than evaporation from the melt, because the pressure of their triple point is very high, and it is difficult to obtain them as liquids.
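The vapour-pressure condition above can be illustrated numerically. This sketch uses a Magnus-type empirical fit for the saturation vapour pressure over ice; the constants (6.112 hPa, 22.46, 272.62 °C) and the assumed ambient partial pressure of water are taken from standard meteorological practice, not from this text:

```python
import math

def p_sat_ice_hpa(t_celsius):
    """Magnus-type fit for saturation vapour pressure over ice, in hPa."""
    return 6.112 * math.exp(22.46 * t_celsius / (272.62 + t_celsius))

t = -10.0                 # deg C, the "water ice just below 0 degC" case
p_ice = p_sat_ice_hpa(t)  # roughly 2.6 hPa (about 260 Pa)
p_ambient = 1.5           # hPa, assumed partial pressure of water vapour in cold dry air

# A solid sublimates at an appreciable rate when its vapour pressure
# exceeds the surrounding partial pressure of the same substance.
print(f"p_ice = {p_ice:.2f} hPa, sublimates: {p_ice > p_ambient}")
# → p_ice = 2.60 hPa, sublimates: True
```

With a higher ambient partial pressure (supersaturated air), the inequality flips and deposition, not sublimation, is favoured.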
The term sublimation refers to a physical change of state and is not used to describe the transformation of a solid to a gas in a chemical reaction. For example, the dissociation on heating of solid ammonium chloride into hydrogen chlori
Document 3:::
In materials science, liquefaction is a process that generates a liquid from a solid or a gas or that generates a non-liquid phase which behaves in accordance with fluid dynamics.
It occurs both naturally and artificially. As an example of the latter, a "major commercial application of liquefaction is the liquefaction of air to allow separation of the constituents, such as oxygen, nitrogen, and the noble gases." Another is the conversion of solid coal into a liquid form usable as a substitute for liquid fuels.
Geology
In geology, soil liquefaction refers to the process by which water-saturated, unconsolidated sediments are transformed into a substance that acts like a liquid, often in an earthquake. Soil liquefaction was blamed for building collapses in the city of Palu, Indonesia in October 2018.
In a related phenomenon, liquefaction of bulk materials in cargo ships may cause a dangerous shift in the load.
Physics and chemistry
In physics and chemistry, the phase transitions from solid and gas to liquid (melting and condensation, respectively) may be referred to as liquefaction. The melting point (sometimes called liquefaction point) is the temperature and pressure at which a solid becomes a liquid. In commercial and industrial situations, the process of condensing a gas to liquid is sometimes referred to as liquefaction of gases.
Coal
Coal liquefaction is the production of liquid fuels from coal using a variety of industrial processes.
Dissolution
Liquefaction is also used in commercial and industrial settings to refer to mechanical dissolution of a solid by mixing, grinding or blending with a liquid.
Food preparation
In kitchen or laboratory settings, solids may be chopped into smaller parts sometimes in combination with a liquid, for example in food preparation or laboratory use. This may be done with a blender, or liquidiser in British English.
Irradiation
Liquefaction of silica and silicate glasses occurs on electron beam irradiation of nanos
Document 4:::
Vaporization (or vaporisation) of an element or compound is a phase transition from the liquid phase to vapor. There are two types of vaporization: evaporation and boiling. Evaporation is a surface phenomenon, whereas boiling is a bulk phenomenon.
Evaporation is a phase transition from the liquid phase to vapor (a state of substance below critical temperature) that occurs at temperatures below the boiling temperature at a given pressure. Evaporation occurs on the surface. Evaporation only occurs when the partial pressure of vapor of a substance is less than the equilibrium vapor pressure. For example, due to constantly decreasing pressures, vapor pumped out of a solution will eventually leave behind a cryogenic liquid.
Boiling is also a phase transition from the liquid phase to gas phase, but boiling is the formation of vapor as bubbles of vapor below the surface of the liquid. Boiling occurs when the equilibrium vapor pressure of the substance is greater than or equal to the atmospheric pressure. The temperature at which boiling occurs is the boiling temperature, or boiling point. The boiling point varies with the pressure of the environment.
Sublimation is a direct phase transition from the solid phase to the gas phase, skipping the intermediate liquid phase. Because it does not involve the liquid phase, it is not a form of vaporization.
The term vaporization has also been used in a colloquial or hyperbolic way to refer to the physical destruction of an object that is exposed to intense heat or explosive force, where the object is actually blasted into small pieces rather than literally converted to gaseous form. Examples of this usage include the "vaporization" of the uninhabited Marshall Island of Elugelab in the 1952 Ivy Mike thermonuclear test. Many other examples can be found throughout the various MythBusters episodes that have involved explosives, chief among them being Cement Mix-Up, where they "vaporized" a cement truck with ANFO.
At the moment o
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the process in which a solid changes directly to a gas without going through the liquid state called?
A. sublimation
B. Diffusion
C. amplification
D. vaporization
Answer:
|
|
sciq-10117
|
multiple_choice
|
The energy for an ecosystem can come from sunlight or _________?
|
[
"chemical compounds",
"fossil fuels",
"radiation compounds",
"rain"
] |
A
|
Relevant Documents:
Document 0:::
Earth system science (ESS) is the application of systems science to the Earth. In particular, it considers interactions and 'feedbacks', through material and energy fluxes, between the Earth's sub-systems' cycles, processes and "spheres"—atmosphere, hydrosphere, cryosphere, geosphere, pedosphere, lithosphere, biosphere, and even the magnetosphere—as well as the impact of human societies on these components. At its broadest scale, Earth system science brings together researchers across both the natural and social sciences, from fields including ecology, economics, geography, geology, glaciology, meteorology, oceanography, climatology, paleontology, sociology, and space science. Like the broader subject of systems science, Earth system science assumes a holistic view of the dynamic interaction between the Earth's spheres and their many constituent subsystems fluxes and processes, the resulting spatial organization and time evolution of these systems, and their variability, stability and instability. Subsets of Earth System science include systems geology and systems ecology, and many aspects of Earth System science are fundamental to the subjects of physical geography and climate science.
Definition
The Science Education Resource Center, Carleton College, offers the following description: "Earth System science embraces chemistry, physics, biology, mathematics and applied sciences in transcending disciplinary boundaries to treat the Earth as an integrated system. It seeks a deeper understanding of the physical, chemical, biological and human interactions that determine the past, current and future states of the Earth. Earth System science provides a physical basis for understanding the world in which we live and upon which humankind seeks to achieve sustainability".
Earth System science has articulated four overarching, definitive and critically important features of the Earth System, which include:
Variability: Many of the Earth System's natural 'modes' and variab
Document 1:::
Energy quality is a measure of the ease with which a form of energy can be converted to useful work or to another form of energy: i.e. its content of thermodynamic free energy. A high quality form of energy has a high content of thermodynamic free energy, and therefore a high proportion of it can be converted to work; whereas with low quality forms of energy, only a small proportion can be converted to work, and the remainder is dissipated as heat. The concept of energy quality is also used in ecology, where it is used to track the flow of energy between different trophic levels in a food chain and in thermoeconomics, where it is used as a measure of economic output per unit of energy. Methods of evaluating energy quality often involve developing a ranking of energy qualities in hierarchical order.
Examples: Industrialization, Biology
The consideration of energy quality was a fundamental driver of industrialization from the 18th through 20th centuries. Consider for example the industrialization of New England in the 18th century. This refers to the construction of textile mills containing power looms for weaving cloth. The simplest, most economical and straightforward source of energy was provided by water wheels, extracting energy from a millpond behind a dam on a local creek. If another nearby landowner also decided to build a mill on the same creek, the construction of their dam would lower the overall hydraulic head to power the existing waterwheel, thus hurting power generation and efficiency. This eventually became an issue endemic to the entire region, reducing the overall profitability of older mills as newer ones were built. The search for higher quality energy was a major impetus throughout the 19th and 20th centuries. For example, burning coal to make steam to generate mechanical energy would not have been imaginable in the 18th century; by the end of the 19th century, the use of water wheels was long outmoded. Similarly, the quality of energy from elec
Document 2:::
The Bionic Leaf is a biomimetic system that gathers solar energy via photovoltaic cells that can be stored or used in a number of different functions. Bionic leaves can be composed of both synthetic (metals, ceramics, polymers, etc.) and organic materials (bacteria), or solely made of synthetic materials. The Bionic Leaf has the potential to be implemented in communities, such as urbanized areas to provide clean air as well as providing needed clean energy.
History
In 2009 at MIT, Daniel Nocera's lab first developed the "artificial leaf", a device made from silicon and an anode electrocatalyst for the oxidation of water, capable of splitting water into hydrogen and oxygen gases. In 2012, Nocera came to Harvard and The Silver Lab of Harvard Medical School joined Nocera’s team. Together the teams expanded the existing technology to create the Bionic Leaf. It merged the concept of the artificial leaf with genetically engineered bacteria that feed on the hydrogen and convert CO2 in the air into alcohol fuels or chemicals.
The first version of the teams Bionic Leaf was created in 2015 but the catalyst used was harmful to the bacteria. In 2016, a new catalyst was designed to solve this issue, named the "Bionic Leaf 2.0". Other versions of artificial leaves have been developed by the California Institute of Technology and the Joint Center for Artificial Photosynthesis, the University of Waterloo, and the University of Cambridge.
Mechanics
Photosynthesis
In natural photosynthesis, photosynthetic organisms produce energy-rich organic molecules from water and carbon dioxide by using solar radiation. Therefore, the process of photosynthesis removes carbon dioxide, a greenhouse gas, from the air. Artificial photosynthesis, as performed by the Bionic Leaf, is approximately 10 times more efficient than natural photosynthesis. Using a catalyst, the Bionic Leaf can remove excess carbon dioxide in the air and convert that to useful alcohol fuels, like isopropanol and isobutan
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 4:::
Plant ecology is a subdiscipline of ecology that studies the distribution and abundance of plants, the effects of environmental factors upon the abundance of plants, and the interactions among plants and between plants and other organisms. Examples of these are the distribution of temperate deciduous forests in North America, the effects of drought or flooding upon plant survival, and competition among desert plants for water, or effects of herds of grazing animals upon the composition of grasslands.
A global overview of the Earth's major vegetation types is provided by O.W. Archibold. He recognizes 11 major vegetation types: tropical forests, tropical savannas, arid regions (deserts), Mediterranean ecosystems, temperate forest ecosystems, temperate grasslands, coniferous forests, tundra (both polar and high mountain), terrestrial wetlands, freshwater ecosystems and coastal/marine systems. This breadth of topics shows the complexity of plant ecology, since it includes plants from floating single-celled algae up to large canopy forming trees.
One feature that defines plants is photosynthesis. Photosynthesis is the process of chemical reactions that create glucose and oxygen, which is vital for plant life. One of the most important aspects of plant ecology is the role plants have played in creating the oxygenated atmosphere of earth, an event that occurred some 2 billion years ago. It can be dated by the deposition of banded iron formations, distinctive sedimentary rocks with large amounts of iron oxide. At the same time, plants began removing carbon dioxide from the atmosphere, thereby initiating the process of controlling Earth's climate. A long term trend of the Earth has been toward increasing oxygen and decreasing carbon dioxide, and many other events in the Earth's history, like the first movement of life onto land, are likely tied to this sequence of events.
One of the early classic books on plant ecology was written by J.E. Weaver and F.E. Clements. It
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The energy for an ecosystem can come from sunlight or _________?
A. chemical compounds
B. fossil fuels
C. radiation compounds
D. rain
Answer:
|
|
sciq-1370
|
multiple_choice
|
What term is used to describe animals that excrete ammonia?
|
[
"xerophyte",
"ammonotelic",
"ammonstand",
"spirogyra"
] |
B
|
Relevant Documents:
Document 0:::
Ammonia solution, also known as ammonia water, ammonium hydroxide, ammoniacal liquor, ammonia liquor, aqua ammonia, aqueous ammonia, or (inaccurately) ammonia, is a solution of ammonia in water. It can be denoted by the symbols NH3(aq). Although the name ammonium hydroxide suggests an alkali with the composition [NH4+][OH−], it is actually impossible to isolate samples of NH4OH. The ions NH4+ and OH− do not account for a significant fraction of the total amount of ammonia except in extremely dilute solutions.
Basicity of ammonia in water
In aqueous solution, ammonia deprotonates a small fraction of the water to give ammonium and hydroxide according to the following equilibrium:
NH3 + H2O ⇌ NH4+ + OH−.
In a 1 M ammonia solution, about 0.42% of the ammonia is converted to ammonium, equivalent to pH = 11.62,
because [NH4+] = 0.0042 M, [OH−] = 0.0042 M, [NH3] = 0.9958 M, and pH = 14 + log10[OH−] = 11.62. The base ionization constant is
Kb = [NH4+][OH−]/[NH3] = 1.77 × 10−5.
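The equilibrium figures quoted above (about 0.42% converted, pH near 11.62) can be reproduced with a short calculation; Kb = 1.77 × 10−5 and the 1 M starting concentration are the only inputs taken from the text:

```python
import math

Kb = 1.77e-5   # base ionization constant of ammonia (from the text)
C = 1.0        # initial NH3 concentration, mol/L

# Solve x^2 / (C - x) = Kb for x = [NH4+] = [OH-] via the quadratic formula
x = (-Kb + math.sqrt(Kb**2 + 4 * Kb * C)) / 2

fraction = 100 * x / C      # percent of NH3 converted to ammonium
pH = 14 + math.log10(x)     # pH = 14 - pOH, with pOH = -log10[OH-]

print(f"[OH-] = {x:.4f} M, converted = {fraction:.2f}%, pH = {pH:.2f}")
# → [OH-] = 0.0042 M, converted = 0.42%, pH = 11.62
```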
Saturated solutions
Like other gases, ammonia exhibits decreasing solubility in solvent liquids as the temperature of the solvent increases. Ammonia solutions decrease in density as the concentration of dissolved ammonia increases. At , the density of a saturated solution is 0.88 g/ml and contains 35.6% ammonia by mass, 308 grams of ammonia per litre of solution, and has a molarity of approximately 18 mol/L. At higher temperatures, the molarity of the saturated solution decreases and the density increases. Upon warming of saturated solutions, ammonia gas is released.
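As a rough consistency check on those saturated-solution figures (density 0.88 g/mL, 35.6% ammonia by mass), the molarity follows directly; the molar mass of NH3 (≈ 17.03 g/mol) is a standard value assumed here, not stated in the text:

```python
density_g_per_L = 0.88 * 1000   # 0.88 g/mL, from the text
mass_fraction = 0.356           # 35.6% ammonia by mass, from the text
M_NH3 = 17.03                   # g/mol, standard molar mass (assumed)

grams_per_L = density_g_per_L * mass_fraction   # mass of NH3 per litre of solution
molarity = grams_per_L / M_NH3                  # mol of NH3 per litre

print(f"{grams_per_L:.0f} g/L, {molarity:.1f} mol/L")
# → 313 g/L, 18.4 mol/L
```

This lands in reasonable agreement with the quoted ~308 g of ammonia per litre and molarity of approximately 18 mol/L.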
Applications
In contrast to anhydrous ammonia, aqueous ammonia finds few non-niche uses outside of cleaning agents.
Household cleaner
Diluted (1–3%) ammonia is also an ingredient of numerous cleaning agents, including many window cleaning formulas. Because aqueous ammonia is a gas dissolved in water, as the water evaporates from a window, the gas evaporates also, leaving the window streak-free.
In addition to use as an ingredient in cleansers with other cleansing ingredients,
Document 1:::
Metabolic wastes or excrements are substances left over from metabolic processes (such as cellular respiration) which cannot be used by the organism (they are surplus or toxic), and must therefore be excreted. This includes nitrogen compounds, water, CO2, phosphates, sulphates, etc. Animals treat these compounds as excreta. Plants have metabolic pathways which transform some of them (primarily the oxygen compounds) into useful substances.
All the metabolic wastes are excreted in the form of water solutes through the excretory organs (nephridia, Malpighian tubules, kidneys), with the exception of CO2, which is excreted together with water vapor through the lungs. The elimination of these compounds enables the chemical homeostasis of the organism.
Nitrogen wastes
The nitrogen compounds through which excess nitrogen is eliminated from organisms are called nitrogenous wastes () or nitrogen wastes. They are ammonia, urea, uric acid, and creatinine. All of these substances are produced from protein metabolism. In many animals, the urine is the main route of excretion for such wastes; in some, it is the feces.
Ammonotelism
Ammonotelism is the excretion of ammonia and ammonium ions. Ammonia (NH3) forms with the oxidation of amino groups (-NH2), which are removed from the proteins when they convert into carbohydrates. It is a very toxic substance to tissues and extremely soluble in water. Only one nitrogen atom is removed with it. A lot of water is needed for the excretion of ammonia: about 0.5 L of water is needed per 1 g of nitrogen to maintain ammonia levels in the excretory fluid below the level in body fluids to prevent toxicity. Thus, marine organisms excrete ammonia directly into the water and are called ammonotelic. Ammonotelic animals include crustaceans, platyhelminths, cnidarians, poriferans, echinoderms, and other aquatic invertebrates.
Ureotelism
The excretion of urea is called ureotelism. Land animals, mainly amphibians and mammals, convert
Document 2:::
Reactive nitrogen ("Nr"), also known as fixed nitrogen, refers to all forms of nitrogen present in the environment except for molecular nitrogen (N2). While nitrogen is an essential element for life on Earth, molecular nitrogen is comparatively unreactive, and must be converted to other chemical forms via nitrogen fixation before it can be used for growth. Common Nr species include nitrogen oxides (NOx), ammonia (NH3), nitrous oxide (N2O), as well as the anion nitrate (NO3−).
Biologically, nitrogen is "fixed" mainly by the microbes (e.g., Bacteria and Archaea) of the soil that fix N2 into mainly NH3 but also other species. Legumes, a type of plant in the Fabaceae family, are symbionts to some of these microbes that fix N2. NH3 is a building block of amino acids and proteins, amongst other things essential for life. However, just over half of all reactive nitrogen entering the biosphere is attributable to anthropogenic activity such as industrial fertilizer production. While reactive nitrogen is eventually converted back into molecular nitrogen via denitrification, an excess of reactive nitrogen can lead to problems such as eutrophication in marine ecosystems.
Reactive nitrogen compounds
In the environmental context, reactive nitrogen compounds include the following classes:
oxide gases: nitric oxide, nitrogen dioxide, nitrous oxide. Containing oxidized nitrogen, mainly the result of industrial processes and internal combustion engines.
anions: nitrate, nitrite. Nitrate is a common component of fertilizers, e.g. ammonium nitrate.
amine derivatives: ammonia and ammonium salts, urea. Containing reduced nitrogen, these compounds are components of fertilizers.
All of these compounds enter into the nitrogen cycle.
As a consequence, an excess of Nr can affect the environment relatively quickly. This also means that nitrogen-related problems need to be looked at in an integrated manner.
See also
Human impact on the nitrogen cycle
Document 3:::
There are several taxons named amphibia. These include:
Amphibia (class), classis Amphibia, the amphibians
Species
Species with the specific epithet 'amphibia'
Rorippa amphibia (R. amphibia), a plant
Persicaria amphibia (P. amphibia), a plant
Neritina amphibia (N. amphibia), a snail
Aranea amphibia (A. amphibia), a spider
See also
Amphibian (disambiguation)
Amphibia (disambiguation)
Document 4:::
Ammonium nonanoate is a nonsystemic, broad-spectrum contact herbicide that has no soil activity. It can be used for the suppression and control of weeds, including grasses, vines, underbrush, and annual/perennial plants, including moss, saplings, and tree suckers. Ammonium nonanoate is marketed as an aqueous solution at its maximum concentration in water (40%) at room temperature. Solutions are colorless to pale yellow liquids with a slight fatty acid odor. It is stable in storage. The pure compound exists as white crystals.
Ammonium nonanoate is made from ammonia and nonanoic acid, a carboxylic acid widely distributed in nature, mainly as derivatives (esters) in such foods as apples, grapes, cheese, milk, rice, beans, oranges, and potatoes and in many other nonfood sources.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What term is used to describe animals that excrete ammonia?
A. xerophyte
B. ammonotelic
C. ammonstand
D. spirogyra
Answer:
|
|
sciq-3097
|
multiple_choice
|
What is a popular treatment for kidney failure?
|
[
"electrolysis",
"psychotherapy",
"dialysis",
"metastasis"
] |
C
|
Relevant Documents:
Document 0:::
TIME-ITEM is an ontology of Topics that describes the content of undergraduate medical education. TIME is an acronym for "Topics for Indexing Medical Education"; ITEM is an acronym for "Index de thèmes pour l’éducation médicale." Version 1.0 of the taxonomy has been released and the web application that allows users to work with it is still under development. Its developers are seeking more collaborators to expand and validate the taxonomy and to guide future development of the web application.
History
The development of TIME-ITEM began at the University of Ottawa in 2006. It was initially developed to act as a content index for a curriculum map being constructed there. After its initial presentation at the 2006 conference of the Canadian Association for Medical Education, early collaborators included the University of British Columbia, McMaster University and Queen's University.
Features
The TIME-ITEM ontology is unique in that it is designed specifically for undergraduate medical education. As such, it includes fewer strictly biomedical entries than other common medical vocabularies (such as MeSH or SNOMED CT) but more entries relating to the medico-social concepts of communication, collaboration, professionalism, etc.
Topics within TIME-ITEM are arranged poly-hierarchically, meaning any Topic can have more than one parent. Relationships are established based on the logic that learning about a Topic contributes to the learning of all its parent Topics.
In addition to housing the ontology of Topics, the TIME-ITEM web application can house multiple Outcome frameworks. All Outcomes, whether private Outcomes entered by single institutions or publicly available medical education Outcomes (such as CanMeds 2005) are hierarchically linked to one or more Topics in the ontology. In this way, the contribution of each Topic to multiple Outcomes is made explicit.
The structure of the XML documents exported from TIME-ITEM (which contain the hierarchy of Outco
Document 1:::
Alternative medicine degrees include academic degrees, first professional degrees, qualifications or diplomas issued by accredited and legally recognised academic institutions in alternative medicine or related areas, either human or animal.
Examples
Examples of alternative medicine degrees include:
Ayurveda - BSc, MSc, BAMC, MD(Ayurveda), M.S.(Ayurveda), Ph.D(Ayurveda)
Siddha medicine - BSMS, MD(Siddha), Ph.D(Siddha)
Acupuncture - BSc, LAc, DAc, AP, DiplAc, MAc
Herbalism - Acs, BSc, Msc.
Homeopathy - BSc, MSc, DHMs, BHMS, M.D. (HOM), PhD in homoeopathy
Naprapathy - DN
Naturopathic medicine - BSc, MSc, BNYS, MD (Naturopathy), ND, NMD
Oriental Medicine - BSc, MSOM, MSTOM, KMD (Korea), BCM (Hong Kong), MCM (Hong Kong), BChinMed (Hong Kong), MChinMed (Hong Kong), MD (Taiwan), MB (China), TCM-Traditional Chinese medicine master (China)
Osteopathy - BOst, BOstMed, BSc (Osteo), DipOsteo
Document 2:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 3:::
Medical education is education related to the practice of being a medical practitioner, including the initial training to become a physician (i.e., medical school and internship) and additional training thereafter (e.g., residency, fellowship, and continuing medical education).
Medical education and training varies considerably across the world. Various teaching methodologies have been used in medical education, which is an active area of educational research.
Medical education is also the subject-didactic academic field of educating medical doctors at all levels, including entry-level, post-graduate, and continuing medical education. Specific requirements such as entrustable professional activities must be met before moving on in stages of medical education.
Common techniques and evidence base
Medical education applies theories of pedagogy specifically in the context of medical education. Medical education has been a leader in the field of evidence-based education, through the development of evidence syntheses such as the Best Evidence Medical Education collection, formed in 1999, which aimed to "move from opinion-based education to evidence-based education". Common evidence-based techniques include the Objective Structured Clinical Examination (commonly known as the OSCE) to assess clinical skills, and reliable checklist-based assessments to determine the development of soft skills such as professionalism. However, there is a persistence of ineffective instructional methods in medical education, such as the matching of teaching to learning styles and Edgar Dale's "Cone of Learning".
Entry-level education
Entry-level medical education programs are tertiary-level courses undertaken at a medical school. Depending on jurisdiction and university, these may be either undergraduate-entry (most of Europe, Asia, South America and Oceania), or graduate-entry programs (mainly Australia, Philippines and North America). Some jurisdictions and universities provide both u
Document 4:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a popular treatment for kidney failure?
A. electrolysis
B. psychotherapy
C. dialysis
D. metastasis
Answer:
|
|
sciq-462
|
multiple_choice
|
Frequency and intensity are two measurable properties of what?
|
[
"troughs",
"heat",
"wave",
"lines"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematics, physics, and engineering, spatial frequency is a characteristic of any structure that is periodic across position in space. The spatial frequency is a measure of how often sinusoidal components (as determined by the Fourier transform) of the structure repeat per unit of distance.
The SI unit of spatial frequency is the reciprocal metre (m⁻¹), although cycles per meter (c/m) is also common. In image-processing applications, spatial frequency is often expressed in units of cycles per millimeter (c/mm) or also line pairs per millimeter (LP/mm).
In wave propagation, the spatial frequency is also known as wavenumber. Ordinary wavenumber ν is defined as the reciprocal of wavelength λ and is commonly denoted by ν or sometimes ξ:
ν = 1/λ
Angular wavenumber k, expressed in radian per metre (rad/m), is related to ordinary wavenumber and wavelength by
k = 2πν = 2π/λ
Visual perception
In the study of visual perception, sinusoidal gratings are frequently used to probe the capabilities of the visual system, such as contrast sensitivity. In these stimuli, spatial frequency is expressed as the number of cycles per degree of visual angle. Sine-wave gratings also differ from one another in amplitude (the magnitude of difference in intensity between light and dark stripes), orientation, and phase.
Spatial-frequency theory
The spatial-frequency theory refers to the theory that the visual cortex operates on a code of spatial frequency, not on the code of straight edges and lines hypothesised by Hubel and Wiesel on the basis of early experiments on V1 neurons in the cat. In support of this theory is the experimental observation that the visual cortex neurons respond even more robustly to sine-wave gratings that are placed at specific angles in their receptive fields than they do to edges or bars. Most neurons in the primary visual cortex respond best when a sine-wave grating of a particular frequency is presented at a particular angle in a particular location in the visual field. (However, a
Document 2:::
The transmission curve or transmission characteristic is the mathematical function or graph that describes the transmission fraction of an optical or electronic filter as a function of frequency or wavelength. It is an instance of a transfer function but, unlike the case of, for example, an amplifier, output never exceeds input (maximum transmission is 100%). The term is often used in commerce, science, and technology to characterise filters.
The term has also long been used in fields such as geophysics and astronomy to characterise the properties of regions through which radiation passes, such as the ionosphere.
See also
Electronic filter — examples of transmission characteristics of electronic filters
Document 3:::
Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary educ
Document 4:::
In signal processing, the energy of a continuous-time signal x(t) is defined as the area under the squared magnitude of the considered signal, i.e., mathematically
E_s = ∫_{−∞}^{∞} |x(t)|² dt
The unit of E_s will be (unit of signal)²·seconds.
And the energy of a discrete-time signal x(n) is defined mathematically as
E_s = Σ_{n=−∞}^{∞} |x(n)|²
Relationship to energy in physics
Energy in this context is not, strictly speaking, the same as the conventional notion of energy in physics and the other sciences. The two concepts are, however, closely related, and it is possible to convert from one to the other:
where Z represents the magnitude, in appropriate units of measure, of the load driven by the signal.
For example, if x(t) represents the potential (in volts) of an electrical signal propagating across a transmission line, then Z would represent the characteristic impedance (in ohms) of the transmission line. The units of measure for the signal energy would appear as volt²·seconds, which is not dimensionally correct for energy in the sense of the physical sciences. After dividing by Z, however, the dimensions of E would become volt²·seconds per ohm,
which is equivalent to joules, the SI unit for energy as defined in the physical sciences.
Spectral energy density
Similarly, the spectral energy density of signal x(t) is
E_s(f) = |X(f)|²
where X(f) is the Fourier transform of x(t).
For example, if x(t) represents the magnitude of the electric field component (in volts per meter) of an optical signal propagating through free space, then the dimensions of X(f) would become volt·seconds per meter and would represent the signal's spectral energy density (in volt²·second² per meter²) as a function of frequency f (in hertz). Again, these units of measure are not dimensionally correct in the true sense of energy density as defined in physics. Dividing by Z₀, the characteristic impedance of free space (in ohms), the dimensions become joule-seconds per meter² or, equivalently, joules per meter² per hertz, which is dimensionally correct in SI
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Frequency and intensity are two measurable properties of what?
A. troughs
B. heat
C. wave
D. lines
Answer:
|
|
sciq-4759
|
multiple_choice
|
What forms when tectonic plates move above a hot spot?
|
[
"volcanic chain",
"volcanic setting",
"earthquake chain",
"volcanic system"
] |
A
|
Relevant Documents:
Document 0:::
Maui Nui is a modern geologists' name given to a prehistoric Hawaiian island and the corresponding modern biogeographic region. Maui Nui is composed of four modern islands: Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe. Administratively, the four modern islands comprise Maui County (and a tiny part of Molokaʻi called Kalawao County). Long after the breakup of Maui Nui, the four modern islands retained plant and animal life similar to each other. Thus, Maui Nui is not only a prehistoric island but also a modern biogeographic region.
Geology
Maui Nui formed and broke up during the Pleistocene Epoch, which lasted from about 2.58 million to 11,700 years ago.
Maui Nui is built from seven shield volcanoes. The three oldest are Penguin Bank, West Molokaʻi, and East Molokaʻi, which probably range from slightly over to slightly less than 2 million years old. The four younger volcanoes are Lāna‘i, West Maui, Kaho‘olawe, and Haleakalā, which probably formed between 1.5 and 2 million years ago.
At its prime 1.2 million years ago, Maui Nui was , 50% larger than today's Hawaiʻi Island. The island of Maui Nui included four modern islands (Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe) and landmass west of Molokaʻi called Penguin Bank, which is now completely submerged.
Maui Nui broke up as rising sea levels flooded the connections between the volcanoes. The breakup was complex because global sea levels rose and fell intermittently during the Quaternary glaciation. About 600,000 years ago, the connection between Molokaʻi and the island of Lāna‘i/Maui/Kahoʻolawe became intermittent. About 400,000 years ago, the connection between Lāna‘i and Maui/Kahoʻolawe also became intermittent. The connection between Maui and Kahoʻolawe was permanently broken between 200,000 and 150,000 years ago. Maui, Lāna‘i, and Molokaʻi were connected intermittently thereafter, most recently about 18,000 years ago during the Last Glacial Maximum.
Today, the sea floor between these four islands is relatively shallow
Document 1:::
The plate theory is a model of volcanism that attributes all volcanic activity on Earth, even that which appears superficially to be anomalous, to the operation of plate tectonics. According to the plate theory, the principal cause of volcanism is extension of the lithosphere. Extension of the lithosphere is a function of the lithospheric stress field. The global distribution of volcanic activity at a given time reflects the contemporaneous lithospheric stress field, and changes in the spatial and temporal distribution of volcanoes reflect changes in the stress field. The main factors governing the evolution of the stress field are:
Changes in the configuration of plate boundaries.
Vertical motions.
Thermal contraction.
Lithospheric extension enables pre-existing melt in the crust and mantle to escape to the surface. If extension is severe and thins the lithosphere to the extent that the asthenosphere rises, then additional melt is produced by decompression upwelling.
Origins of the plate theory
Developed during the late 1960s and 1970s, plate tectonics provided an elegant explanation for most of the Earth's volcanic activity. At spreading boundaries where plates move apart, the asthenosphere decompresses and melts to form new oceanic crust. At subduction zones, slabs of oceanic crust sink into the mantle, dehydrate, and release volatiles which lower the melting temperature and give rise to volcanic arcs and back-arc extensions. Several volcanic provinces, however, do not fit this simple picture and have traditionally been considered exceptional cases which require a non-plate-tectonic explanation.
Just prior to the development of plate tectonics in the early 1960s, the Canadian Geophysicist John Tuzo Wilson suggested that chains of volcanic islands form from movement of the seafloor over relatively stationary hotspots in stable centres of mantle convection cells. In the early 1970s, Wilson's idea was revived by the American geophysicist W. Jason Morgan. In
Document 2:::
In geodynamics lower crustal flow is the mainly lateral movement of material within the lower part of the continental crust by a ductile flow mechanism. It is thought to be an important process during both continental collision and continental break-up.
Rheology
The tendency of the lower crust to flow is controlled by its rheology. Ductile flow in the lower crust is assumed to be controlled by the deformation of quartz and/or plagioclase feldspar as its composition is thought to be granodioritic to dioritic. With normal thickness continental crust and a normal geothermal gradient, the lower crust, below the brittle–ductile transition zone, exhibits ductile flow behaviour under geological strain rates. Factors that can vary this behaviour include: water content, thickness, heat flow and strain-rate.
Collisional belts
In some areas of continental collision, the lower part of the thickened crust that results is interpreted to flow laterally, such as in the Tibetan plateau, and the Altiplano in the Bolivian Andes.
Document 3:::
A mantle plume is a proposed mechanism of convection within the Earth's mantle, hypothesized to explain anomalous volcanism. Because the plume head partially melts on reaching shallow depths, a plume is often invoked as the cause of volcanic hotspots, such as Hawaii or Iceland, and large igneous provinces such as the Deccan and Siberian Traps. Some such volcanic regions lie far from tectonic plate boundaries, while others represent unusually large-volume volcanism near plate boundaries.
Concepts
Mantle plumes were first proposed by J. Tuzo Wilson in 1963 and further developed by W. Jason Morgan in 1971 and 1972. A mantle plume is posited to exist where super-heated material forms (nucleates) at the core-mantle boundary and rises through the Earth's mantle. Rather than a continuous stream, plumes should be viewed as a series of hot bubbles of material. Reaching the brittle upper Earth's crust they form diapirs. These diapirs are "hotspots" in the crust. In particular, the concept that mantle plumes are fixed relative to one another and anchored at the core-mantle boundary would provide a natural explanation for the time-progressive chains of older volcanoes seen extending out from some such hotspots, for example, the Hawaiian–Emperor seamount chain. However, paleomagnetic data show that mantle plumes can also be associated with Large Low Shear Velocity Provinces (LLSVPs) and do move relative to each other.
The current mantle plume theory is that material and energy from Earth's interior are exchanged with the surface crust in two distinct and largely independent convective flows:
as previously theorized and widely accepted, the predominant, steady state plate tectonic regime driven by upper mantle convection, mainly the sinking of cold plates of lithosphere back into the asthenosphere.
the punctuated, intermittently dominant mantle overturn regime driven by plume convection that carries heat upward from the core-mantle boundary in a narrow column. This secon
Document 4:::
Intraplate volcanism is volcanism that takes place away from the margins of tectonic plates. Most volcanic activity takes place on plate margins, and there is broad consensus among geologists that this activity is explained well by the theory of plate tectonics. However, the origins of volcanic activity within plates remains controversial.
Mechanisms
Mechanisms that have been proposed to explain intraplate volcanism include mantle plumes; non-rigid motion within tectonic plates (the plate model); and impact events. It is likely that different mechanisms accounts for different cases of intraplate volcanism.
Plume model
A mantle plume is a proposed mechanism of convection of abnormally hot rock within the Earth's mantle. Because the plume head partly melts on reaching shallow depths, a plume is often invoked as the cause of volcanic hotspots, such as Hawaii or Iceland, and large igneous provinces such as the Deccan and Siberian traps. Some such volcanic regions lie far from tectonic plate boundaries, while others represent unusually large-volume volcanism near plate boundaries.
The hypothesis of mantle plumes has required progressive hypothesis-elaboration leading to variant propositions such as mini-plumes and pulsing plumes.
Concepts
Mantle plumes were first proposed by J. Tuzo Wilson in 1963 and further developed by W. Jason Morgan in 1971. A mantle plume is posited to exist where hot rock nucleates at the core-mantle boundary and rises through the Earth's mantle becoming a diapir in the Earth's crust. In particular, the concept that mantle plumes are fixed relative to one another, and anchored at the core-mantle boundary, would provide a natural explanation for the time-progressive chains of older volcanoes seen extending out from some such hot spots, such as the Hawaiian–Emperor seamount chain. However, paleomagnetic data show that mantle plumes can be associated with Large Low Shear Velocity Provinces (LLSVPs) and do move.
Two largely independent convec
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What forms when tectonic plates move above a hot spot?
A. volcanic chain
B. volcanic setting
C. earthquake chain
D. volcanic system
Answer:
|
|
sciq-11448
|
multiple_choice
|
What are the "code words" of the genetic code?
|
[
"codons",
"nucleotides",
"polymers",
"lipids"
] |
A
|
Relevant Documents:
Document 0:::
In molecular biology, a library is a collection of DNA fragments that is stored and propagated in a population of micro-organisms through the process of molecular cloning. There are different types of DNA libraries, including cDNA libraries (formed from reverse-transcribed RNA), genomic libraries (formed from genomic DNA) and randomized mutant libraries (formed by de novo gene synthesis where alternative nucleotides or codons are incorporated). DNA library technology is a mainstay of current molecular biology, genetic engineering, and protein engineering, and the applications of these libraries depend on the source of the original DNA fragments. There are differences in the cloning vectors and techniques used in library preparation, but in general each DNA fragment is uniquely inserted into a cloning vector and the pool of recombinant DNA molecules is then transferred into a population of bacteria (a Bacterial Artificial Chromosome or BAC library) or yeast such that each organism contains on average one construct (vector + insert). As the population of organisms is grown in culture, the DNA molecules contained within them are copied and propagated (thus, "cloned").
Terminology
The term "library" can refer to a population of organisms, each of which carries a DNA molecule inserted into a cloning vector, or alternatively to the collection of all of the cloned vector molecules.
cDNA libraries
A cDNA library represents a sample of the mRNA purified from a particular source (either a collection of cells, a particular tissue, or an entire organism), which has been converted back to a DNA template by the use of the enzyme reverse transcriptase. It thus represents the genes that were being actively transcribed in that particular source under the physiological, developmental, or environmental conditions that existed when the mRNA was purified. cDNA libraries can be generated using techniques that promote "full-length" clones or under conditions that generate shorter f
Document 1:::
Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U).
Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), Valine (Val, V)
Differences from the standard code
See also
List of genetic codes
Document 2:::
The central dogma of molecular biology is an explanation of the flow of genetic information within a biological system. It is often stated as "DNA makes RNA, and RNA makes protein", although this is not its original meaning. It was first stated by Francis Crick in 1957, then published in 1958:
He re-stated it in a Nature paper published in 1970: "The central dogma of molecular biology deals with the detailed residue-by-residue transfer of sequential information. It states that such information cannot be transferred back from protein to either protein or nucleic acid."
A second version of the central dogma is popular but incorrect. This is the simplistic DNA → RNA → protein pathway published by James Watson in the first edition of The Molecular Biology of the Gene (1965). Watson's version differs from Crick's because Watson describes a two-step (DNA → RNA and RNA → protein) process as the central dogma. While the dogma as originally stated by Crick remains valid today, Watson's version does not.
The dogma is a framework for understanding the transfer of sequence information between information-carrying biopolymers, in the most common or general case, in living organisms. There are 3 major classes of such biopolymers: DNA and RNA (both nucleic acids), and protein. There are conceivable direct transfers of information that can occur between these. The dogma classes these into 3 groups of 3: three general transfers (believed to occur normally in most cells), two special transfers (known to occur, but only under specific conditions in case of some viruses or in a laboratory), and four unknown transfers (believed never to occur). The general transfers describe the normal flow of biological information: DNA can be copied to DNA (DNA replication), DNA information can be copied into mRNA (transcription), and proteins can be synthesized using the information in mRNA as a template (translation). The special transfers describe: RNA being copied from RNA (RNA replication), D
Document 3:::
A codon table can be used to translate a genetic code into a sequence of amino acids. The standard genetic code is traditionally represented as an RNA codon table, because when proteins are made in a cell by ribosomes, it is messenger RNA (mRNA) that directs protein synthesis. The mRNA sequence is determined by the sequence of genomic DNA. In this context, the standard genetic code is referred to as translation table 1. It can also be represented in a DNA codon table. The DNA codons in such tables occur on the sense DNA strand and are arranged in a 5′-to-3′ direction. Different tables with alternate codons are used depending on the source of the genetic code, such as from a cell nucleus, mitochondrion, plastid, or hydrogenosome.
There are 64 different codons in the genetic code and the below tables; most specify an amino acid. Three sequences, UAG, UGA, and UAA, known as stop codons, do not code for an amino acid but instead signal the release of the nascent polypeptide from the ribosome. In the standard code, the sequence AUG—read as methionine—can serve as a start codon and, along with sequences such as an initiation factor, initiates translation. In rare instances, start codons in the standard code may also include GUG or UUG; these codons normally represent valine and leucine, respectively, but as start codons they are translated as methionine or formylmethionine.
The first table—the standard table—can be used to translate nucleotide triplets into the corresponding amino acid or appropriate signal if it is a start or stop codon. The second table, appropriately called the inverse, does the opposite: it can be used to deduce a possible triplet code if the amino acid is known. As multiple codons can code for the same amino acid, the International Union of Pure and Applied Chemistry's (IUPAC) nucleic acid notation is given in some instances.
Translation table 1
Standard RNA codon table
Inverse RNA codon table
Standard DNA codon table
Inverse DNA codon table
Document 4:::
DNA digital data storage is the process of encoding and decoding binary data to and from synthesized strands of DNA.
While DNA as a storage medium has enormous potential because of its high storage density, its practical use is currently severely limited because of its high cost and very slow read and write times.
In June 2019, scientists reported that all 16 GB of text from the English Wikipedia had been encoded into synthetic DNA. In 2021, scientists reported that a custom DNA data writer had been developed that was capable of writing data into DNA at 18 Mbps.
Encoding methods
Countless methods for encoding data in DNA are possible. The optimal methods are those that make economical use of DNA and protect against errors. If the message DNA is intended to be stored for a long period of time, for example, 1,000 years, it is also helpful if the sequence is obviously artificial and the reading frame is easy to identify.
Encoding text
Several simple methods for encoding text have been proposed. Most of these involve translating each letter into a corresponding "codon", consisting of a unique small sequence of nucleotides in a lookup table. Some examples of these encoding schemes include Huffman codes, comma codes, and alternating codes.
Encoding arbitrary data
To encode arbitrary data in DNA, the data is typically first converted into ternary (base 3) data rather than binary (base 2) data. Each digit (or "trit") is then converted to a nucleotide using a lookup table. To prevent homopolymers (repeating nucleotides), which can cause problems with accurate sequencing, the result of the lookup also depends on the preceding nucleotide. Using the example lookup table below, if the previous nucleotide in the sequence is T (thymine), and the trit is 2, the next nucleotide will be G (guanine).
Various systems may be incorporated to partition and address the data, as well as to protect it from errors. One approach to error correction is to regularly intersperse synchroniz
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are the "code words" of the genetic code?
A. codons
B. nucleotides
C. polymers
D. lipids
Answer:
|
|
ai2_arc-45
|
multiple_choice
|
How long does it take for Earth to rotate on its axis seven times?
|
[
"one day",
"one week",
"one month",
"one year"
] |
B
|
Relevant Documents:
Document 0:::
Earth's rotation or Earth's spin is the rotation of planet Earth around its own axis, as well as changes in the orientation of the rotation axis in space. Earth rotates eastward, in prograde motion. As viewed from the northern polar star Polaris, Earth turns counterclockwise.
The North Pole, also known as the Geographic North Pole or Terrestrial North Pole, is the point in the Northern Hemisphere where Earth's axis of rotation meets its surface. This point is distinct from Earth's North Magnetic Pole. The South Pole is the other point where Earth's axis of rotation intersects its surface, in Antarctica.
Earth rotates once in about 24 hours with respect to the Sun, but once every 23 hours, 56 minutes and 4 seconds with respect to other distant stars (see below). Earth's rotation is slowing slightly with time; thus, a day was shorter in the past. This is due to the tidal effects the Moon has on Earth's rotation. Atomic clocks show that the modern day is longer by about 1.7 milliseconds than a century ago, slowly increasing the rate at which UTC is adjusted by leap seconds. Analysis of historical astronomical records shows a slowing trend; the length of a day increased by about 2.3 milliseconds per century since the 8th century BCE.
Scientists reported that in 2020 Earth had started spinning faster, after consistently spinning slower than 86,400 seconds per day in the decades before. On June 29, 2022, Earth's spin was completed in 1.59 milliseconds under 24 hours, setting a new record. Because of that trend, engineers worldwide are discussing a 'negative leap second' and other possible timekeeping measures.
This increase in speed is thought to be due to various factors, including the complex motion of its molten core, oceans, and atmosphere, the effect of celestial bodies such as the Moon, and possibly climate change, which is causing the ice at Earth's poles to melt. The masses of ice account for the Earth's shape being that of an oblate spheroid, bulging around t
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological and molecular tests. A set of 60 questions was taken by all test takers for Biology, and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average score for Molecular was 630, while the average for Ecological was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
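The scoring rule described above is easy to express in code. The function below is an illustrative sketch; the subsequent scaling of the raw score to the 200-800 range was a separate, table-based step that is not modeled here.

```python
def sat_raw_score(correct: int, incorrect: int, blank: int) -> float:
    """Raw score under the rule described above: +1 per correct answer,
    -1/4 per incorrect answer, 0 for questions left blank."""
    return correct * 1.0 - incorrect * 0.25

# Hypothetical example: 60 correct, 12 incorrect, 8 blank out of 80
print(sat_raw_score(60, 12, 8))  # 57.0
```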
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 3:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 4:::
Advanced Placement (AP) Physics 1 is a year-long introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester algebra-based university course in mechanics. Along with AP Physics 2, the first AP Physics 1 exam was administered in 2015.
In its first five years, AP Physics 1 covered forces and motion, conservation laws, waves, and electricity. As of 2021, AP Physics 1 includes mechanics topics only.
History
The heavily computational AP Physics B course served for four decades as the College Board's algebra-based offering. As part of the College Board's redesign of science courses, AP Physics B was discontinued; therefore, AP Physics 1 and 2 were created with guidance from the National Research Council and the National Science Foundation. The course covers material of a first-semester university undergraduate physics course offered at American universities that use best practices of physics pedagogy. The first AP Physics 1 classes had begun in the 2014–2015 school year, with the first AP exams administered in May 2015.
Curriculum
AP Physics 1 is an algebra-based, introductory college-level physics course that includes mechanics topics such as motion, force, momentum, energy, harmonic motion, and rotation. The College Board published a curriculum framework that includes seven big ideas on which the AP Physics 1 and 2 courses are based, along with "enduring understandings" students are expected to acquire within each of the big ideas.
Questions for the exam are constructed with direct reference to items in the curriculum framework. Student understanding of each topic is tested with reference to multiple skills—that is, questions require students to use quantitative, semi-quantitative, qualitative, and experimental reasoning in each content area.
Exam
Science Practices Assessed
Multiple Choice and Free Response Sections of the AP® Physics 1 exam are also assessed on scientific prac
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How long does it take for Earth to rotate on its axis seven times?
A. one day
B. one week
C. one month
D. one year
Answer:
|
|
sciq-10410
|
multiple_choice
|
Gas, liquid, and solid describe what property of matter?
|
[
"states",
"Chemical",
"Physical",
"Quatitative"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
States of matter are distinguished by changes in the properties of matter associated with external factors like pressure and temperature. States are usually distinguished by a discontinuity in one of those properties: for example, raising the temperature of ice produces a discontinuity at 0°C, as energy goes into a phase transition, rather than temperature increase. The three classical states of matter are solid, liquid and gas. In the 20th century, however, increased understanding of the more exotic properties of matter resulted in the identification of many additional states of matter, none of which are observed in normal conditions.
Low-energy states of matter
Classical states
Solid: A solid holds a definite shape and volume without a container. The particles are held very close to each other.
Amorphous solid: A solid in which there is no far-range order of the positions of the atoms.
Crystalline solid: A solid in which atoms, molecules, or ions are packed in regular order.
Plastic crystal: A molecular solid with long-range positional order but with constituent molecules retaining rotational freedom.
Quasicrystal: A solid in which the positions of the atoms have long-range order, but this is not in a repeating pattern.
Liquid: A mostly non-compressible fluid. Able to conform to the shape of its container but retains a (nearly) constant volume independent of pressure.
Liquid crystal: Properties intermediate between liquids and crystals. Generally, able to flow like a liquid but exhibiting long-range order.
Gas: A compressible fluid. Not only will a gas take the shape of its container but it will also expand to fill the container.
Modern states
Plasma: Free charged particles, usually in equal numbers, such as ions and electrons. Unlike gases, plasma may self-generate magnetic fields and electric currents and respond strongly and collectively to electromagnetic forces. Plasma is very uncommon on Earth (except for the ionosphere), although it is the mo
Document 2:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
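The defining closure property of a knowledge space can be checked mechanically. The sketch below tests that the empty state and the full domain are feasible and that feasible states are closed under union; the three-skill domain and its states are invented examples, not taken from the literature.

```python
from itertools import combinations

def is_knowledge_space(domain: frozenset, states: set) -> bool:
    """Check the basic axioms of a knowledge space: the empty state and
    the full domain are feasible, and the feasible states are closed
    under union (the union of any two feasible states is feasible)."""
    if frozenset() not in states or domain not in states:
        return False
    return all(a | b in states for a, b in combinations(states, 2))

# Hypothetical skill domain with "counting" as a prerequisite chain
Q = frozenset({"counting", "addition", "multiplication"})
states = {
    frozenset(),
    frozenset({"counting"}),
    frozenset({"counting", "addition"}),
    Q,
}
print(is_knowledge_space(Q, states))  # True
```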
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of
Document 3:::
A characteristic property is a chemical or physical property that helps identify and classify substances. The characteristic properties of a substance are always the same whether the sample being observed is large or small. Thus, conversely, if the property of a substance changes as the sample size changes, that property is not a characteristic property. Examples of physical properties that are not characteristic properties are mass and volume. Examples of characteristic properties include melting points, boiling points, density, viscosity, solubility, crystal shape, and color. Substances with characteristic properties can be separated. For example, in fractional distillation, liquids are separated using the boiling point. Water's boiling point, for instance, is 212 degrees Fahrenheit (100 degrees Celsius) at standard pressure.
Identifying a substance
Every characteristic property is unique to one given substance. Scientists use characteristic properties to identify unknown substances. However, characteristic properties are most useful for distinguishing between two or more substances, not identifying a single substance. For example, isopropanol and water can be distinguished by the characteristic property of odor. Characteristic properties are used because the sample size and the shape of the substance does not matter. For example, 1 gram of lead is the same color as 100 tons of lead.
See also
Intensive and extensive properties
Document 4:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Gas, liquid, and solid describe what property of matter?
A. states
B. Chemical
C. Physical
D. Quantitative
Answer:
|
|
sciq-8509
|
multiple_choice
|
Earth is the only planet in the solar system that has what element, which is essential for human life, present in all three of its states?
|
[
"oxygen",
"water",
"helium",
"carbon"
] |
B
|
Relevant Documents:
Document 0:::
Carbon is a primary component of all known life on Earth, representing approximately 45–50% of all dry biomass. Carbon compounds occur naturally in great abundance on Earth. Complex biological molecules consist of carbon atoms bonded with other elements, especially oxygen and hydrogen and frequently also nitrogen, phosphorus, and sulfur (collectively known as CHNOPS).
Because it is lightweight and relatively small in size, carbon molecules are easy for enzymes to manipulate. It is frequently assumed in astrobiology that if life exists elsewhere in the Universe, it will also be carbon-based. Critics refer to this assumption as carbon chauvinism.
Characteristics
Carbon is capable of forming a vast number of compounds, more than any other element, with almost ten million compounds described to date, and yet that number is but a fraction of the number of theoretically possible compounds under standard conditions. The enormous diversity of carbon-containing compounds, known as organic compounds, has led to a distinction between them and compounds that do not contain carbon, known as inorganic compounds. The branch of chemistry that studies organic compounds is known as organic chemistry.
Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass, after hydrogen, helium, and oxygen. Carbon's widespread abundance, its ability to form stable bonds with numerous other elements, and its unusual ability to form polymers at the temperatures commonly encountered on Earth enables it to serve as a common element of all known living organisms. In a 2018 study, carbon was found to compose approximately 550 billion tons of all life on Earth. It is the second most abundant element in the human body by mass (about 18.5%) after oxygen.
The most important characteristics of carbon as a basis for the chemistry of life are that each carbon atom is capable of forming up to four valence bonds with other atoms simultaneously
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological and molecular tests. A set of 60 questions was taken by all test takers for Biology, and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average score for Molecular was 630, while the average for Ecological was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 3:::
Planetary oceanography, also called astro-oceanography or exo-oceanography, is the study of oceans on planets and moons other than Earth. Unlike other planetary sciences like astrobiology, astrochemistry and planetary geology, it only began after the discovery of underground oceans in Saturn's moon Titan and Jupiter's moon Europa. This field remains speculative until further missions reach the oceans beneath the rock or ice layer of the moons. There are many theories about oceans or even ocean worlds of celestial bodies in the Solar System, from oceans made of diamond in Neptune to a gigantic ocean of liquid hydrogen that may exist underneath Jupiter's surface.
Early in their geologic histories, Mars and Venus are theorized to have had large water oceans. The Mars ocean hypothesis suggests that nearly a third of the surface of Mars was once covered by water, and a runaway greenhouse effect may have boiled away the global ocean of Venus. Compounds such as salts and ammonia dissolved in water lower its freezing point so that water might exist in large quantities in extraterrestrial environments as brine or convecting ice. Unconfirmed oceans are speculated beneath the surface of many dwarf planets and natural satellites; notably, the ocean of the moon Europa is estimated to have over twice the water volume of Earth's. The Solar System's giant planets are also thought to have liquid atmospheric layers of yet to be confirmed compositions. Oceans may also exist on exoplanets and exomoons, including surface oceans of liquid water within a circumstellar habitable zone. Ocean planets are a hypothetical type of planet with a surface completely covered with liquid.
Extraterrestrial oceans may be composed of water or other elements and compounds. The only confirmed large stable bodies of extraterrestrial surface liquids are the lakes of Titan, which are made of hydrocarbons instead of water. However, there is strong evidence for subsurface water oceans' existence elsewhere in t
Document 4:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Earth is the only planet in the solar system that has what element, which is essential for human life, present in all three of its states?
A. oxygen
B. water
C. helium
D. carbon
Answer:
|
|
sciq-10712
|
multiple_choice
|
A catalyst decreases the amount of what resource that is required in order to begin a chemical reaction?
|
[
"positive energy",
"motion energy",
"activation energy",
"kinetic energy"
] |
C
|
Relevant Documents:
Document 0:::
Activation, in chemistry and biology, is the process whereby something is prepared or excited for a subsequent reaction.
Chemistry
In chemistry, "activation" refers to the reversible transition of a molecule into a nearly identical chemical or physical state, with the defining characteristic being that this resultant state exhibits an increased propensity to undergo a specified chemical reaction. Thus, activation is conceptually the opposite of protection, in which the resulting state exhibits a decreased propensity to undergo a certain reaction.
The energy of activation specifies the amount of free energy the reactants must possess (in addition to their rest energy) in order to initiate their conversion into corresponding products—that is, in order to reach the transition state for the reaction. The energy needed for activation can be quite small, and often it is provided by the natural random thermal fluctuations of the molecules themselves (i.e. without any external sources of energy).
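The role of activation energy in rate laws is often summarized by the Arrhenius equation, k = A·exp(-Ea/(R·T)). The sketch below illustrates its temperature sensitivity; the pre-exponential factor and activation energy are illustrative assumptions, not measured values for any particular reaction.

```python
import math

R = 8.314  # molar gas constant, J/(mol*K)

def arrhenius_rate(A: float, Ea: float, T: float) -> float:
    """Arrhenius rate constant k = A * exp(-Ea / (R * T)).
    A (1/s) and Ea (J/mol) are illustrative inputs."""
    return A * math.exp(-Ea / (R * T))

# Assumed Ea = 50 kJ/mol; compare a 10 K rise from room temperature
k_300 = arrhenius_rate(1e13, 50_000.0, 300.0)
k_310 = arrhenius_rate(1e13, 50_000.0, 310.0)
print(k_310 / k_300)  # roughly a doubling of the rate for a 10 K rise
```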
The branch of chemistry that deals with this topic is called chemical kinetics.
Biology
Biochemistry
In biochemistry, activation, specifically called bioactivation, is where enzymes or other biologically active molecules acquire the ability to perform their biological function, such as inactive proenzymes being converted into active enzymes that are able to catalyze their substrates' reactions into products. Bioactivation may also refer to the process where inactive prodrugs are converted into their active metabolites, or the toxication of protoxins into actual toxins.
An enzyme may be reversibly or irreversibly bioactivated. A major mechanism of irreversible bioactivation is where a piece of a protein is cut off by cleavage, producing an enzyme that will then stay active. A major mechanism of reversible bioactivation is substrate presentation where an enzyme translocates near its substrate. Another reversible reaction is where a cofactor binds to an enzyme, which then rem
Document 1:::
In molecular biology, biosynthesis is a multi-step, enzyme-catalyzed process where substrates are converted into more complex products in living organisms. In biosynthesis, simple compounds are modified, converted into other compounds, or joined to form macromolecules. This process often consists of metabolic pathways. Some of these biosynthetic pathways are located within a single cellular organelle, while others involve enzymes that are located within multiple cellular organelles. Examples of these biosynthetic pathways include the production of lipid membrane components and nucleotides. Biosynthesis is usually synonymous with anabolism.
The prerequisite elements for biosynthesis include: precursor compounds, chemical energy (e.g. ATP), and catalytic enzymes which may need coenzymes (e.g. NADH, NADPH). These elements create monomers, the building blocks for macromolecules. Some important biological macromolecules include: proteins, which are composed of amino acid monomers joined via peptide bonds, and DNA molecules, which are composed of nucleotides joined via phosphodiester bonds.
Properties of chemical reactions
Biosynthesis occurs due to a series of chemical reactions. For these reactions to take place, the following elements are necessary:
Precursor compounds: these compounds are the starting molecules or substrates in a reaction. These may also be viewed as the reactants in a given chemical process.
Chemical energy: chemical energy can be found in the form of high energy molecules. These molecules are required for energetically unfavorable reactions. Furthermore, the hydrolysis of these compounds drives a reaction forward. High energy molecules, such as ATP, have three phosphates. Often, the terminal phosphate is split off during hydrolysis and transferred to another molecule.
Catalysts: these may be for example metal ions or coenzymes and they catalyze a reaction by increasing the rate of the reaction and lowering the activation energy.
In the sim
Document 2:::
Activation energy asymptotics (AEA), also known as large activation energy asymptotics, is an asymptotic analysis used in the combustion field utilizing the fact that the reaction rate is extremely sensitive to temperature changes due to the large activation energy of the chemical reaction.
History
The techniques were pioneered by the Russian scientists Yakov Borisovich Zel'dovich, David A. Frank-Kamenetskii and co-workers in the 1930s, in their studies of premixed flames and thermal explosions (Frank-Kamenetskii theory), but did not become popular among Western scientists until the 1970s. In the early 1970s, owing to the pioneering work of Williams B. Bush, Francis E. Fendell, Forman A. Williams, Amable Liñán and John F. Clarke, it gained popularity in the Western community and has since been widely used to explain more complicated problems in combustion.
Method overview
In combustion processes, the reaction rate is dependent on temperature in the following form (Arrhenius law),

    ω ∝ exp(−E_a / (R T)),

where E_a is the activation energy, and R is the universal gas constant. In general, the condition E_a / (R T_b) ≫ 1 is satisfied, where T_b is the burnt gas temperature. This condition forms the basis for activation energy asymptotics. Denoting T_u for the unburnt gas temperature, one can define the Zel'dovich number and heat release parameter as follows

    β = E_a (T_b − T_u) / (R T_b²),   α = (T_b − T_u) / T_b.

In addition, if we define a non-dimensional temperature

    θ = (T − T_u) / (T_b − T_u),

such that θ approaches zero in the unburnt region and approaches unity in the burnt gas region (in other words, 0 ≤ θ ≤ 1), then the ratio of the reaction rate at any temperature to the reaction rate at the burnt gas temperature is given by

    ω / ω_b = exp(−β (1 − θ) / (1 − α (1 − θ))).

Now in the limit of β → ∞ (large activation energy) with α = O(1), the reaction rate is exponentially small, i.e., ω / ω_b ∼ e^(−β), and negligible everywhere, but non-negligible when 1 − θ = O(1/β). In other words, the reaction rate is negligible everywhere, except in a small region very close to the burnt gas temperature, where 1 − θ ∼ 1/β. Thus, in solving the conservation equations, one identifies two different regimes, at leading order,
Outer convective-diffusive zone
I
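The exponential temperature sensitivity that AEA exploits can be illustrated numerically. The sketch below assumes the standard AEA rate-ratio form w/w_b = exp(-beta*(1-theta)/(1-alpha*(1-theta))); the values of beta and alpha are illustrative magnitudes, not data for a specific flame.

```python
import math

def rate_ratio(theta: float, beta: float, alpha: float) -> float:
    """Ratio of the reaction rate at non-dimensional temperature theta
    to the rate at the burnt-gas temperature (theta = 1), in the
    standard activation-energy-asymptotics form."""
    return math.exp(-beta * (1 - theta) / (1 - alpha * (1 - theta)))

beta, alpha = 10.0, 0.85  # illustrative magnitudes
for theta in (0.0, 0.5, 0.9, 1.0):
    print(theta, rate_ratio(theta, beta, alpha))
# The rate is exponentially small except in a thin zone near theta = 1,
# which is the basis for the inner/outer zone decomposition above.
```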
Document 3:::
Conversion and its related terms yield and selectivity are important terms in chemical reaction engineering. They are described as ratios of how much of a reactant has reacted (X — conversion, normally between zero and one), how much of a desired product was formed (Y — yield, normally also between zero and one) and how much desired product was formed in ratio to the undesired product(s) (S — selectivity).
There are conflicting definitions in the literature for selectivity and yield, so each author's intended definition should be verified.
Conversion can be defined for (semi-)batch and continuous reactors and as instantaneous and overall conversion.
Assumptions
The following assumptions are made:
The following chemical reaction takes place:
$$\nu_A\,A \longrightarrow \nu_B\,B,$$
where $\nu_A$ and $\nu_B$ are the stoichiometric coefficients. For multiple parallel reactions, the definitions can also be applied, either per reaction or using the limiting reaction.
Batch reaction assumes all reactants are added at the beginning.
Semi-Batch reaction assumes some reactants are added at the beginning and the rest fed during the batch.
Continuous reaction assumes reactants are fed and products leave the reactor continuously and in steady state.
Conversion
Conversion can be separated into instantaneous conversion and overall conversion. For continuous processes the two are the same, for batch and semi-batch there are important differences. Furthermore, for multiple reactants, conversion can be defined overall or per reactant.
Instantaneous conversion
Semi-batch
In this setting there are different definitions. One definition regards the instantaneous conversion as the ratio of the instantaneously converted amount to the amount fed at any point in time:
$$X_{\text{inst}}(t) = \frac{-\dot{n}_{i,\text{react}}(t)}{\dot{n}_{i,\text{feed}}(t)},$$
with $\dot{n}_i$ as the change of moles with time of species $i$. This ratio can become larger than 1. It can be used to indicate whether reservoirs are built up, and it is ideally close to 1. When the feed stops, its value is not defined.
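The semi-batch definition above can be sketched in a few lines (a hedged illustration; the function name and the mol/s example values are invented for this sketch):

```python
def instantaneous_conversion(n_dot_converted, n_dot_fed):
    """Instantaneous conversion in a semi-batch reactor: the ratio of the
    molar rate at which a reactant is consumed to the molar rate at which
    it is fed, at the same instant. Can exceed 1 when a previously
    accumulated reservoir of reactant is being consumed; undefined once
    the feed has stopped."""
    if n_dot_fed == 0.0:
        raise ValueError("instantaneous conversion is undefined when the feed rate is zero")
    return n_dot_converted / n_dot_fed

# Feeding 2.0 mol/s while 2.5 mol/s react away (consuming a built-up reservoir):
print(instantaneous_conversion(2.5, 2.0))  # 1.25, i.e. larger than 1
```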
In semi-batch polymerisation,
Document 4:::
An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may be in fact a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes.
In a unimolecular elementary reaction, a molecule $A$ dissociates or isomerises to form the product(s):
$$A \rightarrow \text{products}.$$
At constant temperature, the rate of such a reaction is proportional to the concentration of the species $A$:
$$\frac{d[A]}{dt} = -k[A].$$
In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, $A$ and $B$, react together to form the product(s):
$$A + B \rightarrow \text{products}.$$
The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species $A$ and $B$:
$$\frac{d[A]}{dt} = -k[A][B].$$
The rate expression for an elementary bimolecular reaction is sometimes referred to as the Law of Mass Action as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction.
This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids equivalent results have been obtained from simple probabilistic arguments.
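The mass-action rate expression for the bimolecular case can be written out directly (a minimal sketch; the rate constant and concentration values are arbitrary):

```python
def bimolecular_rate(k, conc_a, conc_b):
    """Rate of an elementary bimolecular reaction A + B -> products:
    proportional to the product of the two concentrations (law of
    mass action), with proportionality constant k."""
    return k * conc_a * conc_b

# Doubling either concentration doubles the rate:
r1 = bimolecular_rate(0.5, 1.0, 2.0)
r2 = bimolecular_rate(0.5, 2.0, 2.0)
print(r1, r2)  # 1.0 2.0
```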
According to collision theory, the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred to as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis–Menten approximations.
Notes
Chemical kinetics
Phy
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A catalyst decreases the amount of what resource that is required in order to begin a chemical reaction?
A. positive energy
B. motion energy
C. activation energy
D. kinetic energy
Answer:
|
|
sciq-2811
|
multiple_choice
|
What compounds, which serve as fuels and are used in manufacturing, are called the driving force of western civilization?
|
[
"gas",
"hydrocarbons",
"forests",
"fossils"
] |
B
|
Relevant Documents:
Document 0:::
Phyllocladane is a tricyclic diterpane which is generally found in gymnosperm resins. It has a formula of C20H34 and a molecular weight of 274.4840. As a biomarker, it can be used to learn about the gymnosperm input into a hydrocarbon deposit, and about the age of the deposit in general. It indicates a terrogenous origin of the source rock. Diterpanes, such as Phyllocladane are found in source rocks as early as the middle and late Devonian periods, which indicates any rock containing them must be no more than approximately 360 Ma. Phyllocladane is commonly found in lignite, and like other resinites derived from gymnosperms, is naturally enriched in 13C. This enrichment is a result of the enzymatic pathways used to synthesize the compound.
The compound can be identified by GC-MS. A peak of m/z 123 is indicative of tricyclic diterpenoids in general, and phyllocladane in particular is further characterized by strong peaks at m/z 231 and m/z 189. Presence of phyllocladane and its relative abundance to other tricyclic diterpanes can be used to differentiate between various oil fields.
Document 1:::
Bisnorhopanes (BNH) are a group of demethylated hopanes found in oil shales across the globe and can be used for understanding depositional conditions of the source rock. The most common member, 28,30-bisnorhopane, can be found in high concentrations in petroleum source rocks, most notably the Monterey Shale, as well as in oil and tar samples. 28,30-Bisnorhopane was first identified in samples from the Monterey Shale Formation in 1985. It occurs in abundance throughout the formation and appears in stratigraphically analogous locations along the California coast. Since its identification and analysis, 28,30-bisnorhopane has been discovered in oil shales around the globe, including lacustrine and offshore deposits of Brazil, silicified shales of the Eocene in Gabon, the Kimmeridge Clay Formation in the North Sea, and in Western Australian oil shales.
Chemistry
28,30-bisnorhopane exists in three epimers: 17α,18α,21β(H), 17β,18α,21α(H), and 17β,18α,21β(H). During GC-MS, the three epimers coelute and are nearly indistinguishable. However, mass spectral fragmentation of 28,30-bisnorhopane is predominantly characterized by m/z 191, 177, and 163. The ratios of 163/191 fragments can be used to distinguish the epimers, where the βαβ orientation has the highest m/z 163/191 ratio. Further, the D/E ring ratios can be used to create a hierarchy of epimer maturity. From this, it is believed that the ααβ epimer is the first-formed, diagenetically, supported also by its percent dominance in younger shales. 28,30-bisnorhopane is created independently from kerogen, instead derived from bitumen, unbound as free oil-hydrocarbons. As such, as oil generation increases with source maturation, the concentration of 28,30-bisnorhopane decreases. Bisnorhopane may not be a reliable diagnostic for oil maturity due to microbial biodegradation.
Nomenclature
Norhopanes are a family of demethylated hopanes, identical to the methylated hopane structure, minus indicated desmet
Document 2:::
Biodesulfurization is the process of removing sulfur from crude oil through the use of microorganisms or their enzymes.
Background
Crude oil contains sulfur in its composition, with the latter being the most abundant element after carbon and hydrogen. Depending on its source, the amount of sulfur present in crude oil can range from 0.05 to 10%. Accordingly, the oil can be classified as sweet or sour if the sulfur concentration is below or above 0.5%, respectively.
The combustion of crude oil releases sulfur oxides (SOx) to the atmosphere, which are harmful to public health and contribute to serious environmental effects such as air pollution and acid rain. In addition, the sulfur content in crude oil is a major problem for refineries, as it promotes the corrosion of the equipment and the poisoning of the noble metal catalysts. The levels of sulfur in any oil field are too high for the fossil fuels derived from it (such as gasoline, diesel, or jet fuel) to be used in combustion engines without pre-treatment to remove organosulfur compounds.
The reduction of the concentration of sulfur in crude oil becomes necessary to mitigate one of the leading sources of the harmful health and environmental effects caused by its combustion. In this sense, the European union has taken steps to decrease the sulfur content in diesel below 10 ppm, while the US has made efforts to restrict the sulfur content in diesel and gasoline to a maximum of 15 ppm. The reduction of sulfur compounds in oil fuels can be achieved by a process named desulfurization. Methods used for desulfurization include, among others, hydrodesulfurization, oxidative desulfurization, extractive desulfurization, and extraction by ionic liquids.
Despite their efficiency at reducing sulfur content, the conventional desulfurization methods are still accountable for a significant amount of the CO2 emissions associated with the crude oil refining process, releasing up to 9000 metric tons per year. Furthermore, the
Document 3:::
Roland Geyer is professor of industrial ecology at the Bren School of Environmental Science and Management, University of California at Santa Barbara. He is a specialist in the ecological impact of plastics.
In March 2021, Geyer wrote in The Guardian that humanity should ban fossil fuels, just at it had earlier banned tetraethyllead (TEL) and chlorofluorocarbons (CFC).
Document 4:::
Energy quality is a measure of the ease with which a form of energy can be converted to useful work or to another form of energy: i.e. its content of thermodynamic free energy. A high quality form of energy has a high content of thermodynamic free energy, and therefore a high proportion of it can be converted to work; whereas with low quality forms of energy, only a small proportion can be converted to work, and the remainder is dissipated as heat. The concept of energy quality is also used in ecology, where it is used to track the flow of energy between different trophic levels in a food chain and in thermoeconomics, where it is used as a measure of economic output per unit of energy. Methods of evaluating energy quality often involve developing a ranking of energy qualities in hierarchical order.
Examples: Industrialization, Biology
The consideration of energy quality was a fundamental driver of industrialization from the 18th through 20th centuries. Consider for example the industrialization of New England in the 18th century. This refers to the construction of textile mills containing power looms for weaving cloth. The simplest, most economical and straightforward source of energy was provided by water wheels, extracting energy from a millpond behind a dam on a local creek. If another nearby landowner also decided to build a mill on the same creek, the construction of their dam would lower the overall hydraulic head to power the existing waterwheel, thus hurting power generation and efficiency. This eventually became an issue endemic to the entire region, reducing the overall profitability of older mills as newer ones were built. The search for higher quality energy was a major impetus throughout the 19th and 20th centuries. For example, burning coal to make steam to generate mechanical energy would not have been imaginable in the 18th century; by the end of the 19th century, the use of water wheels was long outmoded. Similarly, the quality of energy from elec
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What compounds, which serve as fuels and are used in manufacturing, are called the driving force of western civilization?
A. gas
B. hydrocarbons
C. forests
D. fossils
Answer:
|
|
sciq-8611
|
multiple_choice
|
What happens when an increase in temperature of a gas in a rigid container happens?
|
[
"container shrinks",
"gas explodes",
"pressure increases",
"pressure decreases"
] |
C
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
1. increases
2. decreases
3. stays the same
4. Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
This is a list of gases at standard conditions, which means substances that boil or sublime at or below and 1 atm pressure and are reasonably stable.
List
This list is sorted by boiling point of gases in ascending order, but can be sorted on different values. "sub" and "triple" refer to the sublimation point and the triple point, which are given in the case of a substance that sublimes at 1 atm; "dec" refers to decomposition. "~" means approximately.
Known as gas
The following list has substances known to be gases, but with an unknown boiling point.
Fluoroamine
Trifluoromethyl trifluoroethyl trioxide CF3OOOCF2CF3 boils between 10 and 20°
Bis-trifluoromethyl carbonate boils between −10 and +10° possibly +12, freezing −60°
Difluorodioxirane boils between −80 and −90°.
Difluoroaminosulfinyl fluoride F2NS(O)F is a gas but decomposes over several hours
Trifluoromethylsulfinyl chloride CF3S(O)Cl
Nitrosyl cyanide ?−20° blue-green gas 4343-68-4
Thiazyl chloride NSCl greenish yellow gas; trimerises.
Document 2:::
Boiling is the rapid phase transition from liquid to gas or vapor; the reverse of boiling is condensation. Boiling occurs when a liquid is heated to its boiling point, so that the vapour pressure of the liquid is equal to the pressure exerted on the liquid by the surrounding atmosphere. Boiling and evaporation are the two main forms of liquid vapourization.
There are two main types of boiling: nucleate boiling where small bubbles of vapour form at discrete points, and critical heat flux boiling where the boiling surface is heated above a certain critical temperature and a film of vapour forms on the surface. Transition boiling is an intermediate, unstable form of boiling with elements of both types. The boiling point of water is 100 °C or 212 °F but is lower with the decreased atmospheric pressure found at higher altitudes.
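The pressure dependence of the boiling point mentioned above can be estimated with the integrated Clausius-Clapeyron relation (a sketch under the assumption of a constant molar enthalpy of vaporization of about 40.7 kJ/mol for water; real steam tables should be used for accurate work):

```python
import math

def boiling_point_K(p_atm, t_ref=373.15, p_ref=1.0, dh_vap=40700.0):
    """Estimate water's boiling point in kelvin at pressure p_atm using
    the integrated Clausius-Clapeyron equation:
    1/T = 1/T_ref - (R / dh_vap) * ln(p / p_ref)."""
    R = 8.314  # J/(mol K)
    inv_t = 1.0 / t_ref - (R / dh_vap) * math.log(p_atm / p_ref)
    return 1.0 / inv_t

# At roughly 0.7 atm (about 3000 m altitude) water boils near 90 C:
print(boiling_point_K(0.7) - 273.15)
```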
Boiling water is used as a method of making it potable by killing microbes and viruses that may be present. The sensitivity of different micro-organisms to heat varies, but if water is held at 100 °C (212 °F) for one minute, most micro-organisms and viruses are inactivated. Ten minutes at a temperature of 70 °C (158 °F) is also sufficient to inactivate most bacteria.
Boiling water is also used in several cooking methods including boiling, steaming, and poaching.
Types
Free convection
The lowest heat flux seen in boiling is only sufficient to cause natural convection, where the warmer fluid rises due to its slightly lower density. This condition occurs only when the superheat is very low, meaning that the hot surface near the fluid is nearly the same temperature as the boiling point.
Nucleate
Nucleate boiling is characterised by the growth of bubbles or pops on a heated surface (heterogeneous nucleation), which rises from discrete points on a surface, whose temperature is only slightly above the temperature of the liquid. In general, the number of nucleation sites is increased by an increasing surface temperature.
An irregular surface of the boiling
Document 3:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include :
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering : Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
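For a concrete example of how these macroscopic variables interrelate: for a fixed amount of ideal gas held at constant volume, pressure scales linearly with absolute temperature (Gay-Lussac's law). A minimal sketch (the numeric values are illustrative):

```python
def pressure_after_heating(p1, t1, t2):
    """For an ideal gas in a rigid (constant-volume) container,
    P/T is constant, so P2 = P1 * T2 / T1 (temperatures in kelvin)."""
    return p1 * t2 / t1

# Heating sealed air from 300 K to 360 K raises the pressure by 20%:
print(pressure_after_heating(101325.0, 300.0, 360.0))  # 121590.0 Pa
```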
Fluid mechanics
Fluid Mechanics the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 4:::
In statistical mechanics and condensed matter physics, the Kovacs effect is a kind of memory effect in glassy systems below the glass-transition temperature. A.J. Kovacs observed that a system’s state out of equilibrium is defined not only by its macro thermodynamical variables, but also by the inner parameters of the system. In the original effect, in response to a temperature change, under constant pressure, the isobaric volume and free energy of the system experienced a recovery characterized by non-monotonic departure from equilibrium, whereas all other thermodynamical variables were in their equilibrium values. It is considered a memory effect since the relaxation dynamics of the system depend on its thermal and mechanical history.
The effect was discovered by Kovacs in the 1960s in polyvinyl acetate. Since then, the Kovacs effect has been established as a very general phenomenon that comes about in a large variety of systems, model glasses,
tapped dense granular matter, spin-glasses, molecular liquids, granular gases, active matter, disordered mechanical systems, protein molecules, and more.
The effect in Kovacs’ experiments
Kovacs' experimental procedure on polyvinyl acetate consisted of two main stages. In the first step, the sample is instantaneously quenched from a high initial temperature $T_i$ to a low reference temperature $T_0$, under constant pressure. The time-dependent volume of the system at $T_0$, $v(t, T_0)$, is recorded until the time $t_{\mathrm{eq}}$ when the system is considered to be at equilibrium. The volume at $t_{\mathrm{eq}}$ is defined as the equilibrium volume of the system at temperature $T_0$:
$$v_{\infty}(T_0) \equiv v(t_{\mathrm{eq}}, T_0).$$
In the second step, the sample is quenched again from $T_i$ to a temperature $T_1$ that is lower than $T_0$, so that $T_1 < T_0$. But now, the system is held at temperature $T_1$ only until the time when its volume reaches the equilibrium value associated with $T_0$, meaning $v(t, T_1) = v_{\infty}(T_0)$.
Then, the temperature is raised instantaneously to $T_0$, so both the temperature and the volume agree with the same equilibrium state. Naively, one expects that nothing
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What happens when an increase in temperature of a gas in a rigid container happens?
A. container shrinks
B. gas explodes
C. pressure increases
D. pressure decreases
Answer:
|
|
sciq-10637
|
multiple_choice
|
How can bacterial STIs usually be cured?
|
[
"antiinflammatories",
"antivirals",
"with antibiotics",
"tylenol"
] |
C
|
Relevant Documents:
Document 0:::
Sexually transmitted infections (STIs), also referred to as sexually transmitted diseases (STDs), are infections that are commonly spread by sexual activity, especially vaginal intercourse, anal sex and oral sex. The most prevalent STIs may be carried by a significant fraction of the human population.
Document 1:::
Staphylococcus aureus is a Gram-positive spherically shaped bacterium, a member of the Bacillota, and is a usual member of the microbiota of the body, frequently found in the upper respiratory tract and on the skin. It is often positive for catalase and nitrate reduction and is a facultative anaerobe that can grow without the need for oxygen. Although S. aureus usually acts as a commensal of the human microbiota, it can also become an opportunistic pathogen, being a common cause of skin infections including abscesses, respiratory infections such as sinusitis, and food poisoning. Pathogenic strains often promote infections by producing virulence factors such as potent protein toxins, and the expression of a cell-surface protein that binds and inactivates antibodies. S. aureus is one of the leading pathogens for deaths associated with antimicrobial resistance and the emergence of antibiotic-resistant strains, such as methicillin-resistant S. aureus (MRSA), is a worldwide problem in clinical medicine. Despite much research and development, no vaccine for S. aureus has been approved.
An estimated 21% to 30% of the human population are long-term carriers of S. aureus, which can be found as part of the normal skin microbiota, in the nostrils, and as a normal inhabitant of the lower reproductive tract of females. S. aureus can cause a range of illnesses, from minor skin infections, such as pimples, impetigo, boils, cellulitis, folliculitis, carbuncles, scalded skin syndrome, and abscesses, to life-threatening diseases such as pneumonia, meningitis, osteomyelitis, endocarditis, toxic shock syndrome, bacteremia, and sepsis. It is still one of the five most common causes of hospital-acquired infections and is often the cause of wound infections following surgery. Each year, around 500,000 hospital patients in the United States contract a staphylococcal infection, chiefly by S. aureus. Up to 50,000 deaths each year in the U.S. are linked to staphylococcal infection.
History
Document 2:::
Clostridioides difficile infection
(CDI or C-diff), also known as Clostridium difficile infection, is a symptomatic infection due to the spore-forming bacterium Clostridioides difficile. Symptoms include watery diarrhea, fever, nausea, and abdominal pain. It makes up about 20% of cases of antibiotic-associated diarrhea. Antibiotics can contribute to detrimental changes in gut microbiota; specifically, they decrease short-chain fatty acid absorption which results in osmotic, or watery, diarrhea. Complications may include pseudomembranous colitis, toxic megacolon, perforation of the colon, and sepsis.
Clostridioides difficile infection is spread by bacterial spores found within feces. Surfaces may become contaminated with the spores with further spread occurring via the hands of healthcare workers. Risk factors for infection include antibiotic or proton pump inhibitor use, hospitalization, hypoalbuminemia, other health problems, and older age. Diagnosis is by stool culture or testing for the bacteria's DNA or toxins. If a person tests positive but has no symptoms, the condition is known as C. difficile colonization rather than an infection.
Prevention efforts include terminal room cleaning in hospitals, limiting antibiotic use, and handwashing campaigns in hospitals. Alcohol based hand sanitizer does not appear effective. Discontinuation of antibiotics may result in resolution of symptoms within three days in about 20% of those infected.
The antibiotics metronidazole, vancomycin, or fidaxomicin, will cure the infection. Retesting after treatment, as long as the symptoms have resolved, is not recommended, as a person may often remain colonized. Recurrences have been reported in up to 25% of people. Some tentative evidence indicates fecal microbiota transplantation and probiotics may decrease the risk of recurrence.
C. difficile infections occur in all areas of the world. About 453,000 cases occurred in the United States in 2011, resulting in 29,000 deaths. Glob
Document 3:::
There are many circumstances during dental treatment where antibiotics are prescribed by dentists to prevent further infection (e.g. post-operative infection). The most common antibiotic prescribed by dental practitioners is penicillin in the form of amoxicillin, however many patients are hypersensitive to this particular antibiotic. Therefore, in the cases of allergies, erythromycin is used instead.
Indications for antibiotic use
Antibiotics should only be used for oral infections where there is evidence of spreading infection (cellulitis, lymph node involvement, swelling) or systemic involvement (fever, malaise), and where drainage or debridement is impossible. There are a limited number of localized oral lesions for which antibiotic use is indicated, including periodontal abscess, acute necrotizing ulcerative gingivitis, and pericoronitis. A periapical abscess, by contrast, is managed primarily by drainage (root canal treatment or extraction) rather than by antibiotics alone.
Another condition in which antibiotics are indicated is staphylococcal mucositis and it is mostly found in immunocompromised patients and the elderly. Patients will experience oral discomfort, mucosal inflammation and mucosal bleeding. The common treatment for this type of infection is oral lavages and flucloxacillin.
Post-operative Infections
Bacteraemia
Bacteraemia is a condition in which bacteria are present in the blood and may cause disease, including systemic disease such as infective endocarditis. Some dental treatments may cause bacteraemia, such as tooth extractions, subgingival scaling or even simple aggressive tooth brushing by patients.
Infective Endocarditis
If the bacteria involved in the bacteraemia reach the cardiac tissue, infective (or bacterial) endocarditis can develop, with fatal outcomes. Infective endocarditis is an infection of the endothelium lining of the heart. Infective endocarditis is known to dentists as
Document 4:::
Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education.
Structure
A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior.
Prior to the widespread introduction of SBAs into medical education, the typical form of examination was true-false multiple choice questions. During the 2000s, however, educators came to regard SBAs as superior.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How can bacterial STIs usually be cured?
A. antiinflammatories
B. antivirals
C. with antibiotics
D. tylenol
Answer:
|
|
sciq-2300
|
multiple_choice
|
Which effect causes winds to strike the polar front at an angle?
|
[
"centrifugal effect",
"coriolis effect",
"axial tilt",
"Lake Effect"
] |
B
|
Relevant Documents:
Document 0:::
In fluid dynamics, a secondary circulation or secondary flow is a weak circulation that plays a key maintenance role in sustaining a stronger primary circulation that contains most of the kinetic energy and momentum of a flow. For example, a tropical cyclone's primary winds are tangential (horizontally swirling), but its evolution and maintenance against friction involves an in-up-out secondary circulation flow that is also important to its clouds and rain. On a planetary scale, Earth's winds are mostly east–west or zonal, but that flow is maintained against friction by the Coriolis force acting on a small north–south or meridional secondary circulation.
See also
Hough function
Primitive equations
Secondary flow
Document 1:::
Atmospheric circulation of a planet is largely specific to the planet in question and the study of atmospheric circulation of exoplanets is a nascent field as direct observations of exoplanet atmospheres are still quite sparse. However, by considering the fundamental principles of fluid dynamics and imposing various limiting assumptions, a theoretical understanding of atmospheric motions can be developed. This theoretical framework can also be applied to planets within the Solar System and compared against direct observations of these planets, which have been studied more extensively than exoplanets, to validate the theory and understand its limitations as well.
The theoretical framework first considers the Navier–Stokes equations, the governing equations of fluid motion. Then, limiting assumptions are imposed to produce simplified models of fluid motion specific to large scale motion atmospheric dynamics. These equations can then be studied for various conditions (i.e. fast vs. slow planetary rotation rate, stably stratified vs. unstably stratified atmosphere) to see how a planet's characteristics would impact its atmospheric circulation. For example, a planet may fall into one of two regimes based on its rotation rate: geostrophic balance or cyclostrophic balance.
Atmospheric motions
Coriolis force
When considering atmospheric circulation we tend to take the planetary body as the frame of reference. In fact, this is a non-inertial frame of reference which has acceleration due to the planet's rotation about its axis. The Coriolis force is the force that acts on objects moving within the planetary frame of reference, as a result of the planet's rotation. Mathematically, the acceleration due to the Coriolis force can be written as:
$$\mathbf{a}_{\text{Coriolis}} = -2\,\boldsymbol{\Omega} \times \mathbf{v},$$
where
$\mathbf{v}$ is the flow velocity
$\boldsymbol{\Omega}$ is the planet's angular velocity vector
This force acts perpendicular to the flow velocity and to the planet's angular velocity vector, and comes into play when considering the atmospheric motion of a rotat
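The Coriolis acceleration a = -2 (Ω × v) described above can be computed directly from the cross product (a self-contained sketch; Earth's rotation rate of 7.292e-5 rad/s and the 10 m/s parcel speed are illustrative values):

```python
def coriolis_acceleration(omega, v):
    """Coriolis acceleration a = -2 * (omega x v) for a parcel moving
    with velocity v (m/s) in a frame rotating with angular velocity
    omega (rad/s); both arguments are 3-component tuples."""
    ox, oy, oz = omega
    vx, vy, vz = v
    cross = (oy * vz - oz * vy,
             oz * vx - ox * vz,
             ox * vy - oy * vx)
    return tuple(-2.0 * c for c in cross)

# At the North Pole (omega along z), a parcel moving at 10 m/s along y
# is deflected along +x:
a = coriolis_acceleration((0.0, 0.0, 7.292e-5), (0.0, 10.0, 0.0))
print(a)
```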
Document 2:::
Spindrift (more rarely spoondrift) is the spray blown from cresting waves during a gale. This spray, which "drifts" in the direction of the gale, is one of the characteristics of a wind speed of 8 Beaufort and higher at sea. In Greek and Roman mythology, Leucothea was the goddess of spindrift.
Terminology
Spindrift is derived from the Scots language, but its further etymology is uncertain. Although the Oxford English Dictionary suggests it is a variant of spoondrift based on the way that word was pronounced in southwest Scotland, from spoon or spoom ("to sail briskly with the wind astern, with or without sails hoisted") and drift ("a mass of matter driven or forced onward together in a body, etc., especially by wind or water"), this is doubted because spoondrift is attested later than spindrift and it seems unlikely that the Scots spelling would have superseded the English one, and because the early use of the word in the form spenedrift by James Melville (1556–1614) is unlikely to have derived from spoondrift. In any case, spindrift was popularized in England through its use in the novels of the Scottish-born author William Black (1841–1898).
In the 1940s U.S. Navy, spindrift and spoondrift appear to have been used for different phenomena, as in the following record by the captain of the : "Visibility – which had been fair on the surface after moonrise – was now exceedingly poor due to spoondrift. Would that it were only the windblown froth of spindrift rather than the wind-driven cloudburst of water lashing the periscope exit eyepiece."
Spindrift or spoondrift is also used to refer to fine sand or snow that is blown off the ground by the wind.
Document 3:::
In geography and seamanship, windward () and leeward () are directions relative to the wind. Windward is upwind from the point of reference, i.e., towards the direction from which the wind is coming; leeward is downwind from the point of reference, i.e., along the direction towards which the wind is going.
The side of a ship that is towards the leeward is its "lee side". If the vessel is heeling under the pressure of crosswind, the lee side will be the "lower side". During the Age of Sail, the term weather was used as a synonym for windward in some contexts, as in the weather gage.
Since it captures rainfall, the windward side of a mountain tends to be wetter than the leeward side it blocks. The drier leeward area is said to be in a rain shadow.
Origin
The term "lee" comes from the middle-low German word // meaning "where the sea is not exposed to the wind" or "mild". The terms Luv and Lee (engl. Windward and Leeward) have been in use since the 17th century.
Usage
Windward and leeward directions (and the points of sail they create) are important factors to consider in such wind-powered or wind-impacted activities as sailing, wind-surfing, gliding, hang-gliding, and parachuting. Other terms with broadly the same meaning are widely used, particularly upwind and downwind.
Nautical
Among sailing craft, the windward vessel is normally the more maneuverable. For this reason, rule 12 of the International Regulations for Preventing Collisions at Sea, applying to sailing vessels, stipulates that where two are sailing in similar directions in relation to the wind, the windward vessel gives way to the leeward vessel.
Naval warfare
In naval warfare during the Age of Sail, a vessel always sought to use the wind to its advantage, maneuvering if possible to attack from windward. This was particularly important for less maneuverable square-rigged warships, which had limited ability to sail upwind, and sought to "hold the weather gage" entering battle.
Document 4:::
A wind-generated current is a flow in a body of water that is generated by wind friction on its surface. Wind can generate surface currents on water bodies of any size. The depth and strength of the current depend on the wind strength and duration, and on friction and viscosity losses, but are limited to about 400 m depth by the mechanism, and to lesser depths where the water is shallower. The direction of flow is influenced by the Coriolis effect, and is offset to the right of the wind direction in the Northern Hemisphere, and to the left in the Southern Hemisphere. A wind current can induce secondary water flow in the form of upwelling and downwelling, geostrophic flow, and western boundary currents.
Mechanism
Friction between wind and the upper surface of a body of water will drag the water surface along with the wind. The surface layer will exert viscous drag on the water just below, which will transfer some of the momentum. This process continues downward, with a continuous reduction in speed of flow with increasing depth as the energy is dissipated. The inertial effect of planetary rotation causes an offset of flow direction with increasing depth to the right in the northern hemisphere and to the left in the southern hemisphere. The mechanism of deflection is called the Coriolis effect, and the variation of flow velocity with depth is called an Ekman spiral. The effect varies with latitude, being very weak at the equator and increasing in strength with latitude. The resultant flow of water caused by this mechanism is known as Ekman transport.
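The depth dependence of an Ekman spiral can be sketched with the classic idealized solution, in which speed decays as exp(−πz/D_E) while the direction rotates linearly with depth. This is a textbook idealization rather than something derived in this passage, and the Ekman depth of 365 m below is an assumed mid-range value.

```python
import math

def ekman_velocity(z, v0=1.0, d_e=365.0):
    """Wind-driven current at depth z (m): speed decays as exp(-pi*z/d_e) while
    the direction rotates; at z = d_e the flow opposes the surface flow."""
    speed = v0 * math.exp(-math.pi * z / d_e)
    direction = math.radians(45.0) - math.pi * z / d_e  # NH: 45 deg right of the wind at the surface
    return speed, direction

surface_speed, _ = ekman_velocity(0.0)
deep_speed, _ = ekman_velocity(365.0)
print(deep_speed / surface_speed)  # exp(-pi) ~ 0.043, about 4% of the surface speed
```

The residual speed of exp(−π) ≈ 4% at the reversal depth matches the "about 4% of surface flow speed" figure quoted in the text.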
A steady wind blowing across a long fetch in deep water for long enough to establish a steady state flow causes the surface water to move at 45° to the wind direction. The variation in flow direction with depth has the water moving perpendicular to wind direction by about 100 to 150 m depth, and flow speed drops to about 4% of surface flow speed by the depth of about 330 to 400 m where the flow direction is opposite to
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which effect causes winds to strike the polar front at an angle?
A. centrifugal effect
B. coriolis effect
C. axial tilt
D. Lake Effect
Answer:
|
|
sciq-495
|
multiple_choice
|
What is the term for the gas in smog that can damage plants?
|
[
"sulphur",
"carbon",
"dioxide",
"ozone"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student was planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological and molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average score for Molecular was 630, while the average for Ecological was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 3:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
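The defining closure property of such a family of feasible states can be checked directly: a knowledge space contains the empty set and the full domain and is closed under union. The skill names below are invented for illustration.

```python
def is_knowledge_space(domain, states):
    """True if `states` contains the empty set and the full domain and is
    closed under union -- the defining properties of a knowledge space."""
    family = {frozenset(s) for s in states}
    if frozenset() not in family or frozenset(domain) not in family:
        return False
    return all((a | b) in family for a in family for b in family)

domain = {"counting", "addition", "multiplication"}
states = [set(), {"counting"}, {"counting", "addition"}, domain]

print(is_knowledge_space(domain, states))  # True: a simple prerequisite chain
```

A family containing both {"counting"} and {"addition"} but not their union would fail the check, since the union of two feasible states must itself be feasible.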
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
Document 4:::
Activated carbon, also called activated charcoal, is a form of carbon commonly used to filter contaminants from water and air, among many other uses. It is processed (activated) to have small, low-volume pores that increase the surface area available for adsorption (which is not the same as absorption) or chemical reactions. Activation is analogous to making popcorn from dried corn kernels: popcorn is light, fluffy, and its kernels have a high surface-area-to-volume ratio. Activated is sometimes replaced by active.
Due to its high degree of microporosity, one gram of activated carbon has a surface area in excess of as determined by gas adsorption. Charcoal, before activation, has a specific surface area in the range of . An activation level sufficient for useful application may be obtained solely from high surface area. Further chemical treatment often enhances adsorption properties.
Activated carbon is usually derived from waste products such as coconut husks; waste from paper mills has been studied as a source. These bulk sources are converted into charcoal before being 'activated'. When derived from coal it is referred to as activated coal. Activated coke is derived from coke.
Uses
Activated carbon is used in methane and hydrogen storage, air purification, capacitive deionization, supercapacitive swing adsorption, solvent recovery, decaffeination, gold purification, metal extraction, water purification, medicine, sewage treatment, air filters in respirators, filters in compressed air, teeth whitening, production of hydrogen chloride, edible electronics, and many other applications.
Industrial
One major industrial application involves use of activated carbon in metal finishing for purification of electroplating solutions. For example, it is the main purification technique for removing organic impurities from bright nickel plating solutions. A variety of organic chemicals are added to plating solutions for improving their deposit qualities and for enhancing
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for the gas in smog that can damage plants?
A. sulphur
B. carbon
C. dioxide
D. ozone
Answer:
|
|
sciq-10342
|
multiple_choice
|
Most people can survive only a few days without what essential substance?
|
[
"carbon",
"water",
"food",
"air"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS administered this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 2:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student was planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological and molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average score for Molecular was 630, while the average for Ecological was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 3:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover )
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, )
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson )
See also
Glossary of biology
A.P Bio (TV Show)
Document 4:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Most people can survive only a few days without what essential substance?
A. carbon
B. water
C. food
D. air
Answer:
|
|
sciq-4228
|
multiple_choice
|
Difference in electric potential energy are measured in what basic unit?
|
[
"knots",
"volts",
"moles",
"watts"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 2:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 3:::
There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework.
AP Physics 1 and 2
AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge.
AP Physics 1
AP Physics 1 covers Newtonian mechanics, including:
Unit 1: Kinematics
Unit 2: Dynamics
Unit 3: Circular Motion and Gravitation
Unit 4: Energy
Unit 5: Momentum
Unit 6: Simple Harmonic Motion
Unit 7: Torque and Rotational Motion
Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2.
AP Physics 2
AP Physics 2 covers the following topics:
Unit 1: Fluids
Unit 2: Thermodynamics
Unit 3: Electric Force, Field, and Potential
Unit 4: Electric Circuits
Unit 5: Magnetism and Electromagnetic Induction
Unit 6: Geometric and Physical Optics
Unit 7: Quantum, Atomic, and Nuclear Physics
AP Physics C
From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single
Document 4:::
In chemistry, the electrochemical equivalent (Eq or Z) of a chemical element is the mass of that element (in grams) transported by a specific quantity of electricity, usually expressed in grams per coulomb of electric charge. The electrochemical equivalent of an element is measured with a voltameter.
Definition
The electrochemical equivalent of a substance is the mass of the substance deposited to one of the electrodes when a current of 1 ampere is passed for 1 second, i.e. a quantity of electricity of one coulomb is passed.
The formula for finding the electrochemical equivalent is as follows:

Z = m/Q

where m is the mass of substance deposited and Q is the charge passed. Since Q = It, where I is the current applied and t is time, we also have

Z = m/(It)
Eq values of some elements in kg/C
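The table of values itself did not survive extraction, but the relationship above can be sketched numerically via Faraday's laws as Z = M/(nF), with M the molar mass, n the ionic charge, and F the Faraday constant (the function name and example elements below are my own, not from the source):

```python
# Electrochemical equivalent from Faraday's laws: Z = M / (n * F),
# i.e. grams of substance deposited per coulomb of charge passed.
F = 96485.0  # Faraday constant, C/mol

def electrochemical_equivalent(molar_mass_g: float, ionic_charge: int) -> float:
    """Mass in grams deposited at an electrode per coulomb of charge."""
    return molar_mass_g / (ionic_charge * F)

z_cu = electrochemical_equivalent(63.546, 2)   # Cu2+ : ~3.29e-4 g/C
z_ag = electrochemical_equivalent(107.868, 1)  # Ag+  : ~1.12e-3 g/C
```

Multiplying Z by the charge passed (Q = It) recovers the deposited mass, matching the definition above.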
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Difference in electric potential energy are measured in what basic unit?
A. knots
B. volts
C. moles
D. watts
Answer:
|
|
sciq-3343
|
multiple_choice
|
Acne results from a blockage of sebaceous glands by what?
|
[
"progesterone",
"mucous",
"fat",
"sebum"
] |
D
|
Relevant Documents:
Document 0:::
Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education.
Structure
A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior.
Prior to the widespread introduction of SBAs into medical education, the typical form of examination was true-false multiple choice questions. During the 2000s, however, educators found SBAs to be superior.
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 2:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests, and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological and molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular was 630 while Ecological was 591.
On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Acne results from a blockage of sebaceous glands by what?
A. progesterone
B. mucous
C. fat
D. sebum
Answer:
|
|
sciq-5438
|
multiple_choice
|
Bile salts produced by the liver assist in breaking apart what kind of fats?
|
[
"soluble",
"carbohydrates",
"dietary",
"Sugar"
] |
C
|
Relevant Documents:
Document 0:::
Fat globules (also known as mature lipid droplets) are individual pieces of intracellular fat in human cell biology. The lipid droplet's function is to store energy for the organism, and lipid droplets are found in every type of adipocyte. They can consist of a vacuole, droplet of triglyceride, or any other blood lipid, as opposed to fat cells in between other cells in an organ. They contain a hydrophobic core and are encased in a phospholipid monolayer membrane. Due to their hydrophobic nature, lipids and lipid digestive derivatives must be transported in the globular form within the cell, blood, and tissue spaces.
The formation of a fat globule starts within the membrane bilayer of the endoplasmic reticulum. It starts as a bud and detaches from the ER membrane to join other droplets. After the droplets fuse, a mature droplet (full-fledged globule) is formed and can then partake in neutral lipid synthesis or lipolysis.
Globules of fat are emulsified in the duodenum into smaller droplets by bile salts during food digestion, speeding up the rate of digestion by the enzyme lipase at a later point in digestion. Bile salts possess detergent properties that allow them to emulsify fat globules into smaller emulsion droplets, and then into even smaller micelles. This increases the surface area for lipid-hydrolyzing enzymes to act on the fats.
Micelles are roughly 200 times smaller than fat emulsion droplets, allowing them to facilitate the transport of monoglycerides and fatty acids across the surface of the enterocyte, where absorption occurs.
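The ~200-fold size reduction translates directly into a ~200-fold gain in surface area available to lipase, since splitting a fixed fat volume into spheres k times smaller multiplies the total area by k. A minimal numeric sketch (the function name and unit choices are mine, assuming ideal spherical droplets):

```python
import math

def total_surface_area(total_volume: float, droplet_radius: float) -> float:
    """Total surface area when a fixed fat volume is split into equal spheres."""
    droplet_volume = (4.0 / 3.0) * math.pi * droplet_radius ** 3
    n_droplets = total_volume / droplet_volume
    return n_droplets * 4.0 * math.pi * droplet_radius ** 2

volume = (4.0 / 3.0) * math.pi  # one emulsion droplet of radius 1 (arbitrary units)
area_droplet = total_surface_area(volume, 1.0)
area_micelle = total_surface_area(volume, 1.0 / 200.0)  # micelles ~200x smaller
ratio = area_micelle / area_droplet  # grows by the same factor of ~200
```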
Milk fat globules (MFGs) are another form of intracellular fat found in the mammary glands of female mammals. Their function is to provide enriching glycoproteins from the female to their offspring. They are formed in the endoplasmic reticulum found in the mammary epithelial lactating cell. The globules are made up of triacylglycerols encased in cellular membranes and proteins like adipophilin and TIP 47. The proteins are spread througho
Document 1:::
An unsaturated fat is a fat or fatty acid in which there is at least one double bond within the fatty acid chain. A fatty acid chain is monounsaturated if it contains one double bond, and polyunsaturated if it contains more than one double bond.
A saturated fat has no carbon-to-carbon double bonds, so it has the maximum possible number of hydrogen atoms bonded to its carbons and is "saturated" with hydrogen atoms. To form carbon-to-carbon double bonds, hydrogen atoms are removed from the carbon chain. In cellular metabolism, unsaturated fat molecules contain less energy (i.e., fewer calories) than an equivalent amount of saturated fat. The greater the degree of unsaturation in a fatty acid (i.e., the more double bonds in the fatty acid), the more vulnerable it is to lipid peroxidation (rancidity). Antioxidants can protect unsaturated fat from lipid peroxidation.
Composition of common fats
In chemical analysis, fats are broken down to their constituent fatty acids, which can be analyzed in various ways. In one approach, fats undergo transesterification to give fatty acid methyl esters (FAMEs), which are amenable to separation and quantitation by gas chromatography. Classically, unsaturated isomers were separated and identified by argentation thin-layer chromatography.
The saturated fatty acid components are almost exclusively stearic (C18) and palmitic acids (C16). Monounsaturated fats are almost exclusively oleic acid. Linolenic acid comprises most of the triunsaturated fatty acid component.
Chemistry and nutrition
Although polyunsaturated fats are protective against cardiac arrhythmias, a study of post-menopausal women with a relatively low fat intake showed that polyunsaturated fat is positively associated with progression of coronary atherosclerosis, whereas monounsaturated fat is not. This probably is an indication of the greater vulnerability of polyunsaturated fats to lipid peroxidation, against which vitamin E has been shown to be protective.
Examples
Document 2:::
A saponifiable lipid is part of the ester functional group. They are made up of long-chain carboxylic (or fatty) acids connected to an alcoholic functional group through the ester linkage, which can undergo a saponification reaction. The fatty acids are released upon base-catalyzed ester hydrolysis to form ionized salts. The primary saponifiable lipids are free fatty acids, neutral glycerolipids, glycerophospholipids, sphingolipids, and glycolipids.
By comparison, the non-saponifiable class of lipids is made up of terpenes, including fat-soluble A and E vitamins, and certain steroids, such as cholesterol.
Applications
Saponifiable lipids have relevant applications as a source of biofuel and can be extracted from various forms of biomass to produce biodiesel.
See also
Lipids
Simple lipid
Document 3:::
Bile (from Latin bilis), or gall, is a yellow-green fluid produced by the liver of most vertebrates that aids the digestion of lipids in the small intestine. In humans, bile is primarily composed of water, produced continuously by the liver, and stored and concentrated in the gallbladder. After a human eats, this stored bile is discharged into the first section of their small intestine.
Composition
In the human liver, bile is composed of 97–98% water, 0.7% bile salts, 0.2% bilirubin, 0.51% fats (cholesterol, fatty acids, and lecithin), and 200 meq/L inorganic salts. The two main pigments of bile are bilirubin, which is yellow, and its oxidised form biliverdin, which is green. When mixed, they are responsible for the brown color of feces. About of bile is produced per day in adult human beings.
Function
Bile or gall acts to some extent as a surfactant, helping to emulsify the lipids in food. Bile salt anions are hydrophilic on one side and hydrophobic on the other side; consequently, they tend to aggregate around droplets of lipids (triglycerides and phospholipids) to form micelles, with the hydrophobic sides towards the fat and hydrophilic sides facing outwards. The hydrophilic sides are negatively charged, and this charge prevents fat droplets coated with bile from re-aggregating into larger fat particles. Ordinarily, the micelles in the duodenum have a diameter around 1–50 μm in humans.
The dispersion of food fat into micelles provides a greatly increased surface area for the action of the enzyme pancreatic lipase, which digests the triglycerides, and is able to reach the fatty core through gaps between the bile salts. A triglyceride is broken down into two fatty acids and a monoglyceride, which are absorbed by the villi on the intestine walls. After being transferred across the intestinal membrane, the fatty acids reform into triglycerides (), before being absorbed into the lymphatic system through lacteals. Without bile salts, most of the lipids in food wou
Document 4:::
This list consists of common foods with their cholesterol content recorded in milligrams per 100 grams (3.5 ounces) of food.
Functions
Cholesterol is a sterol, a steroid-like lipid made by animals, including humans. The human body makes one-eighth to one-fourth teaspoons of pure cholesterol daily. A cholesterol level of 5.5 millimoles per litre or below is recommended for an adult. A rise of cholesterol in the body can cause a condition called atherosclerosis, in which excessive cholesterol is deposited in artery walls. This condition blocks the blood flow to vital organs and can result in high blood pressure or stroke.
Cholesterol is not always bad. It's a vital part of the cell wall and a precursor to substances such as brain matter and some sex hormones. There are some types of cholesterol which are beneficial to the heart and blood vessels. High-density lipoprotein is commonly called "good" cholesterol. These lipoproteins help in the removal of cholesterol from the cells, which is then transported back to the liver where it is disintegrated and excreted as waste or broken down into parts.
Cholesterol content of various foods
See also
Nutrition
Plant stanol ester
Fatty acid
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Bile salts produced by the liver assist in breaking apart what kind of fats?
A. soluble
B. carbohydrates
C. dietary
D. Sugar
Answer:
|
|
sciq-3999
|
multiple_choice
|
What do fish have that allow them to “breathe” oxygen in water?
|
[
"pores",
"gills",
"lungs",
"layers"
] |
B
|
Relevant Documents:
Document 0:::
Fish gills are organs that allow fish to breathe underwater. Most fish exchange gases like oxygen and carbon dioxide using gills that are protected under gill covers (operculum) on both sides of the pharynx (throat). Gills are tissues that are like short threads, protein structures called filaments. These filaments have many functions including the transfer of ions and water, as well as the exchange of oxygen, carbon dioxide, acids and ammonia. Each filament contains a capillary network that provides a large surface area for exchanging oxygen and carbon dioxide.
Fish exchange gases by pulling oxygen-rich water through their mouths and pumping it over their gills. Within the gill filaments, capillary blood flows in the opposite direction to the water, causing counter-current exchange. The gills push the oxygen-poor water out through openings in the sides of the pharynx. Some fish, like sharks and lampreys, possess multiple gill openings. However, bony fish have a single gill opening on each side. This opening is hidden beneath a protective bony cover called the operculum.
Juvenile bichirs have external gills, a very primitive feature that they share with larval amphibians.
Previously, the evolution of gills was thought to have occurred through two diverging lines: gills formed from the endoderm, as seen in jawless fish species, or those form by the ectoderm, as seen in jawed fish. However, recent studies on gill formation of the little skate (Leucoraja erinacea) has shown potential evidence supporting the claim that gills from all current fish species have in fact evolved from a common ancestor.
Breathing with gills
Air breathing fish can be divided into obligate air breathers and facultative air breathers. Obligate air breathers, such as the African lungfish, are obligated to breathe air periodically or they suffocate. Facultative air breathers, such as the catfish Hypostomus plecostomus, only breathe air if they need to and can otherwise rely on their gills f
Document 1:::
Fish anatomy is the study of the form or morphology of fish. It can be contrasted with fish physiology, which is the study of how the component parts of fish function together in the living fish. In practice, fish anatomy and fish physiology complement each other, the former dealing with the structure of a fish, its organs or component parts and how they are put together, such as might be observed on the dissecting table or under the microscope, and the latter dealing with how those components function together in living fish.
The anatomy of fish is often shaped by the physical characteristics of water, the medium in which fish live. Water is much denser than air, holds a relatively small amount of dissolved oxygen, and absorbs more light than air does. The body of a fish is divided into a head, trunk and tail, although the divisions between the three are not always externally visible. The skeleton, which forms the support structure inside the fish, is either made of cartilage (cartilaginous fish) or bone (bony fish). The main skeletal element is the vertebral column, composed of articulating vertebrae which are lightweight yet strong. The ribs attach to the spine and there are no limbs or limb girdles. The main external features of the fish, the fins, are composed of either bony or soft spines called rays which, with the exception of the caudal fins, have no direct connection with the spine. They are supported by the muscles which compose the main part of the trunk.
The heart has two chambers and pumps the blood through the respiratory surfaces of the gills and then around the body in a single circulatory loop. The eyes are adapted for seeing underwater and have only local vision. There is an inner ear but no external or middle ear. Low-frequency vibrations are detected by the lateral line system of sense organs that run along the length of the sides of fish, which responds to nearby movements and to changes in water pressure.
Sharks and rays are basal fish with
Document 2:::
Aquatic respiration is the process whereby an aquatic organism exchanges respiratory gases with water, obtaining oxygen from oxygen dissolved in water and excreting carbon dioxide and some other metabolic waste products into the water.
Unicellular and simple small organisms
In very small animals, plants and bacteria, simple diffusion of gaseous metabolites is sufficient for respiratory function and no special adaptations are found to aid respiration. Passive diffusion or active transport are also sufficient mechanisms for many larger aquatic animals such as many worms, jellyfish, sponges, bryozoans and similar organisms. In such cases, no specific respiratory organs or organelles are found.
Higher plants
Although higher plants typically use carbon dioxide and excrete oxygen during photosynthesis, they also respire and, particularly during darkness, many plants excrete carbon dioxide and require oxygen to maintain normal functions. In fully submerged aquatic higher plants, specialised structures such as stomata on leaf surfaces control gas interchange. In many species, these structures can be controlled to be open or closed depending on environmental conditions. In conditions of high light intensity and relatively high carbonate ion concentrations, oxygen may be produced in sufficient quantities to form gaseous bubbles on the surface of leaves and may produce oxygen super-saturation in the surrounding water body.
Animals
All animals that practice truly aquatic respiration are poikilothermic. All aquatic homeothermic animals and birds, including cetaceans and penguins, are air-breathing despite their fully aquatic lifestyles.
Echinoderms
Echinoderms have a specialised water vascular system which provides a number of functions including providing the hydraulic power for tube feet but also serves to convey oxygenated sea water into the body and carry waste water out again. In many genera, the water enters through a madreporite, a sieve like structure on the upper surfac
Document 3:::
Amphibious fish are fish that are able to leave water for extended periods of time. About 11 distantly related genera of fish are considered amphibious. This suggests that many fish genera independently evolved amphibious traits, a process known as convergent evolution. These fish use a range of terrestrial locomotory modes, such as lateral undulation, tripod-like walking (using paired fins and tail), and jumping. Many of these locomotory modes incorporate multiple combinations of pectoral-, pelvic-, and tail-fin movement.
Many ancient fish had lung-like organs, and a few, such as the lungfish and bichir, still do. Some of these ancient "lunged" fish were the ancestors of tetrapods. In most recent fish species, though, these organs evolved into the swim bladders, which help control buoyancy. Having no lung-like organs, modern amphibious fish and many fish in oxygen-poor water use other methods, such as their gills or their skin to breathe air. Amphibious fish may also have eyes adapted to allow them to see clearly in air, despite the refractive index differences between air and water.
List of amphibious fish
Lung breathers
Lungfish (Dipnoi): Six species have limb-like fins, and can breathe air. Some are obligate air breathers, meaning they will drown if not given access to breathe air. All but one species bury in the mud when the body of water they live in dries up, surviving up to two years until water returns.
Bichir (Polypteridae): These 12 species are the only ray-finned fish to retain lungs. They are facultative air breathers, requiring access to surface air to breathe in poorly oxygenated water.
Various other "lunged" fish: now extinct, a few of this group were ancestors of the stem tetrapods that led to all tetrapods: Lissamphibia, sauropsids and mammals.
Gill or skin breathers
Rockskippers: These blennies are found on islands in the Indian and Pacific Oceans. They come onto land to catch prey and escape aquatic predators, often for 20 minutes or more.
Document 4:::
The swim bladder, gas bladder, fish maw, or air bladder is an internal gas-filled organ that contributes to the ability of many bony fish (but not cartilaginous fish) to control their buoyancy, and thus to stay at their current water depth without having to expend energy in swimming. Also, the dorsal position of the swim bladder means the center of mass is below the center of volume, allowing it to act as a stabilizing agent. Additionally, the swim bladder functions as a resonating chamber, to produce or receive sound.
The swim bladder is evolutionarily homologous to the lungs of tetrapods and lungfish. Charles Darwin remarked upon this in On the Origin of Species. Darwin reasoned that the lung in air-breathing vertebrates had derived from a more primitive swim bladder as a specialized form of enteral respiration.
Some species have lost the swim bladder again after the embryonic stages, mostly bottom dwellers like the redlip blenny and the weather fish. Other fish—like the opah and the pomfret—use their pectoral fins to swim and balance the weight of the head to keep a horizontal position. The normally bottom dwelling sea robin can use its pectoral fins to produce lift while swimming.
The gas/tissue interface at the swim bladder produces a strong reflection of sound, which is used in sonar equipment to find fish.
Cartilaginous fish, such as sharks and rays, do not have swim bladders. Some of them can control their depth only by swimming (using dynamic lift); others store fats or oils with density less than that of seawater to produce a neutral or near neutral buoyancy, which does not change with depth.
Structure and function
The swim bladder normally consists of two gas-filled sacs located in the dorsal portion of the fish, although in a few primitive species, there is only a single sac. It has flexible walls that contract or expand according to the ambient pressure. The walls of the bladder contain very few blood vessels and are lined with guanine crystals,
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do fish have that allow them to “breathe” oxygen in water?
A. pores
B. gills
C. lungs
D. layers
Answer:
|
|
sciq-7706
|
multiple_choice
|
What is a buildup of electric charges on objects?
|
[
"conduction",
"potential",
"wattage",
"static"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperatureincreases
decreases
stays the same
Impossible to tell/need more information
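The correct choice ("decreases") can be checked numerically from the adiabatic relation for an ideal gas, T·V^(γ−1) = const. The sketch below assumes a monatomic gas and a doubling of volume; these numbers are illustrative and not part of the original question.

```python
# Adiabatic expansion of an ideal gas: T * V**(gamma - 1) stays constant,
# so expanding the gas (V2 > V1) must lower its temperature.
gamma = 5.0 / 3.0          # heat-capacity ratio for a monatomic ideal gas
T1, V1 = 300.0, 1.0        # initial temperature (K) and volume (arb. units)
V2 = 2.0 * V1              # gas expands to twice its volume

T2 = T1 * (V1 / V2) ** (gamma - 1.0)
print(f"T2 = {T2:.1f} K")  # T2 = 189.0 K, lower than T1: temperature decreases
```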
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 2:::
is a series of educational Japanese manga books. Each volume explains a particular subject in science or mathematics. The series is published in Japan by Ohmsha, in America by No Starch Press, in France by H&K, in Italy by L'Espresso, in Malaysia by Pelangi, and in Taiwan by 世茂出版社. Different volumes are written by different authors.
Volume list
As of February 18, 2023, the series consists of 50 volumes in Japan. Fourteen of them have been published in English and six in French so far, with more planned, including one on sociology. In contrast, 49 of them have been translated into Chinese. One of the books has been translated into Swedish.
The Manga Guide to Electricity
This 207-page guide consists of five chapters, excluding the preface, prologue, and epilogue. It explains fundamental concepts in the study of electricity, including Ohm's law and Fleming's rules. There are written explanations after each manga chapter. An index and two pages to write notes on are provided.
The story begins with Rereko, an average high-school student who lives in Electopia (the land of electricity), failing her final electricity exam. She was forced to skip her summer vacation and go to Earth for summer school. The high school teacher Teteka sensei gave her a “transdimensional walkie-talkie and observation robot” named Yonosuke, which she will use later for going back and forth to Earth. Rereko then met her mentor Hikaru sensei, who did Electrical Engineering Research at a university in Tokyo, Japan. Hikaru sensei explained to Rereko the basic components of electricity with occasional humorous moments.
In the fifth chapter, Hikaru sensei told Rereko her studies are over. Yonosuke soon received Electopia’s call to pick Rereko up. Hikaru sensei told her that he learned a lot from teaching her, and she should keep at it, even back on Electopia. Rereko told Hikaru sensei to keep working on his research and clean his room often. Her sentence was interrupted, and she wa
Document 3:::
This is a list of electrical phenomena. Electrical phenomena are a somewhat arbitrary division of electromagnetic phenomena.
Some examples are:
Biefeld–Brown effect — Thought by the person who coined the name, Thomas Townsend Brown, to be an anti-gravity effect, it is generally attributed to electrohydrodynamics (EHD) or sometimes electro-fluid-dynamics, a counterpart to the well-known magneto-hydrodynamics.
Bioelectrogenesis — The generation of electricity by living organisms.
Capacitive coupling — Transfer of energy within an electrical network or between distant networks by means of displacement current.
Contact electrification — The phenomenon of electrification by contact. When two objects are touched together, they sometimes become spontaneously charged (one acquiring a negative charge, the other a positive charge).
Corona effect — Build-up of charges in a high-voltage conductor (common in AC transmission lines), which ionizes the air and produces visible light, usually purple.
Dielectric polarization — Orientation of charges in certain insulators inside an external static electric field, such as when a charged object is brought close, which produces an electric field inside the insulator.
Direct current — (formerly "galvanic current" or "continuous current") the continuous flow of electricity through a conductor such as a wire from high to low potential.
Electromagnetic induction — Production of a voltage by a time-varying magnetic flux.
Electroluminescence — The phenomenon wherein a material emits light in response to an electric current passed through it, or to a strong electric field.
Electrostatic induction — Redistribution of charges in a conductor inside an external static electric field, such as when a charged object is brought close.
Electrical conduction — The movement of electrically charged particles through transmission medium.
Electric shock — Physiological reaction of a biological organism to the passage of electric current through its body.
Ferranti effect
Document 4:::
Electrical energy is energy related to forces on electrically-charged particles and the movement of those particles (often electrons in wires, but not always). This energy is supplied by the combination of current and electric potential (often referred to as voltage because electric potential is measured in volts) that is delivered by a circuit (e.g., provided by an electric power utility). Motion (current) is not required; for example, if there is a voltage difference in combination with charged particles, such as static electricity or a charged capacitor, the moving electrical energy is typically converted to another form of energy (e.g., thermal, motion, sound, light, radio waves, etc.).
Electrical energy is usually sold by the kilowatt hour (1 kW·h = 3.6 MJ) which is the product of the power in kilowatts multiplied by running time in hours. Electric utilities measure energy using an electricity meter, which keeps a running total of the electric energy delivered to a customer.
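The billing arithmetic above (energy = power × time, with 1 kW·h = 3.6 MJ) can be sketched as follows; the appliance rating and the tariff are made-up illustrative numbers.

```python
# Electrical energy billed by the kilowatt hour: energy = power * time.
power_kw = 1.5    # e.g. an electric heater drawing 1.5 kW (assumed)
hours = 4.0       # running time

energy_kwh = power_kw * hours    # energy delivered, in kW·h
energy_mj = energy_kwh * 3.6     # 1 kW·h = 3.6 MJ, per the text
cost = energy_kwh * 0.25         # assumed tariff of $0.25 per kW·h

print(f"{energy_kwh:.1f} kWh = {energy_mj:.1f} MJ, cost ${cost:.2f}")
# 6.0 kWh = 21.6 MJ, cost $1.50
```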
Electric heating is an example of converting electrical energy into another form of energy, heat. The simplest and most common type of electric heater uses electrical resistance to convert the energy. There are other ways to use electrical energy. In computers for example, tiny amounts of electrical energy are rapidly moving into, out of, and through millions of transistors, where the energy is both moving (current through a transistor) and non-moving (electric charge on the gate of a transistor which controls the current going through).
Electricity generation
Electricity generation is the process of generating electrical energy from other forms of energy.
The fundamental principle of electricity generation was discovered during the 1820s and early 1830s by the British scientist Michael Faraday. His basic method is still used today: electric current is generated by the movement of a loop of wire, or disc of copper between the poles of a magnet.
For electrical utilities, it is th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a buildup of electric charges on objects?
A. conduction
B. potential
C. wattage
D. static
Answer:
|
|
ai2_arc-815
|
multiple_choice
|
A combination of processes occurs when stars are forming. Which is one process most likely associated with the formation of new stars?
|
[
"Hydrogen in the cores of the stars is exhausted.",
"Material accumulates from stars that have died.",
"Elements in the stars such as iron undergo fusion.",
"Cores of stars become twice as massive as the Sun."
] |
B
|
Relevant Documents:
Document 0:::
Nucleosynthesis is the process that creates new atomic nuclei from pre-existing nucleons (protons and neutrons) and nuclei. According to current theories, the first nuclei were formed a few minutes after the Big Bang, through nuclear reactions in a process called Big Bang nucleosynthesis. After about 20 minutes, the universe had expanded and cooled to a point at which these high-energy collisions among nucleons ended, so only the fastest and simplest reactions occurred, leaving our universe containing hydrogen and helium. The rest is traces of other elements such as lithium and the hydrogen isotope deuterium. Nucleosynthesis in stars and their explosions later produced the variety of elements and isotopes that we have today, in a process called cosmic chemical evolution. The amounts of total mass in elements heavier than hydrogen and helium (called 'metals' by astrophysicists) remains small (few percent), so that the universe still has approximately the same composition.
Stars fuse light elements to heavier ones in their cores, giving off energy in the process known as stellar nucleosynthesis. Nuclear fusion reactions create many of the lighter elements, up to and including iron and nickel in the most massive stars. Products of stellar nucleosynthesis remain trapped in stellar cores and remnants except if ejected through stellar winds and explosions. The neutron capture reactions of the r-process and s-process create heavier elements, from iron upwards.
Supernova nucleosynthesis within exploding stars is largely responsible for the elements between oxygen and rubidium: from the ejection of elements produced during stellar nucleosynthesis; through explosive nucleosynthesis during the supernova explosion; and from the r-process (absorption of multiple neutrons) during the explosion.
Neutron star mergers are a recently discovered major source of elements produced in the r-process. When two neutron stars collide, a significant amount of neutron-rich matter may be ej
Document 1:::
In astrophysics, accretion is the accumulation of particles into a massive object by gravitationally attracting more matter, typically gaseous matter, into an accretion disk. Most astronomical objects, such as galaxies, stars, and planets, are formed by accretion processes.
Overview
The accretion model that Earth and the other terrestrial planets formed from meteoric material was proposed in 1944 by Otto Schmidt, followed by the protoplanet theory of William McCrea (1960) and finally the capture theory of Michael Woolfson. In 1978, Andrew Prentice resurrected the initial Laplacian ideas about planet formation and developed the modern Laplacian theory. None of these models proved completely successful, and many of the proposed theories were descriptive.
The 1944 accretion model by Otto Schmidt was further developed in a quantitative way in 1969 by Viktor Safronov. He calculated, in detail, the different stages of terrestrial planet formation. Since then, the model has been further developed using intensive numerical simulations to study planetesimal accumulation. It is now accepted that stars form by the gravitational collapse of interstellar gas. Prior to collapse, this gas is mostly in the form of molecular clouds, such as the Orion Nebula. As the cloud collapses, losing potential energy, it heats up, gaining kinetic energy, and the conservation of angular momentum ensures that the cloud forms a flattened disk—the accretion disk.
Accretion of galaxies
A few hundred thousand years after the Big Bang, the Universe cooled to the point where atoms could form. As the Universe continued to expand and cool, the atoms lost enough kinetic energy, and dark matter coalesced sufficiently, to form protogalaxies. As further accretion occurred, galaxies formed. Indirect evidence is widespread. Galaxies grow through mergers and smooth gas accretion. Accretion also occurs inside galaxies, forming stars.
Accretion of stars
Stars are thought to form inside giant clouds of cold
Document 2:::
Star formation is the process by which dense regions within molecular clouds in interstellar space, sometimes referred to as "stellar nurseries" or "star-forming regions", collapse and form stars. As a branch of astronomy, star formation includes the study of the interstellar medium (ISM) and giant molecular clouds (GMC) as precursors to the star formation process, and the study of protostars and young stellar objects as its immediate products. It is closely related to planet formation, another branch of astronomy. Star formation theory, as well as accounting for the formation of a single star, must also account for the statistics of binary stars and the initial mass function. Most stars do not form in isolation but as part of a group of stars referred as star clusters or stellar associations.
Stellar nurseries
Interstellar clouds
Spiral galaxies like the Milky Way contain stars, stellar remnants, and a diffuse interstellar medium (ISM) of gas and dust. The interstellar medium consists of 10⁴ to 10⁶ particles per cm³, and is typically composed of roughly 70% hydrogen, 28% helium, and 1.5% heavier elements by mass. The trace amounts of heavier elements were and are produced within stars via stellar nucleosynthesis and ejected as the stars pass beyond the end of their main sequence lifetime. Higher density regions of the interstellar medium form clouds, or diffuse nebulae, where star formation takes place. In contrast to spiral galaxies, elliptical galaxies lose the cold component of their interstellar medium within roughly a billion years, which hinders them from forming diffuse nebulae except through mergers with other galaxies.
In the dense nebulae where stars are produced, much of the hydrogen is in the molecular (H2) form, so these nebulae are called molecular clouds. The Herschel Space Observatory has revealed that filaments, or elongated dense gas structures, are truly ubiquitous in molecular clouds and central to the star formation process. They fr
Document 3:::
Stellar chemistry is the study of the chemical composition of astronomical objects; stars in particular, hence the name stellar chemistry. The significance of stellar chemical composition is an open ended question at this point. Some research asserts that a greater abundance of certain elements (such as carbon, sodium, silicon, and magnesium) in the stellar mass are necessary for a star's inner solar system to be habitable over long periods of time. The hypothesis being that the "abundance of these elements make the star cooler and cause it to evolve more slowly, thereby giving planets in its habitable zone more time to develop life as we know it." Stellar abundance of oxygen also appears to be critical to the length of time newly developed planets exist in a habitable zone around their host star. Researchers postulate that if our own sun had a lower abundance of oxygen, the Earth would have ceased to "live" in a habitable zone a billion years ago, long before complex organisms had the opportunity to evolve.
Other research
Other research is being or has been done in numerous areas relating to the chemical nature of stars. The formation of stars is of particular interest. Research published in 2009 presents spectroscopic observations of so-called "young stellar objects" viewed in the Large Magellanic Cloud with the Spitzer Space Telescope. This research suggests that water, or, more specifically, ice, plays a large role in the formation of these eventual stars.
Others are researching much more tangible ideas relating to stars and chemistry. Research published in 2010 studied the effects of a strong stellar flare on the atmospheric chemistry of an Earth-like planet orbiting an M dwarf star, specifically, the M dwarf AD Leonis. This research simulated the effects an observed flare produced by AD Leonis on April 12, 1985 would have on a hypothetical Earth-like planet. After simulating the effects of both UV radiation and protons on the hypothetical planet's a
Document 4:::
In astrophysics, silicon burning is a very brief sequence of nuclear fusion reactions that occur in massive stars with a minimum of about 8–11 solar masses. Silicon burning is the final stage of fusion for massive stars that have run out of the fuels that power them for their long lives in the main sequence on the Hertzsprung–Russell diagram. It follows the previous stages of hydrogen, helium, carbon, neon and oxygen burning processes.
Silicon burning begins when gravitational contraction raises the star's core temperature to 2.7–3.5 billion kelvins (GK). The exact temperature depends on mass. When a star has completed the silicon-burning phase, no further fusion is possible. The star catastrophically collapses and may explode in what is known as a Type II supernova.
Nuclear fusion sequence and silicon photodisintegration
After a star completes the oxygen-burning process, its core is composed primarily of silicon and sulfur. If it has sufficiently high mass, it further contracts until its core reaches temperatures in the range of 2.7–3.5 GK (230–300 keV). At these temperatures, silicon and other elements can photodisintegrate, emitting a proton or an alpha particle. Silicon burning proceeds by photodisintegration rearrangement, which creates new elements by the alpha process, adding one of these freed alpha particles (the equivalent of a helium nucleus) per capture step in the following sequence (photoejection of alphas not shown):
²⁸Si + ⁴He → ³²S
³²S + ⁴He → ³⁶Ar
³⁶Ar + ⁴He → ⁴⁰Ca
⁴⁰Ca + ⁴He → ⁴⁴Ti
⁴⁴Ti + ⁴He → ⁴⁸Cr
⁴⁸Cr + ⁴He → ⁵²Fe
⁵²Fe + ⁴He → ⁵⁶Ni
Although the chain could theoretically continue, steps after nickel-56 are much less exothermic and the temperature is so high that photodisintegration prevents further progress.
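The equivalence quoted above between core temperature and particle energy (2.7–3.5 GK ≈ 230–300 keV) follows from multiplying by the Boltzmann constant. A quick check, using the CODATA value of k_B in eV/K:

```python
# Convert a stellar core temperature in gigakelvin (GK) to the thermal
# energy scale k_B * T in keV.
K_B_EV_PER_K = 8.617333e-5  # Boltzmann constant, eV/K (CODATA)

def gk_to_kev(t_gk: float) -> float:
    """Thermal energy k_B*T in keV for a temperature given in GK."""
    return t_gk * 1e9 * K_B_EV_PER_K / 1e3

print(f"{gk_to_kev(2.7):.0f} keV")  # 233 keV
print(f"{gk_to_kev(3.5):.0f} keV")  # 302 keV
```

These round to the 230–300 keV range stated in the text.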
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A combination of processes occurs when stars are forming. Which is one process most likely associated with the formation of new stars?
A. Hydrogen in the cores of the stars is exhausted.
B. Material accumulates from stars that have died.
C. Elements in the stars such as iron undergo fusion.
D. Cores of stars become twice as massive as the Sun.
Answer:
|
|
sciq-1769
|
multiple_choice
|
What are species that are first to colonize a disturbed area called?
|
[
"brave species",
"pioneer species",
"exploratory species",
"novel species"
] |
B
|
Relevant Documents:
Document 0:::
Colonisation or colonization is the process in biology by which a species spreads to new areas. Colonisation often refers to successful immigration where a population becomes integrated into an ecological community, having resisted initial local extinction. In ecology, it is represented by the symbol λ (lowercase lambda) to denote the long-term intrinsic growth rate of a population.
One classic scientific model in biogeography posits that a species must continue to colonize new areas through its life cycle (called a taxon cycle) in order to achieve longevity. Accordingly, colonisation and extinction are key components of island biogeography, a theory that has many applications in ecology, such as metapopulations.
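Treating λ as a per-generation multiplier gives a simple geometric model of a colonizing population's long-term fate; the starting population and growth rates below are illustrative assumptions, not values from the text.

```python
# Geometric growth: N_t = N_0 * lambda**t.
# lambda > 1 -> the colonist establishes; lambda < 1 -> local extinction.
def project(n0: float, lam: float, generations: int) -> float:
    """Population size after the given number of generations."""
    return n0 * lam ** generations

print(f"{project(10, 1.2, 20):.0f}")   # 383  (growing colonist)
print(f"{project(10, 0.8, 20):.2f}")   # 0.12 (colonist headed for extinction)
```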
Scale
Colonisation occurs on several scales. In its most basic form, it occurs as biofilm, in the formation of communities of microorganisms on surfaces. At small scales, it involves colonising new sites, perhaps as a result of environmental change, while at larger scales a species expands its range to encompass new areas. This can happen via a series of small encroachments, such as in woody plant encroachment, or by long-distance dispersal. The term range expansion is also used.
Use
The term is generally only used to refer to the spread of a species into new areas by natural means, as opposed to unnatural introduction or translocation by humans, which may lead to invasive species.
Colonisation events
Large-scale notable pre-historic colonisation events include:
Arthropods
the colonisation of the earth's land by the first animals, the arthropods. The first fossils of land animals come from millipedes. These were seen about 450 million years ago (Dunn, 2013).
Humans
the early human migration and colonisation of areas outside Africa according to the recent African origin paradigm, resulting in the extinction of Pleistocene megafauna, although the role of humans in this event is controversial.
Some large-scale notable colonisation events during the 20th century are:
Document 1:::
A cultural keystone species is one which is of exceptional significance to a particular culture or a people. Such species can be identified by their prevalence in language, cultural practices (e.g. ceremonies), traditions, diet, medicines, material items, and histories of a community. These species influence social systems and culture and are a key feature of a community's identity.
The concept was first proposed by Gary Nabhan and John Carr in 1994 and later described by Sergio Cristancho and Joanne Vining in 2000 and by ethnobotanist Ann Garibaldi and ethnobiologist Nancy Turner in 2004. It is a "metaphorical parallel" to the ecological keystone species concept, and may be useful for biodiversity conservation and ecological restoration.
Definitions
The exact definition of cultural keystone species remains under debate and is considered to be more abstract than the related ecological concept. Garibaldi and Turner emphasize that the cultural keystone species concept is not an extension of ecological keystone species, but rather a parallel concept that bridges social and physical sciences, as well as indigenous knowledge and western knowledge, to offer a more holistic approach. Other researchers debate whether or not cultural keystone species are different from economically important species. Additionally, it is argued that the concept will be reduced to a biological term if it only focuses on specific species, but this may be solved by considering cultural keystone species as a "complex" that develops based on the ways that the species is used and its impacts on cultural practices over time, through conscious social practices, decision-making processes, and changes to societal needs and practices.
Garibaldi and Turner outline six elements that should be considered when identifying a cultural keystone species:
The magnitude and variety of ways the species is used
The species' influence on language
The species' role in cultural practices (e.g. traditional prac
Document 2:::
In botany, a neophyte (from Greek νέος (néos) "new" and φυτόν (phutón) "plant") is a plant species which is not native to a geographical region and was introduced in recent history. Non-native plants that are long-established in an area are called archaeophytes. In Britain, neophytes are defined more specifically as plant species that were introduced after 1492, when Christopher Columbus arrived in the New World and the Columbian Exchange began.
Terminology
The terminology of invasion biology is inconsistent. In the English-speaking world, terms such as invasive species predominate; these are interpreted differently and do not distinguish between groups of organisms or characteristics of the species. The International Union for Conservation of Nature and Natural Resources (IUCN) differentiates in its definitions between alien species and invasive alien species: alien species are species that have been introduced into a foreign area through human influence, while the attribute invasive is assigned to species that displace native species in their new habitat.
In English, umbrella terms such as alien species or, for species that suppress natives, invasive species are used without differentiating between plants, animals and fungi. The term "neonative" has, however, been proposed.
Definition
In addition to this inconsistency, the xenophobic connotation of invasive and alien has been criticized. The neutral designation neobiota unites all species that have colonized new areas through human influence. However, the terms with neo- are not used in a completely uniform way:
According to one opinion, the terms neobiota or neophytes or neozoa apply regardless of when a species was introduced.
According to another understanding, these names only apply to species introduced from 1492 onwards. The year of the discovery of America by Columbus was chosen as the border because it marks the beginning of the intensive e
Document 3:::
This is a list of taxa whose location or distribution is notably difficult to explain; e.g., species which came to occupy a range distant from that of their closest relatives by a process or history that is not understood, or is a subject of controversy.
Specific taxa
Mammals
Falkland Islands wolf
Gansu mole
Pennant's colobus
Birds
Elephant birds
Moa
Nicobar megapode
Reptiles
Brachylophus
Lapitiguana
Phelsuma andamanense
Assemblages of taxa
Lusitanian flora
Document 4:::
Future Evolution is a book written by paleontologist Peter Ward and illustrated by Alexis Rockman. He addresses his own opinion of future evolution and compares it with Dougal Dixon's After Man: A Zoology of the Future and H. G. Wells's The Time Machine.
According to Ward, humanity may exist for a long time. Nevertheless, we are impacting our planet. He splits his book in different chronologies, starting with the near future (the next 1,000 years). Humanity would be struggling to support a massive population of 11 billion. Global warming raises sea levels. The ozone layer weakens. Most of the available land is devoted to agriculture due to the demand for food. Despite all this, the oceanic wildlife remains untethered by most of these impacts, specifically the commercial farmed fish. This is, according to Ward, an era of extinction that would last about 10 million years (note that many human-caused extinctions have already occurred). After that, Earth gets stranger.
Ward labels the species that have the potential to survive in a human-infested world. These include dandelions, raccoons, owls, pigs, cattle, rats, snakes, and crows to name but a few. In the human-infested ecosystem, those preadapted to live amongst man survived and prospered. Ward describes garbage dumps 10 million years in the future infested with multiple species of rats, a snake with a sticky frog-like tongue to snap up rodents, and pigs with snouts specialized for rooting through garbage. The story's time traveller who views this new refuse-covered habitat is gruesomely attacked by ravenous flesh-eating crows.
Ward then questions the potential for humanity to evolve into a new species. According to him, this is incredibly unlikely. For this to happen a human population must isolate itself and interbreed until it becomes a new species. Then he questions if humanity would survive or extinguish itself by climate change, nuclear war, disease, or the posing threat of nanotechnology as terrorist weapon
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are species that are first to colonize a disturbed area called?
A. brave species
B. pioneer species
C. exploratory species
D. novel species
Answer:
|
|
sciq-5632
|
multiple_choice
|
What causes a small scrap of paper placed on top of a water droplet to float, although the object is denser (heavier) than the water?
|
[
"diffusion",
"surface tension",
"van der waals force",
"transfusion"
] |
B
|
Relevant Documents:
Document 0:::
Upstream contamination by floating particles is a counterintuitive phenomenon in fluid dynamics. When pouring water from a higher container to a lower one, particles floating in the latter can climb upstream into the upper container. A definitive explanation is still lacking: experimental and computational evidence indicates that the contamination is chiefly driven by surface tension gradients; however, the phenomenon is also affected by the dynamics of swirling flows that remain to be fully investigated.
Origins
The phenomenon was observed in 2008 by the Argentine Sebastian Bianchini during mate tea preparation, while studying physics at the University of Havana.
It rapidly attracted the interest of professor Alejandro Lage-Castellanos, who performed, with Bianchini, a series of controlled experiments. Later on, professor Ernesto Altshuler completed the trio in Havana, which resulted in Bianchini's diploma thesis and a short original paper posted on the arXiv preprint server and mentioned as a surprising fact in some online journals.
Bianchini's Diploma thesis showed that the phenomenon could be reproduced in a controlled laboratory setting using mate leaves or chalk powder as contaminants, and that temperature gradients (hot in the top, cold in the bottom) were not necessary to generate the effect. The research also showed that surface tension was key to the explanation through the Marangoni effect. This was suggested by two facts: (a) both mate and chalk lowered the surface tension of water, and (b) if an industrial surfactant was added on the upper reservoir, the upstream motion of particles would stop.
Confirmation
After a talk by Lage-Castellanos at the First Workshop on Complex Matter Physics in Havana (MarchCOMeeting'2012), professor Troy Shinbrot of Rutgers University became interested in the subject. Together with student Theo Siu, Cuban results were confirmed and expanded with new experiments and numerical simulations at Rutgers, which resulted in a joint pee
Document 1:::
The water thread experiment is a phenomenon that occurs when two containers of deionized water, placed on an insulator, are connected by a thread, then a high-voltage positive electric charge is applied to one container, and a negative charge to the other. At a critical voltage, an unsupported water liquid bridge is formed between the containers, which will remain even when they are separated. The phenomenon was first reported in 1893 in a public lecture by the British engineer William Armstrong.
The bridge as observed in a typical configuration has a diameter of 1–3 mm so the bridge remains intact when pulled as far as , and remains stable up to 45 minutes. The surface temperature also rises from an initial surface temperature of up to before breakdown.
Experiment
In a typical experiment, two 100 mL beakers are filled with deionized water to roughly 3 mm below the edge of the beaker, and the water exposed to 15 kV direct current, with one beaker turning negative, and the other positive. After building up electric charge, the water then spontaneously rises along the thread over the glass walls and forms a "water bridge" between them. When one beaker is slowly pushed away from the other, the structure remains. When the voltage rises to 25 kV, the structure can be pulled apart as far as . If the thread is very short, then the force of the water may be strong enough to push the thread from the positive glass into the negative glass.
The water generally travels from anode to cathode, but the direction may vary due to the different surface charge that builds up at the water bridge surface, which will generate electrical shear stresses of different signs. The bridge breaks into droplets due to capillary action when the beakers are pulled apart at a critical distance, or the voltage is reduced to a critical value.
The bridge needs clean, deionized water to be formed, and its stability is dramatically reduced as ions are introduced into the liquid (by either adding sa
Document 2:::
Capillary action (sometimes called capillarity, capillary motion, capillary rise, capillary effect, or wicking) is the process of a liquid flowing in a narrow space without the assistance of, or even in opposition to, any external forces like gravity.
The effect can be seen in the drawing up of liquids between the hairs of a paint-brush, in a thin tube such as a straw, in porous materials such as paper and plaster, in some non-porous materials such as sand and liquefied carbon fiber, or in a biological cell.
It occurs because of intermolecular forces between the liquid and surrounding solid surfaces. If the diameter of the tube is sufficiently small, then the combination of surface tension (which is caused by cohesion within the liquid) and adhesive forces between the liquid and container wall act to propel the liquid.
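The balance between adhesion, cohesion, and gravity described above can be made quantitative with Jurin's law, which gives the equilibrium rise height in a narrow tube. A minimal sketch (the property values for water and the 0.5 mm tube radius are illustrative assumptions, not taken from this text):

```python
import math

def capillary_rise(surface_tension, contact_angle_deg, density, radius, g=9.81):
    """Jurin's law: equilibrium rise height h = 2*gamma*cos(theta) / (rho*g*r)."""
    theta = math.radians(contact_angle_deg)
    return 2 * surface_tension * math.cos(theta) / (density * g * radius)

# Approximate values for water in a clean glass tube:
# gamma ~ 0.0728 N/m at 20 C, contact angle ~ 0 degrees, rho = 1000 kg/m^3
h = capillary_rise(0.0728, 0.0, 1000.0, 0.0005)  # tube radius 0.5 mm
print(f"rise height: {h * 1000:.1f} mm")  # roughly 30 mm
```

Note that the predicted height scales inversely with tube radius, which is why the effect is only noticeable in sufficiently narrow tubes.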
Etymology
Capillary comes from the Latin word capillaris, meaning "of or resembling hair." The meaning stems from the tiny, hairlike diameter of a capillary. While capillary is usually used as a noun, the word also is used as an adjective, as in "capillary action," in which a liquid is moved along — even upward, against gravity — as the liquid is attracted to the internal surface of the capillaries.
History
The first recorded observation of capillary action was by Leonardo da Vinci. A former student of Galileo, Niccolò Aggiunti, was said to have investigated capillary action. In 1660, capillary action was still a novelty to the Irish chemist Robert Boyle, when he reported that "some inquisitive French Men" had observed that when a capillary tube was dipped into water, the water would ascend to "some height in the Pipe". Boyle then reported an experiment in which he dipped a capillary tube into red wine and then subjected the tube to a partial vacuum. He found that the vacuum had no observable influence on the height of the liquid in the capillary, so the behavior of liquids in capillary tubes was due to some phenomenon different from that
Document 3:::
A liquid is a nearly incompressible fluid that conforms to the shape of its container but retains a nearly constant volume independent of pressure. It is one of the four fundamental states of matter (the others being solid, gas, and plasma), and is the only state with a definite volume but no fixed shape.
The density of a liquid is usually close to that of a solid, and much higher than that of a gas. Therefore, liquid and solid are both termed condensed matter. On the other hand, as liquids and gases share the ability to flow, they are both called fluids.
A liquid is made up of tiny vibrating particles of matter, such as atoms, held together by intermolecular bonds. Like a gas, a liquid is able to flow and take the shape of a container. Unlike a gas, a liquid maintains a fairly constant density and does not disperse to fill every space of a container.
Although liquid water is abundant on Earth, this state of matter is actually the least common in the known universe, because liquids require a relatively narrow temperature/pressure range to exist. Most known matter in the universe is either gas (as interstellar clouds) or plasma (as stars).
Introduction
Liquid is one of the four primary states of matter, with the others being solid, gas and plasma. A liquid is a fluid. Unlike a solid, the molecules in a liquid have a much greater freedom to move. The forces that bind the molecules together in a solid are only temporary in a liquid, allowing a liquid to flow while a solid remains rigid.
A liquid, like a gas, displays the properties of a fluid. A liquid can flow, assume the shape of a container, and, if placed in a sealed container, will distribute applied pressure evenly to every surface in the container. If liquid is placed in a bag, it can be squeezed into any shape. Unlike a gas, a liquid is nearly incompressible, meaning that it occupies nearly a constant volume over a wide range of pressures; it does not generally expand to fill available space in a containe
Document 4:::
Flotation of flexible objects is a phenomenon in which the bending of a flexible material allows an object to displace a greater amount of fluid than if it were completely rigid. This ability to displace more fluid translates directly into an ability to support greater loads, giving the flexible structure an advantage over a similarly rigid one. Inspiration to study the effects of elasticity are taken from nature, where plants, such as black pepper, and animals living at the water surface have evolved to take advantage of the load-bearing benefits elasticity imparts.
History
In his work "On Floating Bodies", Archimedes famously stated:
While this basic idea carried enormous weight and has come to form the basis of understanding why objects float, it is best applied for objects with a characteristic length scale greater than the capillary length. What Archimedes had failed to predict was the influence of surface tension and its impact at small length scales.
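The crossover scale mentioned here, the capillary length, follows from balancing surface tension against gravity. A quick sketch (the water property values are rough, for illustration only):

```python
import math

def capillary_length(surface_tension, density, g=9.81):
    """Length scale below which surface tension dominates gravity:
    l_c = sqrt(gamma / (rho * g))."""
    return math.sqrt(surface_tension / (density * g))

# For water (gamma ~ 0.0728 N/m, rho = 1000 kg/m^3) the capillary length
# is a few millimetres, which is why surface-tension effects matter for
# small floating objects but are negligible for large bodies like ships.
lc = capillary_length(0.0728, 1000.0)
print(f"capillary length of water: {lc * 1000:.2f} mm")  # about 2.7 mm
```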
More recent works, such as that of Keller, have extended these principles by considering the role of surface tension forces on partially submerged bodies. Keller, for instance, demonstrated analytically that the weight of water displaced by a meniscus is equal to the vertical component of the surface tension force.
Nonetheless, the role of flexibility and its impact on an object's load-bearing potential is one that did not receive attention until the mid-2000s and onward. In an initial study, Vella studied the load supported by a raft composed of thin, rigid strips. Specifically, he compared the case of floating individual strips to floating an aggregation of strips, wherein the aggregate structure causes portions of the meniscus (and hence the resulting surface tension force) to disappear. By extending his analysis to consider a similar system composed of thin strips of some finite bending stiffness, he found that this latter case was in fact able to support a greater load.
A well known work in the area of surface t
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What causes a small scrap of paper placed on top of the water droplet to float, although the object is denser (heavier) than the water?
A. diffusion
B. surface tension
C. van der Waals force
D. transfusion
Answer:
|
|
sciq-3947
|
multiple_choice
|
What is needed to provide cells with the oxygen they need for cellular respiration?
|
[
"photosynthesis",
"vascular tissue",
"gas exchange",
"passive transport"
] |
C
|
Relevant Documents:
Document 0:::
Cellular respiration is the process by which biological fuels are oxidized in the presence of an inorganic electron acceptor, such as oxygen, to drive the bulk production of adenosine triphosphate (ATP), which contains energy. Cellular respiration may be described as a set of metabolic reactions and processes that take place in the cells of organisms to convert chemical energy from nutrients into ATP, and then release waste products.
Cellular respiration is a vital process that happens in the cells of living organisms, including humans, plants, and animals. It's how cells produce energy to power all the activities necessary for life.
The reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, producing large amounts of energy (ATP). Respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. The overall reaction occurs in a series of biochemical steps, some of which are redox reactions. Although cellular respiration is technically a combustion reaction, it is an unusual one because of the slow, controlled release of energy from the series of reactions.
Nutrients that are commonly used by animal and plant cells in respiration include sugar, amino acids and fatty acids, and the most common oxidizing agent is molecular oxygen (O2). The chemical energy stored in ATP (the bond of its third phosphate group to the rest of the molecule can be broken allowing more stable products to form, thereby releasing energy for use by the cell) can then be used to drive processes requiring energy, including biosynthesis, locomotion or transportation of molecules across cell membranes.
Aerobic respiration
Aerobic respiration requires oxygen (O2) in order to create ATP. Although carbohydrates, fats and proteins are consumed as reactants, aerobic respiration is the preferred method of pyruvate breakdown in glycolysis, and requires pyruvate to be transported to the mitochondria in order to be fully oxidized by the c
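The overall aerobic oxidation of glucose described above (C6H12O6 + 6 O2 → 6 CO2 + 6 H2O) can be sanity-checked for mass balance. A small sketch (the `atoms` helper is a hypothetical utility written for this illustration, not part of any cited source):

```python
from collections import Counter

def atoms(terms):
    """Sum element counts over (coefficient, {element: count}) terms."""
    total = Counter()
    for coeff, counts in terms:
        for element, n in counts.items():
            total[element] += coeff * n
    return total

# Overall aerobic respiration: C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O
reactants = atoms([(1, {"C": 6, "H": 12, "O": 6}), (6, {"O": 2})])
products = atoms([(6, {"C": 1, "O": 2}), (6, {"H": 2, "O": 1})])
assert reactants == products  # every atom on the left appears on the right
print(dict(reactants))  # 6 C, 12 H, 18 O on each side
```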
Document 1:::
Maintenance respiration (or maintenance energy) refers to metabolism occurring in an organism that is needed to maintain that organism in a healthy, living state. Maintenance respiration contrasts with growth respiration, which is responsible for the synthesis of new structures in growth, nutrient uptake, nitrogen (N) reduction and phloem loading, whereas maintenance respiration is associated with protein and membrane turnover and maintenance of ion concentrations and gradients.
In plants
Maintenance respiration in plants refers to the amount of cellular respiration, measured by the carbon dioxide (CO2) released or oxygen (O2) consumed, during the generation of usable energy (mainly ATP, NADPH, and NADH) and metabolic intermediates used for (i) resynthesis of compounds that undergo renewal (turnover) in the normal process of metabolism (examples are enzymatic proteins, ribonucleic acids, and membrane lipids); (ii) maintenance of chemical gradients of ions and metabolites across cellular membranes that are necessary for cellular integrity and plant health; and (iii) operation of metabolic processes involved in physiological adjustment (i.e., acclimation) to a change in the plant's environment. The metabolic costs of the repair of injury from biotic or abiotic stress may also be considered a part of maintenance respiration.
Maintenance respiration is essential for biological health and growth of plants. It is estimated that about half of the respiration carried out by terrestrial plants during their lifetime is for the support of maintenance processes. Because typically more than half of global terrestrial plant photosynthesis (or gross primary production) is used for plant respiration, more than one quarter of global terrestrial plant photosynthesis is presumably consumed in maintenance respiration.
Maintenance respiration is a key component of most physiologically based mathematical models of plant growth, including models of crop growth and yield and models of
Document 2:::
Respirocytes are hypothetical, microscopic, artificial red blood cells that are intended to emulate the function of their organic counterparts, so as to supplement or replace the function of much of the human body's normal respiratory system. Respirocytes were proposed by Robert A. Freitas Jr in his 1998 paper "A Mechanical Artificial Red Blood Cell: Exploratory Design in Medical Nanotechnology".
Respirocytes are an example of molecular nanotechnology, a field of technology still in the very earliest, purely hypothetical phase of development. Current technology is not sufficient to build a respirocyte due to considerations of power, atomic-scale manipulation, immune reaction or toxicity, computation and communication.
Structure of a respirocyte
Freitas proposed a spherical robot made up of 18 billion atoms arranged as a tiny pressure tank, which would be filled up with oxygen and carbon dioxide.
Uses
In Freitas' proposal, each respirocyte could store and transport 236 times more oxygen than a natural red blood cell, and could release it in a more controlled manner.
Freitas has also proposed "microbivore" robots that would attack pathogens in the manner of white blood cells.
See also
Artificial cell
Biotechnology
Blood substitute
Oxycyte
Synthetic biology
Document 3:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, double stranded macromolecule that carries the hereditary information of the cell and found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 4:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is needed to provide cells with the oxygen they need for cellular respiration?
A. photosynthesis
B. vascular tissue
C. gas exchange
D. passive transport
Answer:
|
|
sciq-1626
|
multiple_choice
|
As pH increases, what happens to a solution?
|
[
"stays the same",
"becomes less basic",
"depends on the solution",
"becomes more basic"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
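The sample thermodynamics question above can also be checked quantitatively: for a reversible adiabatic process of an ideal gas, T·V^(γ−1) is constant, so expansion lowers the temperature. A minimal sketch (the 300 K starting temperature and volume doubling are illustrative numbers):

```python
def adiabatic_temperature(T1, V1, V2, gamma=1.4):
    """Reversible adiabatic process of an ideal gas: T * V**(gamma - 1) = const."""
    return T1 * (V1 / V2) ** (gamma - 1)

# Doubling the volume of a diatomic ideal gas (gamma = 1.4) starting at 300 K:
T2 = adiabatic_temperature(300.0, 1.0, 2.0)
print(f"{T2:.1f} K")  # about 227 K -- the gas cools, so "decreases" is correct
```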
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam and there were no computer-based versions of it. ETS administered this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required this exam score as a part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99th percentile) and 320 (1st percentile), respectively. The mean score for all test takers from July 2009 to July 2012 was 526, with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 2:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover )
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, )
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson )
See also
Glossary of biology
A.P Bio (TV Show)
Document 3:::
Advanced Placement (AP) Physics B was a physics course administered by the College Board as part of its Advanced Placement program. It was equivalent to a year-long introductory university course covering Newtonian mechanics, electromagnetism, fluid mechanics, thermal physics, waves, optics, and modern physics. The course was algebra-based and heavily computational; in 2015, it was replaced by the more concept-focused AP Physics 1 and AP Physics 2.
Exam
The exam consisted of a 70-question multiple-choice (MCQ) section, followed by a 6–7 question free-response (FRQ) section. Each section was 90 minutes long and was worth 50% of the final score. The MCQ section banned calculators, while the FRQ section allowed calculators and a list of common formulas. Overall, the exam was configured to approximately cover a set percentage of each of the five target categories:
Purpose
According to the College Board web site, the Physics B course provided "a foundation in physics for students in the life sciences, a pre medical career path, and some applied sciences, as well as other fields not directly related to science."
Discontinuation
Starting in the 2014–2015 school year, AP Physics B was no longer offered, and AP Physics 1 and AP Physics 2 took its place. Like AP Physics B, both are algebra-based, and both are designed to be taught as year-long courses.
Grade distribution
The grade distributions for the Physics B scores from 2010 until its discontinuation in 2014 are as follows:
Document 4:::
Progress tests are longitudinal, feedback-oriented educational assessment tools for evaluating the development and sustainability of cognitive knowledge during a learning process. A progress test is a written knowledge exam (usually involving multiple choice questions) that is usually administered to all students in a program at the same time and at regular intervals (usually twice to four times yearly) throughout the entire academic program. The test samples the complete knowledge domain expected of new graduates upon completion of their courses, regardless of the year level of the student. The differences between students' knowledge levels show in the test scores; the further a student has progressed in the curriculum, the higher the scores. As a result, these scores provide a longitudinal, repeated-measures, curriculum-independent assessment of the objectives (in knowledge) of the entire programme.
History
Since its inception in the late 1970s at both Maastricht University and the University of Missouri–Kansas City independently, the progress test of applied knowledge has been increasingly used in medical and health sciences programs across the globe. They are well established and increasingly used in medical education in both undergraduate and postgraduate medical education. They are used formatively and summatively.
Use in academic programs
The progress test is currently used by national progress test consortia in the United Kingdom, Italy, The Netherlands, and Germany (including Austria), and in individual schools in Africa, Saudi Arabia, South East Asia, the Caribbean, Australia, New Zealand, Sweden, Finland, the UK, and the USA. The National Board of Medical Examiners in the USA also provides progress testing in various countries. The feasibility of an international approach to progress testing has recently been acknowledged and was first demonstrated by Albano et al. in 1996, who compared test scores across German, Dutch and Italian medi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
As pH increases, what happens to a solution?
A. stays the same
B. becomes less basic
C. depends on the solution
D. becomes more basic
Answer:
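The answer follows directly from the definition pH = −log10[H+]: a higher pH means a lower hydrogen-ion concentration, i.e. a more basic solution. A minimal illustration (concentrations in mol/L are illustrative):

```python
import math

def pH(h_conc):
    """pH = -log10 of the hydrogen-ion concentration in mol/L."""
    return -math.log10(h_conc)

# As [H+] falls, pH rises and the solution becomes more basic.
assert pH(1e-3) < pH(1e-7) < pH(1e-11)  # acidic < neutral < basic
print(f"{pH(1e-7):.1f}")  # 7.0 -- neutral water
```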
|
|
sciq-5020
|
multiple_choice
|
Cells that are divided by oncogenes contain damaged what?
|
[
"cells",
"atoms",
"dna",
"bacteria"
] |
C
|
Relevant Documents:
Document 0:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, double stranded macromolecule that carries the hereditary information of the cell and found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 1:::
Stem cell markers are genes and their protein products used by scientists to isolate and identify stem cells. Stem cells can also be identified by functional assays. Below is a list of genes/protein products that can be used to identify various types of stem cells, or functional assays that do the same. The initial version of the list below was obtained by mining the PubMed database as described in
Stem cell marker names
Document 2:::
Adult stem cells are undifferentiated cells, found throughout the body after development, that multiply by cell division to replenish dying cells and regenerate damaged tissues. Also known as somatic stem cells (from Greek σωματικóς, meaning of the body), they can be found in juvenile, adult animals, and humans, unlike embryonic stem cells.
Scientific interest in adult stem cells is centered around two main characteristics. The first of which is their ability to divide or self-renew indefinitely, and the second their ability to generate all the cell types of the organ from which they originate, potentially regenerating the entire organ from a few cells. Unlike embryonic stem cells, the use of human adult stem cells in research and therapy is not considered to be controversial, as they are derived from adult tissue samples rather than human embryos designated for scientific research. The main functions of adult stem cells are to replace cells that are at risk of possibly dying as a result of disease or injury and to maintain a state of homeostasis within the cell. There are three main methods to determine if the adult stem cell is capable of becoming a specialized cell. The adult stem cell can be labeled in vivo and tracked, it can be isolated and then transplanted back into the organism, and it can be isolated in vivo and manipulated with growth hormones. They have mainly been studied in humans and model organisms such as mice and rats.
Structure
Defining properties
A stem cell possesses two properties:
Self-renewal is the ability to go through numerous cycles of cell division while still maintaining its undifferentiated state. Stem cells can replicate several times and can result in the formation of two stem cells, one stem cell more differentiated than the other, or two differentiated cells.
Multipotency or multidifferentiative potential is the ability to generate progeny of several distinct cell types, (for example glial cells and neurons) as opposed to u
Document 3:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specified function and carry out various tasks within the cell such as replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and mobility within the cell.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i
Document 4:::
Cytochemistry is the branch of cell biology dealing with the detection of cell constituents by means of biochemical analysis and visualization techniques. This is the study of the localization of cellular components through the use of staining methods. The term is also used to describe a process of identification of the biochemical content of cells. Cytochemistry is a science of localizing chemical components of cells and cell organelles on thin histological sections by using several techniques like enzyme localization, micro-incineration, micro-spectrophotometry, radioautography, cryo-electron microscopy, X-ray microanalysis by energy-dispersive X-ray spectroscopy, immunohistochemistry and cytochemistry, etc.
Freeze Fracture Enzyme Cytochemistry
Freeze fracture enzyme cytochemistry was first described in the work of Pinto de Silva in 1987. It is a technique that introduces cytochemistry into a freeze-fractured cell membrane. Immunocytochemistry is used in this technique to label and visualize the membrane's molecules, which makes it useful for analyzing the ultrastructure of cell membranes. By combining immunocytochemistry with the freeze fracture enzyme technique, researchers can identify and better understand the structure and distribution of membrane components.
Origin
Jean Brachet's research in Brussels, which demonstrated the localization and relative abundance of RNA and DNA in the cells of both animals and plants, opened the door to cytochemical research. Work by Moller and Holter in 1976 on endocytosis, which discussed the relationship between a cell's structure and its function, further established the need for cytochemical research.
Aims
Cytochemical research aims to study individual cells, which may comprise several cell types within a tissue. It takes a nondestructive approach to studying localization within the cell. By keeping the cell components intact, researchers are able to study the intact cell activ
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Cells that are divided by oncogenes contain damaged what?
A. cells
B. atoms
C. dna
D. bacteria
Answer:
|
|
ai2_arc-295
|
multiple_choice
|
When a compression wave travels through a medium, in what direction is the medium displaced?
|
[
"upward",
"downward",
"in the same direction",
"in the opposite direction"
] |
C
|
Relevant Documents:
Document 0:::
In physics, a mechanical wave is a wave that is an oscillation of matter, and therefore transfers energy through a medium. While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial equilibrium position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves. Some of the most common examples of mechanical waves are water waves, sound waves, and seismic waves.
Like all waves, mechanical waves transport energy. This energy propagates in the same direction as the wave. A wave requires an initial energy input; once this initial energy is added, the wave travels through the medium until all its energy is transferred. In contrast, electromagnetic waves require no medium, but can still travel through one.
Transverse wave
A transverse wave is the form of a wave in which particles of medium vibrate about their mean position perpendicular to the direction of the motion of the wave.
To see an example, move one end of a Slinky (whose other end is fixed) from side to side, as opposed to to-and-fro along its length. Light also has properties of a transverse wave, although it is an electromagnetic wave.
Longitudinal wave
Longitudinal waves cause the medium to vibrate parallel to the direction of the wave. They consist of multiple compressions and rarefactions: in a rarefaction the particles of the medium are spread farthest apart, and in a compression they are pressed closest together. The speed of the longitudinal wave is increased in a medium with a higher index of refraction, due to the closer proximity of the atoms in the medium being compressed. Sound is a longitudinal wave.
Surface waves
This type of wave travels along the surface or interface between two media. An example of a surface wave would be waves in a pool, or in an ocean
Document 1:::
In physics, a transverse wave is a wave that oscillates perpendicularly to the direction of the wave's advance. In contrast, a longitudinal wave travels in the direction of its oscillations. All waves move energy from place to place without transporting the matter in the transmission medium if there is one. Electromagnetic waves are transverse without requiring a medium. The designation “transverse” indicates the direction of the wave is perpendicular to the displacement of the particles of the medium through which it passes, or in the case of EM waves, the oscillation is perpendicular to the direction of the wave.
A simple example is given by the waves that can be created on a horizontal length of string by anchoring one end and moving the other end up and down. Another example is the waves that are created on the membrane of a drum. The waves propagate in directions that are parallel to the membrane plane, but each point in the membrane itself gets displaced up and down, perpendicular to that plane. Light is another example of a transverse wave, where the oscillations are the electric and magnetic fields, which point at right angles to the ideal light rays that describe the direction of propagation.
Transverse waves commonly occur in elastic solids due to the shear stress generated; the oscillations in this case are the displacement of the solid particles away from their relaxed position, in directions perpendicular to the propagation of the wave. These displacements correspond to a local shear deformation of the material. Hence a transverse wave of this nature is called a shear wave. Since fluids cannot resist shear forces while at rest, propagation of transverse waves inside the bulk of fluids is not possible. In seismology, shear waves are also called secondary waves or S-waves.
Transverse waves are contrasted with longitudinal waves, where the oscillations occur in the direction of the wave. The standard example of a longitudinal wave is a sound wave or "
Document 2:::
Particle displacement or displacement amplitude is a measurement of the distance a sound particle moves from its equilibrium position in a medium as it transmits a sound wave.
The SI unit of particle displacement is the metre (m). In most cases this is a longitudinal wave of pressure (such as sound), but it can also be a transverse wave, such as the vibration of a taut string. In the case of a sound wave travelling through air, the particle displacement is evident in the oscillations of air molecules with, and against, the direction in which the sound wave is travelling.
A particle of the medium undergoes displacement according to the particle velocity of the sound wave traveling through the medium, while the sound wave itself moves at the speed of sound, equal to about 343 m/s in air at 20 °C.
Mathematical definition
Particle displacement, denoted δ, is given by
δ = ∫ v dt
where v is the particle velocity.
Progressive sine waves
The particle displacement of a progressive sine wave is given by
δ(r, t) = δ_m cos(k · r − ω t + φ_δ)
where
δ_m is the amplitude of the particle displacement;
φ_δ is the phase shift of the particle displacement;
k is the angular wavevector;
ω is the angular frequency.
It follows that the particle velocity and the sound pressure along the direction of propagation of the sound wave x are given by
v(r, t) = v_m cos(k · r − ω t + φ_v)
p(r, t) = p_m cos(k · r − ω t + φ_p)
where
v_m is the amplitude of the particle velocity;
φ_v is the phase shift of the particle velocity;
p_m is the amplitude of the acoustic pressure;
φ_p is the phase shift of the acoustic pressure.
Taking the Laplace transforms of v and p with respect to time yields
Since , the amplitude of the specific acoustic impedance is given by
Consequently, the amplitude of the particle displacement is related to those of the particle velocity and the sound pressure by
δ_m = v_m / ω = p_m / (ω z_m)
where z_m is the amplitude of the specific acoustic impedance.
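The sinusoidal relations above can be sketched numerically. This is a minimal illustration with hypothetical parameters (a 1 kHz plane wave in air, 10 nm displacement amplitude; none of these values come from the text), checking that the particle-velocity amplitude equals ω times the displacement amplitude:

```python
import math

# Hypothetical parameters (not from the text): a 1 kHz plane wave in air.
f = 1000.0                # frequency, Hz
c = 343.0                 # speed of sound in air at 20 degrees C, m/s
omega = 2 * math.pi * f   # angular frequency, rad/s
k = omega / c             # angular wavenumber, rad/m
delta_m = 1e-8            # displacement amplitude, m (10 nm)
x = 0.0                   # fixed observation point, m

def displacement(t):
    # delta(x, t) = delta_m * cos(k*x - omega*t)
    return delta_m * math.cos(k * x - omega * t)

def velocity(t):
    # v = d(delta)/dt = delta_m * omega * sin(k*x - omega*t)
    return delta_m * omega * math.sin(k * x - omega * t)

# Sample two periods and verify the amplitude relation v_m = omega * delta_m.
ts = [i * (2.0 / f) / 1000.0 for i in range(1001)]
v_max = max(abs(velocity(t)) for t in ts)
print(abs(v_max - omega * delta_m) / (omega * delta_m) < 1e-3)  # True
```

Differentiating the cosine displacement by hand gives the sine velocity used here, which is why the sampled velocity peak lands on ω δ_m to within the sampling resolution.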
See also
Sound
Sound particle
Particle velocity
Particle acceleration
Document 3:::
Longitudinal waves are waves in which the vibration of the medium is parallel to the direction the wave travels and displacement of the medium is in the same (or opposite) direction of the wave propagation. Mechanical longitudinal waves are also called compressional or compression waves, because they produce compression and rarefaction when traveling through a medium, and pressure waves, because they produce increases and decreases in pressure. A wave along the length of a stretched Slinky toy, where the distance between coils increases and decreases, is a good visualization. Real-world examples include sound waves (vibrations in pressure, a particle of displacement, and particle velocity propagated in an elastic medium) and seismic P-waves (created by earthquakes and explosions).
The other main type of wave is the transverse wave, in which the displacements of the medium are at right angles to the direction of propagation. Transverse waves, for instance, describe some bulk sound waves in solid materials (but not in fluids); these are also called "shear waves" to differentiate them from the (longitudinal) pressure waves that these materials also support.
Nomenclature
"Longitudinal waves" and "transverse waves" have been abbreviated by some authors as "L-waves" and "T-waves", respectively, for their own convenience. While these two abbreviations have specific meanings in seismology (L-wave for Love wave or long wave) and electrocardiography (see T wave), some authors chose to use "l-waves" (lowercase 'L') and "t-waves" instead, although they are not commonly found in physics writings except for some popular science books.
Sound waves
In the case of longitudinal harmonic sound waves, the frequency and wavelength can be described by the formula
y(x, t) = y0 cos(ω(t − x/c))
where:
y is the displacement of the point on the traveling sound wave;
x is the distance from the point to the wave's source;
t is the time elapsed;
y0 is the amplitude of the oscillations,
c is the speed of the wave;
Document 4:::
In a compressible sound transmission medium (mainly air), air particles undergo accelerated motion: the particle acceleration or sound acceleration, with the symbol a, measured in m/s². In acoustics or physics, acceleration (symbol: a) is defined as the rate of change (or time derivative) of velocity. It is thus a vector quantity with dimension length/time². In SI units, this is m/s².
To accelerate an object (air particle) is to change its velocity over a period. Acceleration is defined technically as "the rate of change of velocity of an object with respect to time" and is given by the equation
a = dv/dt
where
a is the acceleration vector
v is the velocity vector expressed in m/s
t is time expressed in seconds.
This equation gives a the units of m/(s·s), or m/s2 (read as "metres per second per second", or "metres per second squared").
An alternative equation is:
a_avg = (v_f − v_i) / Δt
where
a_avg is the average acceleration (m/s²)
v_i is the initial velocity (m/s)
v_f is the final velocity (m/s)
Δt is the time interval (s)
Transverse acceleration (perpendicular to velocity) causes change in direction. If it is constant in magnitude and changing in direction with the velocity, we get a circular motion. For this centripetal acceleration we have
a = v² / r
One common unit of acceleration is g-force, one g being the acceleration caused by the gravity of Earth.
In classical mechanics, acceleration is related to force and mass (assumed to be constant) by way of Newton's second law:
F = m a
Equations in terms of other measurements
The particle acceleration a, in m/s², of the air particles in a plane sound wave is
a = ∂v/∂t; for a sinusoidal wave the amplitudes satisfy a = ω v = ω² δ.
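The rate-of-change definition of acceleration can be approximated from sampled velocities. This is a minimal sketch using a central finite difference on a hypothetical, uniformly accelerated velocity profile (the samples are illustrative, not taken from any source data):

```python
# Hypothetical velocity samples (v(t) = 2t m/s), not taken from the text;
# the central difference approximates a = dv/dt at interior sample points.
def central_accel(v, dt):
    """Central-difference estimate of acceleration at interior samples."""
    return [(v[i + 1] - v[i - 1]) / (2.0 * dt) for i in range(1, len(v) - 1)]

dt = 0.1                                  # sample spacing, s
v = [2.0 * (i * dt) for i in range(6)]    # uniformly accelerated: a = 2 m/s^2
a = central_accel(v, dt)
print([round(ai, 9) for ai in a])  # [2.0, 2.0, 2.0, 2.0]
```

The central difference is exact for a linear velocity profile, so every interior estimate recovers the constant acceleration of 2 m/s².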
See also
Sound
Sound particle
Particle displacement
Particle velocity
External links
Relationships of acoustic quantities associated with a plane progressive acoustic sound wave - pdf
Acoustics
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When a compression wave travels through a medium, in what direction is the medium displaced?
A. upward
B. downward
C. in the same direction
D. in the opposite direction
Answer:
|
|
sciq-7266
|
multiple_choice
|
Catabolism and anabolism are the two types of what?
|
[
"calcium",
"cells",
"metabolism",
"heart rate"
] |
C
|
Relevant Documents:
Document 0:::
This is a list of articles that describe particular biomolecules or types of biomolecules.
A
For substances with an A- or α- prefix such as α-amylase, please see the parent page (in this case Amylase).
A23187 (Calcimycin, Calcium Ionophore)
Abamectine
Abietic acid
Acetic acid
Acetylcholine
Actin
Actinomycin D
Adenine
Adenosine
Adenosine diphosphate (ADP)
Adenosine monophosphate (AMP)
Adenosine triphosphate (ATP)
Adenylate cyclase
Adiponectin
Adonitol
Adrenaline, epinephrine
Adrenocorticotropic hormone (ACTH)
Aequorin
Aflatoxin
Agar
Alamethicin
Alanine
Albumins
Aldosterone
Aleurone
Alpha-amanitin
Alpha-MSH (Melaninocyte stimulating hormone)
Allantoin
Allethrin
α-Amanatin, see Alpha-amanitin
Amino acid
Amylase (also see α-amylase)
Anabolic steroid
Anandamide (ANA)
Androgen
Anethole
Angiotensinogen
Anisomycin
Antidiuretic hormone (ADH)
Anti-Müllerian hormone (AMH)
Arabinose
Arginine
Argonaute
Ascomycin
Ascorbic acid (vitamin C)
Asparagine
Aspartic acid
Asymmetric dimethylarginine
ATP synthase
Atrial-natriuretic peptide (ANP)
Auxin
Avidin
Azadirachtin A – C35H44O16
B
Bacteriocin
Beauvericin
beta-Hydroxy beta-methylbutyric acid
beta-Hydroxybutyric acid
Bicuculline
Bilirubin
Biopolymer
Biotin (Vitamin H)
Brefeldin A
Brassinolide
Brucine
Butyric acid
C
Document 1:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered it three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required the score as part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions on the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
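The reported mean and standard deviation can be sanity-checked against the quoted extreme percentiles under a normal approximation. This is illustrative only: scaled-score distributions need not be normal, and the normality assumption is ours, not the source's.

```python
from statistics import NormalDist

# Normal approximation (an assumption; scaled scores need not be normal)
# using the reported mean 526 and standard deviation 95.
scores = NormalDist(mu=526, sigma=95)

print(round(scores.cdf(760), 3))  # ~0.993: roughly consistent with 760 ~ 99th pct
print(round(scores.cdf(320), 3))  # ~0.015: roughly consistent with 320 ~ 1st pct
```

The rough agreement (0.993 vs. the 99th percentile, 0.015 vs. the 1st) suggests the quoted summary statistics and percentile endpoints are mutually consistent.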
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 2:::
The metabolome refers to the complete set of small-molecule chemicals found within a biological sample. The biological sample can be a cell, a cellular organelle, an organ, a tissue, a tissue extract, a biofluid or an entire organism. The small molecule chemicals found in a given metabolome may include both endogenous metabolites that are naturally produced by an organism (such as amino acids, organic acids, nucleic acids, fatty acids, amines, sugars, vitamins, co-factors, pigments, antibiotics, etc.) as well as exogenous chemicals (such as drugs, environmental contaminants, food additives, toxins and other xenobiotics) that are not naturally produced by an organism.
In other words, there is both an endogenous metabolome and an exogenous metabolome. The endogenous metabolome can be further subdivided to include a "primary" and a "secondary" metabolome (particularly when referring to plant or microbial metabolomes). A primary metabolite is directly involved in the normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has important ecological function. Secondary metabolites may include pigments, antibiotics or waste products derived from partially metabolized xenobiotics. The study of the metabolome is called metabolomics.
Origins
The word metabolome appears to be a blending of the words "metabolite" and "chromosome". It was constructed to imply that metabolites are indirectly encoded by genes or act on genes and gene products. The term "metabolome" was first used in 1998 and was likely coined to match with existing biological terms referring to the complete set of genes (the genome), the complete set of proteins (the proteome) and the complete set of transcripts (the transcriptome). The first book on metabolomics was published in 2003. The first journal dedicated to metabolomics (titled simply "Metabolomics") was launched in 2005 and is currently edited by Prof. Roy Goodacre. Some of the m
Document 3:::
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles a
Document 4:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind: neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, it has provided information on certain diseases which has overall aided in the understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Catabolism and anabolism are the two types of what?
A. calcium
B. cells
C. metabolism
D. heart rate
Answer:
|
|
sciq-8500
|
multiple_choice
|
Because plants are relatively immobile, they can function with bulky energy storage in the form of what?
|
[
"starch",
"dioxide",
"fat",
"protein"
] |
A
|
Relevant Documents:
Document 0:::
Ergastic substances are non-protoplasmic materials found in cells. The living protoplasm of a cell is sometimes called the bioplasm and distinct from the ergastic substances of the cell. The latter are usually organic or inorganic substances that are products of metabolism, and include crystals, oil drops, gums, tannins, resins and other compounds that can aid the organism in defense, maintenance of cellular structure, or just substance storage. Ergastic substances may appear in the protoplasm, in vacuoles, or in the cell wall.
Carbohydrates
Reserve carbohydrate of plants are the derivatives of the end products of photosynthesis. Cellulose and starch are the main ergastic substances of plant cells. Cellulose is the chief component of the cell wall, and starch occurs as a reserve material in the protoplasm.
Starch, as starch grains, arise almost exclusively in plastids, especially leucoplasts and amyloplasts.
Proteins
Although proteins are the main component of living protoplasm, proteins can occur as inactive, ergastic bodies—in an amorphous or crystalline (or crystalloid) form. A well-known amorphous ergastic protein is gluten.
Fats and oils
Fats (lipids) and oils are widely distributed in plant tissues. Substances related to fats—waxes, suberin, and cutin—occur as protective layers in or on the cell wall.
Crystals
Animals eliminate excess inorganic materials; plants mostly deposit such material in their tissues. Such mineral matter is mostly salts of calcium and anhydrides of silica.
Raphides are a type of elongated crystalline form of calcium oxalate aggregated in bundles within a plant cell. Because of the needle-like form, large numbers in the tissue of, say, a leaf can render the leaf unpalatable to herbivores (see Dieffenbachia and taro).
Druse
Cystolith
Document 1:::
Maintenance respiration (or maintenance energy) refers to metabolism occurring in an organism that is needed to maintain that organism in a healthy, living state. Maintenance respiration contrasts with growth respiration, which is responsible for the synthesis of new structures in growth, nutrient uptake, nitrogen (N) reduction and phloem loading, whereas maintenance respiration is associated with protein and membrane turnover and maintenance of ion concentrations and gradients.
In plants
Maintenance respiration in plants refers to the amount of cellular respiration, measured by the carbon dioxide (CO2) released or oxygen (O2) consumed, during the generation of usable energy (mainly ATP, NADPH, and NADH) and metabolic intermediates used for (i) resynthesis of compounds that undergo renewal (turnover) in the normal process of metabolism (examples are enzymatic proteins, ribonucleic acids, and membrane lipids); (ii) maintenance of chemical gradients of ions and metabolites across cellular membranes that are necessary for cellular integrity and plant health; and (iii) operation of metabolic processes involved in physiological adjustment (i.e., acclimation) to a change in the plant's environment. The metabolic costs of the repair of injury from biotic or abiotic stress may also be considered a part of maintenance respiration.
Maintenance respiration is essential for biological health and growth of plants. It is estimated that about half of the respiration carried out by terrestrial plants during their lifetime is for the support of maintenance processes. Because typically more than half of global terrestrial plant photosynthesis (or gross primary production) is used for plant respiration, more than one quarter of global terrestrial plant photosynthesis is presumably consumed in maintenance respiration.
Maintenance respiration is a key component of most physiologically based mathematical models of plant growth, including models of crop growth and yield and models of
Document 2:::
The Vegetable Production System (Veggie) is a plant growth system developed and used by NASA in outer space environments. The purpose of Veggie is to provide a self-sufficient and sustainable food source for astronauts as well as a means of recreation and relaxation through therapeutic gardening. Veggie was designed in conjunction with ORBITEC and is currently being used aboard the International Space Station, with another Veggie module planned to be delivered to the ISS in 2017.
Overview
Veggie is part of an overarching project concerning research on growing crops in zero gravity. Among the goals of this project are to learn how plants grow in a weightless environment and how plants can efficiently be grown for crew use in space. Veggie was designed to be low maintenance, using low power and having a low launch mass. Thus, Veggie provides a loosely regulated environment with minimal control over the atmosphere and temperature of the module. The successor to the Veggie project is the Advanced Plant Habitat (APH), components of which will be delivered to the International Space Station during the Cygnus CRS OA-7 and SpaceX CRS-11 missions in 2017.
In 2018 the Veggie-3 experiment was tested with plant pillows and root mats. One of the goals is to grow food for crew consumption. Crops tested at this time include cabbage, lettuce, and mizuna.
Design
A Veggie module weighs less than and uses 90 watts. It consists of three parts: a lighting system, a bellows enclosure, and a reservoir. The lighting system regulates the amount and intensity of light plants receive, the bellows enclosure keeps the environment inside the unit separate from its surroundings, and the reservoir connects to plant pillows where the seeds grow.
Lighting system
Veggie's lighting system consists of three different types of colored LEDs: red, blue, and green. Each color corresponds to a different light intensity that the plants will receive. Although the lighting syst
Document 3:::
C3 carbon fixation is the most common of three metabolic pathways for carbon fixation in photosynthesis, the other two being C4 and CAM. This process converts carbon dioxide and ribulose bisphosphate (RuBP, a 5-carbon sugar) into two molecules of 3-phosphoglycerate through the following reaction:
CO2 + H2O + RuBP → (2) 3-phosphoglycerate
This reaction was first discovered by Melvin Calvin, Andrew Benson and James Bassham in 1950. C3 carbon fixation occurs in all plants as the first step of the Calvin–Benson cycle. (In C4 and CAM plants, carbon dioxide is drawn out of malate and into this reaction rather than directly from the air.)
Plants that survive solely on C3 fixation (C3 plants) tend to thrive in areas where sunlight intensity is moderate, temperatures are moderate, carbon dioxide concentrations are around 200 ppm or higher, and groundwater is plentiful. The C3 plants, originating during the Mesozoic and Paleozoic eras, predate the C4 plants and still represent approximately 95% of Earth's plant biomass, including important food crops such as rice, wheat, soybeans and barley.
C3 plants cannot grow in very hot areas at today's atmospheric CO2 level (significantly depleted during hundreds of millions of years from above 5000 ppm) because RuBisCO incorporates more oxygen into RuBP as temperatures increase. This leads to photorespiration (also known as the oxidative photosynthetic carbon cycle, or C2 photosynthesis), which leads to a net loss of carbon and nitrogen from the plant and can therefore limit growth.
C3 plants lose up to 97% of the water taken up through their roots by transpiration. In dry areas, C3 plants shut their stomata to reduce water loss, but this stops CO2 from entering the leaves and therefore reduces the concentration of CO2 in the leaves. This lowers the CO2:O2 ratio and therefore also increases photorespiration. C4 and CAM plants have adaptations that allow them to survive in hot and dry areas, and they can therefore out-compete
Document 4:::
Excretion is a process in which metabolic waste is eliminated from an organism. In vertebrates this is primarily carried out by the lungs, kidneys, and skin. This is in contrast with secretion, where the substance may have specific tasks after leaving the cell. Excretion is an essential process in all forms of life. For example, in mammals, urine is expelled through the urethra, which is part of the excretory system. In unicellular organisms, waste products are discharged directly through the surface of the cell.
During life activities such as cellular respiration, several chemical reactions take place in the body. These are known as metabolism. These chemical reactions produce waste products such as carbon dioxide, water, salts, urea and uric acid. Accumulation of these wastes beyond a certain level inside the body is harmful. The excretory organs remove these wastes. This process of removal of metabolic waste from the body is known as excretion.
Green plants excrete carbon dioxide and water as respiratory products. In green plants, the carbon dioxide released during respiration gets used during photosynthesis. Oxygen is a by product generated during photosynthesis, and exits through stomata, root cell walls, and other routes. Plants can get rid of excess water by transpiration and guttation. It has been shown that the leaf acts as an 'excretophore' and, in addition to being a primary organ of photosynthesis, is also used as a method of excreting toxic wastes via diffusion. Other waste materials that are exuded by some plants — resin, saps, latex, etc. are forced from the interior of the plant by hydrostatic pressures inside the plant and by absorptive forces of plant cells. These latter processes do not need added energy, they act passively. However, during the pre-abscission phase, the metabolic levels of a leaf are high. Plants also excrete some waste substances into the soil around them.
In animals, the main excretory products are carbon dioxide, ammoni
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Because plants are relatively immobile, they can function with bulky energy storage in the form of what?
A. starch
B. dioxide
C. fat
D. protein
Answer:
|
|
sciq-9746
|
multiple_choice
|
What is a mixture of eroded rock, minerals, partly decomposed organic matter, and other materials called?
|
[
"loam",
"sediment",
"sand",
"soil"
] |
D
|
Relevant Documents:
Document 0:::
In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects.
Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting.
Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete.
Study
Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the
Document 1:::
Debris (, ) is rubble, wreckage, ruins, litter and discarded garbage/refuse/trash, scattered remains of something destroyed, or, as in geology, large rock fragments left by a melting glacier, etc. Depending on context, debris can refer to a number of different things. The first apparent use of the French word in English is in a 1701 description of the army of Prince Rupert upon its retreat from a battle with the army of Oliver Cromwell, in England.
Disaster
In disaster scenarios, tornadoes leave behind large pieces of houses and mass destruction overall. This debris also flies around the tornado itself while it is in progress. The tornado's winds capture debris in their orbit and spin it inside the vortex; the tornado's wind radius is larger than the funnel itself. Tsunamis and hurricanes also bring large amounts of debris, such as Hurricane Katrina in 2005 and Hurricane Sandy in 2012. Earthquakes can reduce cities to rubble and debris.
Geological
In geology, debris usually applies to the remains of geological activity including landslides, volcanic explosions, avalanches, mudflows or Glacial lake outburst floods (Jökulhlaups) and moraine, lahars, and lava eruptions. Geological debris sometimes moves in a stream called a debris flow. When it accumulates at the base of hillsides, it can be called "talus" or "scree".
In mining, debris called attle usually consists of rock fragments which contain little or no ore.
Marine
Marine debris applies to floating garbage such as bottles, cans, styrofoam, cruise ship waste, offshore oil and gas exploration and production facilities pollution, and fishing paraphernalia from professional and recreational boaters. Marine debris is also called litter or flotsam and jetsam. Objects that can constitute marine debris include used automobile tires, detergent bottles, medical wastes, discarded fishing line and nets, soda cans, and bilge waste solids.
In addition to being unsightly, it can pose a serious threat to marine lif
Document 2:::
USDA soil taxonomy (ST) developed by the United States Department of Agriculture and the National Cooperative Soil Survey provides an elaborate classification of soil types according to several parameters (most commonly their properties) and in several levels: Order, Suborder, Great Group, Subgroup, Family, and Series. The classification was originally developed by Guy Donald Smith, former director of the U.S. Department of Agriculture's soil survey investigations.
Discussion
A taxonomy is an arrangement in a systematic manner; the USDA soil taxonomy has six levels of classification. They are, from most general to specific: order, suborder, great group, subgroup, family and series. Soil properties that can be measured quantitatively are used in this classification system – they include: depth, moisture, temperature, texture, structure, cation exchange capacity, base saturation, clay mineralogy, organic matter content and salt content. There are 12 soil orders (the top hierarchical level) in soil taxonomy. The names of the orders end with the suffix -sol. The criteria for the different soil orders include properties that reflect major differences in the genesis of soils. The orders are:
Alfisol – soils with aluminium and iron. They have horizons of clay accumulation, and form where there is enough moisture and warmth for at least three months of plant growth. They constitute 10% of soils worldwide.
Andisol – volcanic ash soils. They are young soils. They cover 1% of the world's ice-free surface.
Aridisol – dry soils forming under desert conditions which have fewer than 90 consecutive days of moisture during the growing season and are nonleached. They include nearly 12% of soils on Earth. Soil formation is slow, and accumulated organic matter is scarce. They may have subsurface zones of caliche or duripan. Many aridisols have well-developed Bt horizons showing clay movement from past periods of greater moisture.
Entisol – recently formed soils that lack well-d
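The top level of the hierarchy described above can be sketched as a small lookup table. This is an illustrative sketch, not an official API: the twelve order names are standard, the brief notes on the first four paraphrase the excerpt, and the remaining orders are listed by name only.

```python
# Sketch: the 12 orders at the top of the USDA soil taxonomy hierarchy
# (order > suborder > great group > subgroup > family > series).
SOIL_ORDERS = {
    "Alfisol": "horizons of clay accumulation; ~10% of soils worldwide",
    "Andisol": "young volcanic-ash soils; ~1% of the ice-free surface",
    "Aridisol": "dry desert soils; nearly 12% of soils on Earth",
    "Entisol": "recently formed soils lacking well-developed horizons",
    "Gelisol": "",
    "Histosol": "",
    "Inceptisol": "",
    "Mollisol": "",
    "Oxisol": "",
    "Spodosol": "",
    "Ultisol": "",
    "Vertisol": "",
}

# The names of the orders end with the suffix "-sol".
print(len(SOIL_ORDERS), all(name.endswith("sol") for name in SOIL_ORDERS))
# prints: 12 True
```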
Document 3:::
Soil classification deals with the systematic categorization of soils based on distinguishing characteristics as well as criteria that dictate choices in use.
Overview
Soil classification is a dynamic subject, from the structure of the system, to the definitions of classes, to the application in the field. Soil classification can be approached from the perspective of soil as a material and soil as a resource.
Inscriptions at the temple of Horus at Edfu outline a soil classification used by Tanen to determine what kind of temple to build at which site. Ancient Greek scholars produced a number of classification based on several different qualities of the soil.
Engineering
Geotechnical engineers classify soils according to their engineering properties as they relate to use for foundation support or building material. Modern engineering classification systems are designed to allow an easy transition from field observations to basic predictions of soil engineering properties and behaviors.
The most common engineering classification system for soils in North America is the Unified Soil Classification System (USCS). The USCS has three major classification groups: (1) coarse-grained soils (e.g. sands and gravels); (2) fine-grained soils (e.g. silts and clays); and (3) highly organic soils (referred to as "peat"). The USCS further subdivides the three major soil classes for clarification. It distinguishes sands from gravels by grain size, classifying some as "well-graded" and the rest as "poorly-graded". Silts and clays are distinguished by the soils' Atterberg limits, and thus the soils are separated into "high-plasticity" and "low-plasticity" soils. Moderately organic soils are considered subdivisions of silts and clays and are distinguished from inorganic soils by changes in their plasticity properties (and Atterberg limits) on drying. The European soil classification system (ISO 14688) is very similar, differing primarily in coding and in adding an "intermediate-p
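The three-way USCS split described above can be sketched as a simple classifier. The function name and argument handling are my own; a real USCS classification additionally uses the grain-size distribution (to separate sands from gravels and grade them) and Atterberg limits (to split silts and clays by plasticity).

```python
def uscs_major_group(percent_fines: float, highly_organic: bool = False) -> str:
    """Return the USCS major group for a soil sample (simplified sketch).

    percent_fines: percentage by mass passing the No. 200 sieve (0.075 mm),
    i.e. the silt-and-clay fraction.
    """
    if highly_organic:
        return "highly organic soils (peat)"
    if percent_fines >= 50:
        return "fine-grained soils (silts and clays)"
    return "coarse-grained soils (sands and gravels)"

print(uscs_major_group(12))  # coarse-grained soils (sands and gravels)
print(uscs_major_group(65))  # fine-grained soils (silts and clays)
```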
Document 4:::
Mass wasting, also known as mass movement, is a general term for the movement of rock or soil down slopes under the force of gravity. It differs from other processes of erosion in that the debris transported by mass wasting is not entrained in a moving medium, such as water, wind, or ice. Types of mass wasting include creep, solifluction, rockfalls, debris flows, and landslides, each with its own characteristic features, and taking place over timescales from seconds to hundreds of years. Mass wasting occurs on both terrestrial and submarine slopes, and has been observed on Earth, Mars, Venus, Jupiter's moon Io, and on many other bodies in the Solar System.
Subsidence is sometimes regarded as a form of mass wasting. A distinction is then made between mass wasting by subsidence, which involves little horizontal movement, and mass wasting by slope movement.
Rapid mass wasting events, such as landslides, can be deadly and destructive. More gradual mass wasting, such as soil creep, poses challenges to civil engineering, as creep can deform roadways and structures and break pipelines. Mitigation methods include slope stabilization, construction of walls, catchment dams, or other structures to contain rockfall or debris flows, afforestation, or improved drainage of source areas.
Types
Mass wasting is a general term for any process of erosion that is driven by gravity and in which the transported soil and rock is not entrained in a moving medium, such as water, wind, or ice. The presence of water usually aids mass wasting, but the water is not abundant enough to be regarded as a transporting medium. Thus, the distinction between mass wasting and stream erosion lies between a mudflow (mass wasting) and a very muddy stream (stream erosion), without a sharp dividing line. Many forms of mass wasting are recognized, each with its own characteristic features, and taking place over timescales from seconds to hundreds of years.
Based on how the soil, regolith or rock moves dow
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a mixture of eroded rock, minerals, partly decomposed organic matter, and other materials called?
A. loam
B. sediment
C. sand
D. soil
Answer:
|
|
sciq-9618
|
multiple_choice
|
What was the appendix used for in the past but is no longer needed for?
|
[
"digest food",
"produce food",
"sense danger",
"fight infection"
] |
A
|
Relevant Documents:
Document 0:::
The Joan Mott Prize Lecture is a prize lecture awarded annually by The Physiological Society in honour of Joan Mott.
Laureates
Laureates of the award have included:
- Intestinal absorption of sugars and peptides: from textbook to surprises
See also
Physiological Society Annual Review Prize Lecture
Document 1:::
Hindgut fermentation is a digestive process seen in monogastric herbivores, animals with a simple, single-chambered stomach. Cellulose is digested with the aid of symbiotic bacteria. The microbial fermentation occurs in the digestive organs that follow the small intestine: the large intestine and cecum. Examples of hindgut fermenters include proboscideans and large odd-toed ungulates such as horses and rhinos, as well as small animals such as rodents, rabbits and koalas. In contrast, foregut fermentation is the form of cellulose digestion seen in ruminants such as cattle which have a four-chambered stomach, as well as in sloths, macropodids, some monkeys, and one bird, the hoatzin.
Cecum
Hindgut fermenters generally have a cecum and large intestine that are much larger and more complex than those of a foregut or midgut fermenter. Research on small cecum fermenters such as flying squirrels, rabbits and lemurs has revealed these mammals to have a GI tract about 10-13 times the length of their body. This is due to the high intake of fiber and other hard to digest compounds that are characteristic to the diet of monogastric herbivores. Unlike in foregut fermenters, the cecum is located after the stomach and small intestine in monogastric animals, which limits the amount of further digestion or absorption that can occur after the food is fermented.
Large intestine
In smaller hindgut fermenters of the order Lagomorpha (rabbits, hares, and pikas), cecotropes formed in the cecum are passed through the large intestine and subsequently reingested to allow another opportunity to absorb nutrients. Cecotropes are surrounded by a layer of mucus which protects them from stomach acid but which does not inhibit nutrient absorption in the small intestine. Coprophagy is also practiced by some rodents, such as the capybara, guinea pig and related species, and by the marsupial common ringtail possum. This process is also beneficial in allowing for restoration of the microflora pop
Document 2:::
The internal anal sphincter (IAS, or sphincter ani internus) is a ring of smooth muscle that surrounds about 2.5–4.0 cm of the anal canal. It is about 5 mm thick, and is formed by an aggregation of the smooth (involuntary) circular muscle fibers of the rectum. It terminates distally about 6 mm from the anal orifice.
The internal anal sphincter aids the sphincter ani externus to occlude the anal aperture and aids in the expulsion of the feces. Its action is entirely involuntary. It is normally in a state of continuous maximal contraction to prevent leakage of faeces or gases. Sympathetic stimulation stimulates and maintains the sphincter's contraction, and parasympathetic stimulation inhibits it. It becomes relaxed in response to distention of the rectal ampulla, requiring voluntary contraction of the puborectalis and external anal sphincter to maintain continence.
Anatomy
The internal anal sphincter is the specialised thickened terminal portion of the inner circular layer of smooth muscle of the large intestine. It extends from the pectinate line (anorectal junction) proximally to just proximal to the anal orifice distally (the distal termination is palpable). Its muscle fibres are arranged in a spiral (rather than a circular) manner.
At its distal extremity, it is in contact with but separate from the external anal sphincter.
Innervation
The sphincter receives extrinsic autonomic innervation via the inferior hypogastric plexus, with sympathetic innervation derived from spinal levels L1-L2, and parasympathetic innervation derived from S2-S4.
The internal anal sphincter is not innervated by the pudendal nerve (which provides motor and sensory innervation to the external anal sphincter).
Function
The sphincter is contracted in its resting state, but reflexively relaxes in certain contexts (most notably during defecation).
Transient relaxation of its proximal portion occurs with rectal distension and post-prandial rectal contraction (the recto-anal inhibitory
Document 3:::
Progress tests are longitudinal, feedback-oriented educational assessment tools for the evaluation of development and sustainability of cognitive knowledge during a learning process. A progress test is a written knowledge exam (usually involving multiple choice questions) that is usually administered to all students in the program at the same time and at regular intervals (usually twice to four times yearly) throughout the entire academic program. The test samples the complete knowledge domain expected of new graduates upon completion of their courses, regardless of the year level of the student. The differences between students' knowledge levels show in the test scores; the further a student has progressed in the curriculum, the higher the scores. As a result, these scores provide a longitudinal, repeated-measures, curriculum-independent assessment of the objectives (in knowledge) of the entire programme.
History
Since its inception in the late 1970s at both Maastricht University and the University of Missouri–Kansas City independently, the progress test of applied knowledge has been increasingly used in medical and health sciences programs across the globe. They are well established and increasingly used in medical education in both undergraduate and postgraduate medical education. They are used formatively and summatively.
Use in academic programs
The progress test is currently used by national progress test consortia in the United Kingdom, Italy, the Netherlands, and Germany (including Austria), and in individual schools in Africa, Saudi Arabia, South East Asia, the Caribbean, Australia, New Zealand, Sweden, Finland, the UK, and the USA. The National Board of Medical Examiners in the USA also provides progress testing in various countries. The feasibility of an international approach to progress testing has been recently acknowledged and was first demonstrated by Albano et al. in 1996, who compared test scores across German, Dutch and Italian medi
Document 4:::
Little gastrin I is a form of gastrin commonly called as gastrin-17. This is a protein hormone, secreted by the intestine.
Gastrin II has identical amino acid composition to Gastrin I, the only difference is that the single tyrosine residue is sulfated in Gastrin II.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What was the appendix used for in the past but is no longer needed for?
A. digest food
B. produce food
C. sense danger
D. fight infection
Answer:
|
|
sciq-4043
|
multiple_choice
|
Observations suggest that a force applied to an object is always applied by what?
|
[
"itself",
"gravity",
"dark matter",
"another object"
] |
D
|
Relevant Documents:
Document 0:::
Applied physics is the application of physics to solve scientific or engineering problems. It is usually considered a bridge or a connection between physics and engineering.
"Applied" is distinguished from "pure" by a subtle combination of factors, such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. Applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of scientific principles in practical devices and systems and with the application of physics in other areas of science and high technology.
Examples of research and development areas
Accelerator physics
Acoustics
Atmospheric physics
Biophysics
Brain–computer interfacing
Chemistry
Chemical physics
Differentiable programming
Artificial intelligence
Scientific computing
Engineering physics
Chemical engineering
Electrical engineering
Electronics
Sensors
Transistors
Materials science and engineering
Metamaterials
Nanotechnology
Semiconductors
Thin films
Mechanical engineering
Aerospace engineering
Astrodynamics
Electromagnetic propulsion
Fluid mechanics
Military engineering
Lidar
Radar
Sonar
Stealth technology
Nuclear engineering
Fission reactors
Fusion reactors
Optical engineering
Photonics
Cavity optomechanics
Lasers
Photonic crystals
Geophysics
Materials physics
Medical physics
Health physics
Radiation dosimetry
Medical imaging
Magnetic resonance imaging
Radiation therapy
Microscopy
Scanning probe microscopy
Atomic force microscopy
Scanning tunneling microscopy
Scanning electron microscopy
Transmission electron microscopy
Nuclear physics
Fission
Fusion
Optical physics
Nonlinear optics
Quantum optics
Plasma physics
Quantum technology
Quantum computing
Quantum cryptography
Renewable energy
Space physics
Spectroscopy
See also
Applied science
Applied mathematics
Engineering
Engineering Physics
High Technology
Document 1:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 2:::
Physics education or physics teaching refers to the education methods currently used to teach physics. The occupation is called physics educator or physics teacher. Physics education research refers to an area of pedagogical research that seeks to improve those methods. Historically, physics has been taught at the high school and college level primarily by the lecture method together with laboratory exercises aimed at verifying concepts taught in the lectures. These concepts are better understood when lectures are accompanied by demonstrations, hands-on experiments, and questions that require students to ponder what will happen in an experiment and why. Students who participate in active learning, for example with hands-on experiments, learn through self-discovery. By trial and error they learn to change their preconceptions about phenomena in physics and discover the underlying concepts. Physics education is part of the broader area of science education.
Ancient Greece
Aristotle wrote what is considered now as the first textbook of physics. Aristotle's ideas were taught unchanged until the Late Middle Ages, when scientists started making discoveries that didn't fit them. For example, Copernicus' discovery contradicted Aristotle's idea of an Earth-centric universe. Aristotle's ideas about motion weren't displaced until the end of the 17th century, when Newton published his ideas.
Today's physics students often think of physics concepts in Aristotelian terms, despite being taught only Newtonian concepts.
Hong Kong
High schools
In Hong Kong, physics is a subject for public examination. Local students in Form 6 take the public exam of Hong Kong Diploma of Secondary Education (HKDSE).
Compared to other syllabi such as GCSE and GCE, which cover a wider and broader range of topics, the Hong Kong syllabus goes into greater depth and involves more challenging calculations. Topics are narrowed down to a smaller number compared to the A-level due to the insufficient teachi
Document 3:::
Physics education research (PER) is a form of discipline-based education research specifically related to the study of the teaching and learning of physics, often with the aim of improving the effectiveness of student learning. PER draws from other disciplines, such as sociology, cognitive science, education and linguistics, and complements them by reflecting the disciplinary knowledge and practices of physics. Approximately eighty-five institutions in the United States conduct research in science and physics education.
Goals
One primary goal of PER is to develop pedagogical techniques and strategies that will help students learn physics more effectively and help instructors to implement these techniques. Because even basic ideas in physics can be confusing, together with the possibility of scientific misconceptions formed from teaching through analogies, lecturing often does not erase common misconceptions about physics that students acquire before they are taught physics. Research often focuses on learning more about common misconceptions that students bring to the physics classroom so that techniques can be devised to help students overcome these misconceptions.
In most introductory physics courses, mechanics is usually the first area of physics that is taught. Newton's laws of motion about interactions between forces and objects are central to the study of mechanics. Many students hold the Aristotelian misconception that a net force is required to keep a body moving; instead, motion is modeled in modern physics with Newton's first law of inertia, stating that a body will keep its state of rest or movement unless a net force acts on the body. Like students who hold this misconception, Newton arrived at his three laws of motion through empirical analysis, although he did it with an extensive study of data that included astronomical observations. Students can erase such a misconception in a nearly frictionless environment, where they find that
Document 4:::
In physics, action at a distance is the concept that an object's motion can be affected by another object without being physically contacted (as in mechanical contact) by the other object. That is, it is the non-local interaction of objects that are separated in space. Coulomb's law and Newton's law of universal gravitation are based on action at a distance.
Historically, action at a distance was the earliest scientific model for gravity and electricity and it continues to be useful in many practical cases. In the 19th and 20th centuries, field models arose to explain these phenomena with more precision. The discovery of electrons and of special relativity led to new action-at-a-distance models providing alternatives to field theories.
Categories of action
In the study of mechanics, action at a distance is one of three fundamental actions on matter that cause motion. The other two are direct impact (elastic or inelastic collisions) and actions in a continuous medium as in fluid mechanics or solid mechanics.
Historically, physical explanations for particular phenomena have moved between these three categories over time as new models were developed.
Action at a distance and actions in a continuous medium may be easily distinguished when the medium dynamics are visible, like waves in water or in an elastic solid. In the case of electricity or gravity, there is no medium required. In the nineteenth century, criteria like the effect of actions on intervening matter, the observation of a time delay, the apparent storage of energy, or even the possibility of a plausible mechanical model for action transmission were all accepted as evidence against action at a distance. Aether theories were alternative proposals to replace apparent action-at-a-distance in gravity and electromagnetism, in terms of continuous action inside an (invisible) medium called "aether".
Roles
The concept of action at a distance acts in multiple roles in physics and it can co-exist with other mode
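As a concrete illustration of an action-at-a-distance law mentioned in the excerpt above, here is a minimal numerical sketch of Newton's law of universal gravitation. The function name is my own, and the masses and distance are rounded textbook values, not figures from the excerpt.

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r^2,
# an action-at-a-distance model of gravity.
G = 6.674e-11  # gravitational constant, N m^2 / kg^2

def gravitational_force(m1: float, m2: float, r: float) -> float:
    """Magnitude of the attractive force between two point masses, in newtons."""
    return G * m1 * m2 / r**2

# Rough Sun-Earth attraction (rounded values): on the order of 3.5e22 N.
f = gravitational_force(1.99e30, 5.97e24, 1.496e11)
print(f"{f:.2e} N")
```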
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Observations suggest that a force applied to an object is always applied by what?
A. itself
B. gravity
C. dark matter
D. another object
Answer:
|
|
sciq-1853
|
multiple_choice
|
The cycle of copper reacting is a good example of what principle?
|
[
"law of inertia",
"conservation of momentum",
"conservation of mass",
"conservation of energy"
] |
C
|
Relevant Documents:
Document 0:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than by performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
A. increases
B. decreases
C. stays the same
D. impossible to tell/need more information
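For a reversible (quasi-static) adiabatic expansion, the temperature decreases; a one-line justification (my addition, not part of the excerpt) follows from the adiabatic relation for an ideal gas:

```latex
TV^{\gamma-1} = \text{const.}
\quad\Rightarrow\quad
\frac{T_2}{T_1} = \left(\frac{V_1}{V_2}\right)^{\gamma-1} < 1
\quad \text{for } V_2 > V_1,\ \gamma > 1 .
```

Note that a free adiabatic expansion of an ideal gas (into a vacuum, doing no work) would instead leave the temperature unchanged, which is part of what makes this a good conceptual question.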
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Physics education research (PER) is a form of discipline-based education research specifically related to the study of the teaching and learning of physics, often with the aim of improving the effectiveness of student learning. PER draws from other disciplines, such as sociology, cognitive science, education and linguistics, and complements them by reflecting the disciplinary knowledge and practices of physics. Approximately eighty-five institutions in the United States conduct research in science and physics education.
Goals
One primary goal of PER is to develop pedagogical techniques and strategies that will help students learn physics more effectively and help instructors to implement these techniques. Because even basic ideas in physics can be confusing, together with the possibility of scientific misconceptions formed from teaching through analogies, lecturing often does not erase common misconceptions about physics that students acquire before they are taught physics. Research often focuses on learning more about common misconceptions that students bring to the physics classroom so that techniques can be devised to help students overcome these misconceptions.
In most introductory physics courses, mechanics is usually the first area of physics that is taught. Newton's laws of motion about interactions between forces and objects are central to the study of mechanics. Many students hold the Aristotelian misconception that a net force is required to keep a body moving; instead, motion is modeled in modern physics with Newton's first law of inertia, stating that a body will keep its state of rest or movement unless a net force acts on the body. Like students who hold this misconception, Newton arrived at his three laws of motion through empirical analysis, although he did it with an extensive study of data that included astronomical observations. Students can erase such a misconception in a nearly frictionless environment, where they find that
Document 3:::
Physics First is an educational program in the United States, that teaches a basic physics course in the ninth grade (usually 14-year-olds), rather than the biology course which is more standard in public schools. This course relies on the limited math skills that the students have from pre-algebra and algebra I. With these skills students study a broad subset of the introductory physics canon with an emphasis on topics which can be experienced kinesthetically or without deep mathematical reasoning. Furthermore, teaching physics first is better suited for English Language Learners, who would be overwhelmed by the substantial vocabulary requirements of Biology.
Physics First began as an organized movement among educators around 1990, and has been slowly catching on throughout the United States. The most prominent movement championing Physics First is Leon Lederman's ARISE (American Renaissance in Science Education).
Many proponents of Physics First argue that turning this order around lays the foundations for better understanding of chemistry, which in turn will lead to more comprehension of biology. Due to the tangible nature of most introductory physics experiments, Physics First also lends itself well to an introduction to inquiry-based science education, where students are encouraged to probe the workings of the world in which they live.
The majority of high schools which have implemented "physics first" do so by way of offering two separate classes, at two separate levels: simple physics concepts in 9th grade, followed by more advanced physics courses in 11th or 12th grade. In schools with this curriculum, nearly all 9th grade students take a "Physical Science", or "Introduction to Physics Concepts" course. These courses focus on concepts that can be studied with skills from pre-algebra and algebra I. With these ideas in place, students then can be exposed to ideas with more physics related content in chemistry, and other science electives. After th
Document 4:::
Conceptual physics is an approach to teaching physics that focuses on the ideas of physics rather than the mathematics. It is believed that with a strong conceptual foundation in physics, students are better equipped to understand the equations and formulas of physics, and to make connections between the concepts of physics and their everyday life. Early versions used almost no equations or math-based problems.
Paul G. Hewitt popularized this approach with his textbook Conceptual Physics: A New Introduction to your Environment in 1971. In his review at the time, Kenneth W. Ford noted the emphasis on logical reasoning and said "Hewitt's excellent book can be called physics without equations, or physics without computation, but not physics without mathematics." Hewitt's wasn't the first book to take this approach. Conceptual Physics: Matter in Motion by Jae R. Ballif and William E. Dibble was published in 1969. But Hewitt's book became very successful. As of 2022, it is in its 13th edition. In 1987 Hewitt wrote a version for high school students.
The spread of the conceptual approach to teaching physics broadened the range of students taking physics in high school. Enrollment in conceptual physics courses in high school grew from 25,000 students in 1987 to over 400,000 in 2009. In 2009, 37% of students took high school physics, and 31% of them were in Physics First, conceptual physics courses, or regular physics courses using a conceptual textbook.
This approach to teaching physics has also inspired books for science literacy courses, such as From Atoms to Galaxies: A Conceptual Physics Approach to Scientific Awareness by Sadri Hassani.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The cycle of copper reacting is a good example of what principle?
A. law of inertia
B. conservation of momentum
C. conservation of mass
D. conservation of energy
Answer:
|
|
sciq-6327
|
multiple_choice
|
Along with other elements, most ores are made of what?
|
[
"crystals",
"coal",
"metal",
"sodium"
] |
C
|
Relevant Documents:
Document 0:::
See also
List of minerals
Document 1:::
Materials science has shaped the development of civilizations since the dawn of mankind. Better materials for tools and weapons have allowed mankind to spread and conquer, and advancements in material processing like steel and aluminum production continue to impact society today. Historians have regarded materials as such an important aspect of civilizations that entire periods of time have been defined by the predominant material used (Stone Age, Bronze Age, Iron Age). For most of recorded history, control of materials had been through alchemy or empirical means at best. The study and development of chemistry and physics assisted the study of materials, and eventually the interdisciplinary study of materials science emerged from the fusion of these studies. The history of materials science is the study of how different materials were used and developed through the history of Earth and how those materials affected the culture of the peoples of the Earth. The term "Silicon Age" is sometimes used to refer to the modern period of history during the late 20th to early 21st centuries.
Prehistory
In many cases, different cultures leave their materials as the only records; which anthropologists can use to define the existence of such cultures. The progressive use of more sophisticated materials allows archeologists to characterize and distinguish between peoples. This is partially due to the major material of use in a culture and to its associated benefits and drawbacks. Stone-Age cultures were limited by which rocks they could find locally and by which they could acquire by trading. The use of flint around 300,000 BCE is sometimes considered the beginning of the use of ceramics. The use of polished stone axes marks a significant advance, because a much wider variety of rocks could serve as tools.
The innovation of smelting and casting metals in the Bronze Age started to change the way that cultures developed and interacted with each other. Starting around 5,500 BCE,
Document 2:::
Mineral tests are several methods which can help identify the mineral type. This is used widely in mineralogy, hydrocarbon exploration and general mapping. There are over 4000 known types of minerals, each with different sub-classes. Elements make minerals and minerals make rocks, so testing minerals in the lab and in the field is essential to understand the history of a rock, which aids dating, zonation, metamorphic history, processes involved and identification of other minerals.
The following tests are used on specimens and on thin sections through a polarizing microscope.
Color
Color of the mineral. This is not mineral specific. For example, quartz can be almost any color and shape, and occurs within many rock types.
Streak
Color of the mineral's powder. This can be found by rubbing the mineral onto concrete. This is more accurate than body color but not always mineral specific.
Lustre
This is the way light reflects from the mineral's surface. A mineral can be metallic (shiny) or non-metallic (not shiny).
Transparency
The way light travels through minerals. The mineral can be transparent (clear), translucent (cloudy) or opaque (none).
Specific gravity
Ratio between the weight of the mineral relative to an equal volume of water.
Mineral habitat
The shape of the crystal and habitat.
Magnetism
Magnetic or nonmagnetic. Can be tested by using a magnet or a compass. This does not apply to all iron minerals (for example, pyrite).
Cleavage
Number, behaviour, size and way cracks fracture in the mineral.
UV fluorescence
Many minerals glow when put under a UV light.
Radioactivity
Is the mineral radioactive or non-radioactive? This is measured by a Geiger counter.
Taste
This is not recommended. Is the mineral salty, bitter or does it have no taste?
Bite Test
This is not recommended. This involves biting a mineral to see if it is generally soft or hard. This was used in early gold exploration to tell the difference between pyrite (fool's gold, hard) and gold (soft).
Hardness
The Mohs Hardn
Document 3:::
Uranium mining around Bancroft, Ontario, was conducted at four sites, beginning in the early 1950s and concluding by 1982. Bancroft was one of two major uranium-producing areas in Ontario, and one of seven in Canada, all located along the edge of the Canadian Shield. In the context of mining, the "Bancroft area" includes Haliburton, Hastings, and Renfrew counties, and all areas between Minden and Lake Clear. Activity in the mid-1950s was described by engineer A. S. Bayne in a 1977 report as the "greatest uranium prospecting rush in the world".
As a result of activities at its four major uranium mines, Bancroft experienced rapid population and economic growth throughout the 1950s. By 1958, Canada had become one of the world's leading producers of uranium; the $274 million of uranium exports that year represented Canada's most significant mineral export. By 1963, the federal government had purchased more than $1.5 billion of uranium from Canadian producers, but soon thereafter the global uranium market collapsed and the government stopped issuing contracts to buy. Mining resumed when uranium prices rose during the 1970s energy crisis, but this second period of activity ended by 1982.
Three of the uranium mines are decommissioned, and one is undergoing rehabilitation. A twofold increase in lung cancer development and mortality has been observed among former mine workers. Bancroft continues to be known for gems and mineralogy.
Geology and mineralogy
During the most recent ice age, in the area of what is now Bancroft, Ontario, ancient glaciers removed soil and rock, exposing the Precambrian granite that had been the heart of volcanic mountains on an ancient sea bed. During the Grenville orogenies, sedimentary rocks were transformed by heat and pressure into banded gneiss and marble, incorporating gabbro and diorite (rich in iron and other dark minerals). Some uranium ores in these structures are about 1,000 million years old, while others are understood to be
Document 4:::
A solid solution, a term popularly used for metals, is a homogeneous mixture of two different kinds of atoms in solid state and having a single crystal structure. Many examples can be found in metallurgy, geology, and solid-state chemistry. The word "solution" is used to describe the intimate mixing of components at the atomic level and distinguishes these homogeneous materials from physical mixtures of components. Two terms are mainly associated with solid solutions – solvents and solutes, depending on the relative abundance of the atomic species.
In general if two compounds are isostructural then a solid solution will exist between the end members (also known as parents). For example sodium chloride and potassium chloride have the same cubic crystal structure so it is possible to make a pure compound with any ratio of sodium to potassium (Na1-xKx)Cl by dissolving that ratio of NaCl and KCl in water and then evaporating the solution. A member of this family is sold under the brand name Lo Salt which is (Na0.33K0.66)Cl, hence it contains 66% less sodium than normal table salt (NaCl). The pure minerals are called halite and sylvite; a physical mixture of the two is referred to as sylvinite.
Because minerals are natural materials they are prone to large variations in composition. In many cases specimens are members for a solid solution family and geologists find it more helpful to discuss the composition of the family than an individual specimen. Olivine is described by the formula (Mg, Fe)2SiO4, which is equivalent to (Mg1−xFex)2SiO4. The ratio of magnesium to iron varies between the two endmembers of the solid solution series: forsterite (Mg-endmember: Mg2SiO4) and fayalite (Fe-endmember: Fe2SiO4) but the ratio in olivine is not normally defined. With increasingly complex compositions the geological notation becomes significantly easier to manage than the chemical notation.
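The endmember notation above lends itself to a simple calculation. The sketch below interpolates the molar mass of olivine, (Mg1−xFex)2SiO4, between its forsterite and fayalite endmembers; the atomic masses are standard IUPAC values, and the helper name is illustrative:

```python
# Molar mass of olivine (Mg(1-x)Fe(x))2SiO4 across the
# forsterite-fayalite solid-solution series.
MG, FE, SI, O = 24.305, 55.845, 28.085, 15.999  # g/mol, standard atomic masses

def olivine_molar_mass(x: float) -> float:
    """Molar mass of (Mg(1-x)Fe(x))2SiO4 for 0 <= x <= 1."""
    return 2 * ((1 - x) * MG + x * FE) + SI + 4 * O

print(f"forsterite (x=0): {olivine_molar_mass(0.0):.2f} g/mol")  # ~140.69
print(f"fayalite   (x=1): {olivine_molar_mass(1.0):.2f} g/mol")  # ~203.77
print(f"Fo50     (x=0.5): {olivine_molar_mass(0.5):.2f} g/mol")
```

Because the substitution is atom-for-atom on one site, the molar mass varies linearly between the two parents, which is why geologists can describe any specimen by its position on the series.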
Nomenclature
The IUPAC definition of a solid solution is a "solid in which components ar
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Along with other elements, most ores are made of what?
A. crystals
B. coal
C. metal
D. sodium
Answer:
|
|
sciq-2298
|
multiple_choice
|
What (NH3) is one of the few thermodynamically stable binary compounds of nitrogen with a nonmetal?
|
[
"acetic acid",
"nitrous oxide",
"ammonia",
"liquid nitrogen"
] |
C
|
Relevant Documents:
Document 0:::
Nitric oxide (nitrogen oxide or nitrogen monoxide) is a colorless gas with the formula . It is one of the principal oxides of nitrogen. Nitric oxide is a free radical: it has an unpaired electron, which is sometimes denoted by a dot in its chemical formula (•N=O or •NO). Nitric oxide is also a heteronuclear diatomic molecule, a class of molecules whose study spawned early modern theories of chemical bonding.
An important intermediate in industrial chemistry, nitric oxide forms in combustion systems and can be generated by lightning in thunderstorms. In mammals, including humans, nitric oxide is a signaling molecule in many physiological and pathological processes. It was proclaimed the "Molecule of the Year" in 1992. The 1998 Nobel Prize in Physiology or Medicine was awarded for discovering nitric oxide's role as a cardiovascular signalling molecule.
Nitric oxide should not be confused with nitrogen dioxide (NO2), a brown gas and major air pollutant, or with nitrous oxide (N2O), an anesthetic gas.
Physical properties
Electronic configuration
The ground state electronic configuration of NO is, in united atom notation:
The first two orbitals are actually pure atomic 1sO and 1sN from oxygen and nitrogen respectively and therefore are usually not noted in the united atom notation. Orbitals noted with an asterisk are antibonding. The ordering of 5σ and 1π according to their binding energies is subject to discussion. Removal of a 1π electron leads to 6 states whose energies span a range starting at a lower level than a 5σ electron and extending to a higher level. This is due to the different orbital momentum couplings between a 1π and a 2π electron.
The lone electron in the 2π orbital makes NO a doublet (X ²Π) in its ground state whose degeneracy is split in the fine structure from spin-orbit coupling with a total momentum J=1/2 or J=3/2.
Dipole
The dipole of NO has been measured experimentally to 0.15740 D and is oriented from O to N (⁻NO⁺) due to the transf
Document 1:::
A carbon–nitrogen bond is a covalent bond between carbon and nitrogen and is one of the most abundant bonds in organic chemistry and biochemistry.
Nitrogen has five valence electrons and in simple amines it is trivalent, with the two remaining electrons forming a lone pair. Through that pair, nitrogen can form an additional bond to hydrogen making it tetravalent and with a positive charge in ammonium salts. Many nitrogen compounds can thus be potentially basic but its degree depends on the configuration: the nitrogen atom in amides is not basic due to delocalization of the lone pair into a double bond and in pyrrole the lone pair is part of an aromatic sextet.
Similar to carbon–carbon bonds, these bonds can form stable double bonds, as in imines; and triple bonds, such as nitriles. Bond lengths range from 147.9 pm for simple amines to 147.5 pm for C-N= compounds such as nitromethane to 135.2 pm for partial double bonds in pyridine to 115.8 pm for triple bonds as in nitriles.
A CN bond is strongly polarized towards nitrogen (the electronegativities of C and N are 2.55 and 3.04, respectively) and subsequently molecular dipole moments can be high: cyanamide 4.27 D, diazomethane 1.5 D, methyl azide 2.17 D, pyridine 2.19 D. For this reason many compounds containing CN bonds are water-soluble. N-philes are a group of radical molecules which are specifically attracted to C=N bonds.
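The polarization described above can be roughly quantified with Pauling's empirical ionic-character formula. This is an approximation not taken from the source; only the electronegativity values quoted in the text are used:

```python
import math

def pauling_ionic_character(chi_a: float, chi_b: float) -> float:
    """Pauling's empirical estimate of the fractional ionic character
    of a bond from the electronegativity difference of its atoms."""
    return 1.0 - math.exp(-((chi_a - chi_b) ** 2) / 4.0)

# C (2.55) vs N (3.04), as quoted in the text:
frac = pauling_ionic_character(3.04, 2.55)
print(f"C-N bond ionic character: {frac:.1%}")  # ~5.8% -- modestly polar
```

The small but nonzero ionic character is consistent with the text's point that CN-containing compounds often have appreciable dipole moments and water solubility.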
Carbon-nitrogen bond can be analyzed by X-ray photoelectron spectroscopy (XPS). Depending on the bonding states the peak positions differ in N1s XPS spectra.
Nitrogen functional groups
See also
Cyanide
Other carbon bonds with group 15 elements: carbon–nitrogen bonds, carbon–phosphorus bonds
Other carbon bonds with period 2 elements: carbon–lithium bonds, carbon–beryllium bonds, carbon–boron bonds, carbon–carbon bonds, carbon–nitrogen bonds, carbon–oxygen bonds, carbon–fluorine bonds
Carbon–hydrogen bond
Document 2:::
The nitrite ion has the chemical formula . Nitrite (mostly sodium nitrite) is widely used throughout chemical and pharmaceutical industries. The nitrite anion is a pervasive intermediate in the nitrogen cycle in nature. The name nitrite also refers to organic compounds having the –ONO group, which are esters of nitrous acid.
Production
Sodium nitrite is made industrially by passing a mixture of nitrogen oxides into aqueous sodium hydroxide or sodium carbonate solution:
NO + NO2 + 2 NaOH → 2 NaNO2 + H2O
The product is purified by recrystallization. Alkali metal nitrites are thermally stable up to and beyond their melting point (441 °C for KNO2). Ammonium nitrite can be made from dinitrogen trioxide, N2O3, which is formally the anhydride of nitrous acid:
2 NH3 + H2O + N2O3 → 2 NH4NO2
Structure
The nitrite ion has a symmetrical structure (C2v symmetry), with both N–O bonds having equal length and a bond angle of about 115°. In valence bond theory, it is described as a resonance hybrid with equal contributions from two canonical forms that are mirror images of each other. In molecular orbital theory, there is a sigma bond between each oxygen atom and the nitrogen atom, and a delocalized pi bond made from the p orbitals on nitrogen and oxygen atoms which is perpendicular to the plane of the molecule. The negative charge of the ion is equally distributed on the two oxygen atoms. Both nitrogen and oxygen atoms carry a lone pair of electrons. Therefore, the nitrite ion is a Lewis base.
In the gas phase it exists predominantly as a trans-planar molecule.
Reactions
Acid-base properties
Nitrite is the conjugate base of the weak acid nitrous acid:
HNO2 ⇌ H+ + NO2−; pKa ≈ 3.3 at 18 °C
Nitrous acid is also highly volatile, tending to disproportionate:
3 HNO2 (aq) ⇌ H3O+ + NO3− + 2 NO
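The acid–base equilibrium above can be made quantitative with the Henderson–Hasselbalch relation. A minimal sketch, assuming the pKa ≈ 3.3 stated in the text; the function name is illustrative:

```python
# Fraction of total nitrite present as the NO2- anion at a given pH,
# from the Henderson-Hasselbalch relation (assumes pKa = 3.3 as in the text).
PKA_HNO2 = 3.3

def nitrite_fraction(ph: float, pka: float = PKA_HNO2) -> float:
    """Fraction [NO2-] / ([HNO2] + [NO2-]) at the given pH."""
    return 1.0 / (1.0 + 10 ** (pka - ph))

print(f"pH 3.3: {nitrite_fraction(3.3):.3f}")  # 0.500 -- half ionized at the pKa
print(f"pH 7.0: {nitrite_fraction(7.0):.4f}")  # ~0.9998 -- almost entirely NO2-
print(f"pH 1.0: {nitrite_fraction(1.0):.4f}")  # ~0.0050 -- mostly HNO2
```

This also illustrates why adding acid to a nitrite solution (the nitric oxide preparation mentioned below) works: at low pH, the equilibrium shifts strongly toward the volatile, disproportionation-prone HNO2.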
This reaction is slow at 0 °C. Addition of acid to a solution of a nitrite in the presence of a reducing agent, such as iron(II), is a way to make nitric oxide (NO) in the laboratory.
Oxidation and reduction
The formal oxidation sta
Document 3:::
Nitrogen dioxide is a chemical compound with the formula NO2 and is one of several nitrogen oxides. NO2 is an intermediate in the industrial synthesis of nitric acid, millions of tons of which are produced each year for use (primarily in the production of fertilizers). At higher temperatures, nitrogen dioxide is a reddish-brown gas. It can be fatal if inhaled in large quantities. The LC50 (median lethal dose) for humans has been estimated to be 174 ppm for a 1-hour exposure. Nitrogen dioxide is a paramagnetic, bent molecule with C2v point group symmetry.
It is included in the NOx family of atmospheric pollutants.
Properties
Nitrogen dioxide is a reddish-brown gas with a pungent, acrid odor above 21.2 °C and becomes a yellowish-brown liquid below that temperature. It forms an equilibrium with its dimer, dinitrogen tetroxide (N2O4), and converts almost entirely to N2O4 below −11.2 °C.
The bond length between the nitrogen atom and the oxygen atom is 119.7 pm. This bond length is consistent with a bond order between one and two.
Unlike ozone (O3), the ground electronic state of nitrogen dioxide is a doublet state, since nitrogen has one unpaired electron, which decreases the alpha effect compared with nitrite and creates a weak bonding interaction with the oxygen lone pairs. The lone electron in NO2 also means that this compound is a free radical, so the formula for nitrogen dioxide is often written as •NO2.
The reddish-brown color is a consequence of preferential absorption of light in the blue region of the spectrum (400–500 nm), although the absorption extends throughout the visible (at shorter wavelengths) and into the infrared (at longer wavelengths). Absorption of light at wavelengths shorter than about 400 nm results in photolysis (to form , atomic oxygen); in the atmosphere the addition of the oxygen atom so formed to results in ozone.
Preparation
Nitrogen dioxide typically arises via the oxidation of nitric oxide by oxygen in air (e.g. as result of corona discharge):
2 NO + O2 → 2 NO2
Nitrogen dioxide is formed in m
Document 4:::
Reactive nitrogen ("Nr"), also known as fixed nitrogen, refers to all forms of nitrogen present in the environment except for molecular nitrogen (N2). While nitrogen is an essential element for life on Earth, molecular nitrogen is comparatively unreactive, and must be converted to other chemical forms via nitrogen fixation before it can be used for growth. Common Nr species include nitrogen oxides (NOx), ammonia (NH3), nitrous oxide (N2O), as well as the anion nitrate (NO3−).
Biologically, nitrogen is "fixed" mainly by the microbes (e.g., bacteria and archaea) of the soil that fix N2 into mainly NH3 but also other species. Legumes, a type of plant in the Fabaceae family, are symbionts to some of these microbes that fix N2. NH3 is a building block of amino acids and proteins, amongst other things essential for life. However, just over half of all reactive nitrogen entering the biosphere is attributable to anthropogenic activity such as industrial fertilizer production. While reactive nitrogen is eventually converted back into molecular nitrogen via denitrification, an excess of reactive nitrogen can lead to problems such as eutrophication in marine ecosystems.
Reactive nitrogen compounds
In the environmental context, reactive nitrogen compounds include the following classes:
oxide gases: nitric oxide, nitrogen dioxide, nitrous oxide. Containing oxidized nitrogen, mainly the result of industrial processes and internal combustion engines.
anions: nitrate, nitrite. Nitrate is a common component of fertilizers, e.g. ammonium nitrate.
amine derivatives: ammonia and ammonium salts, urea. Containing reduced nitrogen, these compounds are components of fertilizers.
All of these compounds enter into the nitrogen cycle.
As a consequence, an excess of Nr can affect the environment relatively quickly. This also means that nitrogen-related problems need to be looked at in an integrated manner.
See also
Human impact on the nitrogen cycle
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What (NH3) is one of the few thermodynamically stable binary compounds of nitrogen with a nonmetal?
A. acetic acid
B. nitrous oxide
C. ammonia
D. liquid nitrogen
Answer:
|
|
sciq-7769
|
multiple_choice
|
A highway that switches back and forth as it climbs up a steep hillside, yielding a much gentler slope, is an example of what simple machine?
|
[
"inclined plane",
"wheel",
"lever",
"pulley"
] |
A
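The answer can be checked numerically: an inclined plane trades distance for force, so a switchback route that lengthens the path reduces the grade and the force needed to climb. A sketch with illustrative numbers (the hill height and road lengths are made up for the example):

```python
import math

def incline(rise: float, path_length: float):
    """Ideal (frictionless) mechanical advantage and grade angle
    of an inclined plane with the given rise and path length."""
    ma = path_length / rise  # factor by which required force is reduced
    angle = math.degrees(math.asin(rise / path_length))
    return ma, angle

# A 100 m high hill climbed directly over 250 m of road,
# versus a switchback route stretching the same climb over 1200 m.
for label, length in [("direct    ", 250.0), ("switchback", 1200.0)]:
    ma, angle = incline(100.0, length)
    print(f"{label}: MA = {ma:4.1f}, slope angle = {angle:4.1f} deg")
```

The switchback road is the same inclined plane principle as a ramp or a screw thread: the longer the path for the same rise, the gentler the slope and the smaller the force required at any moment.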
|
Relevant Documents:
Document 0:::
Machine element or hardware refers to an elementary component of a machine. These elements consist of three basic types:
structural components such as frame members, bearings, axles, splines, fasteners, seals, and lubricants,
mechanisms that control movement in various ways such as gear trains, belt or chain drives, linkages, cam and follower systems, including brakes and clutches, and
control components such as buttons, switches, indicators, sensors, actuators and computer controllers.
While generally not considered to be a machine element, the shape, texture and color of covers are an important part of a machine that provide a styling and operational interface between the mechanical components of a machine and its users.
Machine elements are basic mechanical parts and features used as the building blocks of most machines. Most are standardized to common sizes, but customs are also common for specialized applications.
Machine elements may be features of a part (such as screw threads or integral plain bearings) or they may be discrete parts in and of themselves such as wheels, axles, pulleys, rolling-element bearings, or gears. All of the simple machines may be described as machine elements, and many machine elements incorporate concepts of one or more simple machines. For example, a leadscrew incorporates a screw thread, which is an inclined plane wrapped around a cylinder.
Many mechanical design, invention, and engineering tasks involve a knowledge of various machine elements and an intelligent and creative combining of these elements into a component or assembly that fills a need (serves an application).
Structural elements
Beams,
Struts,
Bearings,
Fasteners
Keys,
Splines,
Cotter pin,
Seals
Machine guardings
Mechanical elements
Engine,
Electric motor,
Actuator,
Shafts,
Couplings
Belt,
Chain,
Cable drives,
Gear train,
Clutch,
Brake,
Flywheel,
Cam,
follower systems,
Linkage,
Simple machine
Types
Shafts
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Types of mill include the following:
Manufacturing facilities
Categorized by power source
Watermill, a mill powered by moving water
Windmill, a mill powered by moving air (wind)
Tide mill, a water mill that uses the tide's movement
Treadmill or treadwheel, a mill powered by human or animal movement
Horse mill, a mill powered by horses' movement
Categorized by not being a fixed building
Ship mill, a water mill that floats on the river or bay whose current or tide provides the water movement
Field mill (carriage), a portable mill
Categorized by what is made and/or acted on
Materials recovery facility, processes raw garbage and turns it into purified commodities like aluminum, PET, and cardboard by processing and crushing (compressing and baling) it.
Rice mill, processes paddy to rice
Bark mill, produces tanbark for tanneries
Coffee mill
Colloid mill
Cider mill, crushes apples to give cider
Drainage mills such as the Clayrack Drainage Mill are used to pump water from low-lying land.
Flotation mill, in mining, uses grinding and froth flotation to concentrate ores using differences in materials' hydrophobicity
Gristmill, a grain mill (flour mill)
Herb grinder
Oil mill, see expeller pressing, extrusion
Ore mill, for crushing and processing ore
Paper mill
Pellet mill
Powder mill, produces gunpowder
Puppy mill, a breeding facility that produces puppies on a large scale, where the welfare of the dogs is jeopardized for profits
Rock crusher
Sugar cane mill
Sawmill, a lumber mill
Millwork
starch mill
Steel mill
sugar mill (also called a sugar refinery), processes sugar beets or sugar cane into various finished products
Textile mills for textile manufacturing:
Cotton mill
Flax mill, for flax
Silk mill, for silk
woollen mill, see textile manufacturing
huller (also called a rice mill, or rice husker) is used to hull rice
Wire mill, for wire drawing
Other types
See :Category:Industrial buildings and structures
Industrial tools for size re
Document 3:::
A gristmill (also: grist mill, corn mill, flour mill, feed mill or feedmill) grinds cereal grain into flour and middlings. The term can refer to either the grinding mechanism or the building that holds it. Grist is grain that has been separated from its chaff in preparation for grinding.
History
Early history
The Greek geographer Strabo reports in his Geography that a water-powered grain-mill existed near the palace of king Mithradates VI Eupator at Cabira, Asia Minor, before 71 BC.
The early mills had horizontal paddle wheels, an arrangement which later became known as the "Norse wheel", as many were found in Scandinavia. The paddle wheel was attached to a shaft which was, in turn, attached to the centre of the millstone called the "runner stone". The turning force produced by the water on the paddles was transferred directly to the runner stone, causing it to grind against a stationary "bed", a stone of a similar size and shape. This simple arrangement required no gears, but had the disadvantage that the speed of rotation of the stone was dependent on the volume and flow of water available and was, therefore, only suitable for use in mountainous regions with fast-flowing streams. This dependence on the volume and speed of flow of the water also meant that the speed of rotation of the stone was highly variable and the optimum grinding speed could not always be maintained.
Vertical wheels were in use in the Roman Empire by the end of the first century BC, and these were described by Vitruvius. The rotating mill is considered "one of the greatest discoveries of the human race". It was a very physically demanding job for workers, where the slave workers were considered little different from animals, the miseries of which were depicted in iconography and Apuleius' The Golden Ass. The peak of Roman technology is probably the Barbegal aqueduct and mill where water with a 19-metre fall drove sixteen water wheels, giving a grinding capacity estimated at 28 tons per
Document 4:::
Mechanical engineering is a discipline centered around the concept of using force multipliers, moving components, and machines. It utilizes knowledge of mathematics, physics, materials sciences, and engineering technologies. It is one of the oldest and broadest of the engineering disciplines.
Dawn of civilization to early middle ages
Engineering arose in early civilization as a general discipline for the creation of large scale structures such as irrigation, architecture, and military projects. Advances in food production through irrigation allowed a portion of the population to become specialists in Ancient Babylon.
All six of the classic simple machines were known in the ancient Near East. The wedge and the inclined plane (ramp) were known since prehistoric times. The wheel, along with the wheel and axle mechanism, was invented in Mesopotamia (modern Iraq) during the 5th millennium BC. The lever mechanism first appeared around 5,000 years ago in the Near East, where it was used in a simple balance scale, and to move large objects in ancient Egyptian technology. The lever was also used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia circa 3000 BC, and then in ancient Egyptian technology circa 2000 BC. The earliest evidence of pulleys dates back to Mesopotamia in the early 2nd millennium BC, and ancient Egypt during the Twelfth Dynasty (1991-1802 BC). The screw, the last of the simple machines to be invented, first appeared in Mesopotamia during the Neo-Assyrian period (911-609 BC). The Egyptian pyramids were built using three of the six simple machines, the inclined plane, the wedge, and the lever, to create structures like the Great Pyramid of Giza.
The Assyrians were notable in their use of metallurgy and incorporation of iron weapons. Many of their advancements were in military equipment. They were not the first to develop them, but did make advancements on the wheel and the chariot. They made use of pivot-able axl
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A highway that switches back and forth as it climbs up a steep hillside, yielding a much gentler slope, is an example of what simple machine?
A. inclined plane
B. wheel
C. lever
D. pulley
Answer:
|
|
sciq-6412
|
multiple_choice
|
What are chlamydia, gonorrhea, and syphilis an example of?
|
[
"bacterial stis",
"metabolic disorders",
"viral stis",
"genetic diseases"
] |
A
|
Relevant Documents:
Document 0:::
Sexually transmitted infections (STIs), also referred to as sexually transmitted diseases (STDs), are infections that are commonly spread by sexual activity, especially vaginal intercourse, anal sex and oral sex. The most prevalent STIs may be carried by a significant fraction of the human population.
Document 1:::
Infectious diseases or ID, also known as infectiology, is a medical specialty dealing with the diagnosis and treatment of infections. An infectious diseases specialist's practice consists of managing nosocomial (healthcare-acquired) infections or community-acquired infections. An ID specialist investigates the cause of a disease to determine what kind of bacteria, viruses, parasites, or fungi the disease is caused by. Once the pathogen is known, an ID specialist can then run various tests to determine the best antimicrobial drug to kill the pathogen and treat the disease. While infectious diseases have always been around, the infectious disease specialty did not exist until the late 1900s, after scientists and physicians in the 19th century paved the way with research on the sources of infectious disease and the development of vaccines.
Scope
Infectious diseases specialists typically serve as consultants to other physicians in cases of complex infections, and often manage patients with HIV/AIDS and other forms of immunodeficiency. Although many common infections are treated by physicians without formal expertise in infectious diseases, specialists may be consulted for cases where an infection is difficult to diagnose or manage. They may also be asked to help determine the cause of a fever of unknown origin.
Specialists in infectious diseases can practice both in hospitals (inpatient) and clinics (outpatient). In hospitals, specialists in infectious diseases help ensure the timely diagnosis and treatment of acute infections by recommending the appropriate diagnostic tests to identify the source of the infection and by recommending appropriate management such as prescribing antibiotics to treat bacterial infections. For certain types of infections, involvement of specialists in infectious diseases may improve patient outcomes. In clinics, specialists in infectious diseases can provide long-term care to patients with chronic infections such as HIV/AIDS.
History
Inf
Document 2:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC), which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on the structure and functioning (anatomy, physiology, biochemistry) of the human body and its relevance to human health, with Caribbean-specific experience. The syllabus is organized under five main sections: living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, and the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 3:::
Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education.
Structure
A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior.
Prior to the widespread introduction of SBAs into medical education, the typical form of examination was true-false multiple choice questions. During the 2000s, however, educators found that SBAs were superior.
Document 4:::
Those involved in the care of athletes should be alert to the possibility of getting an infectious disease for the following reasons:
There is the chance, or even the expectation, of contact or collision with another player, or the playing surface, which may be a mat or artificial turf.
The opportunities for skin breaks, obvious or subtle, are present and compromise skin defenses.
Young people congregate in dormitories, locker rooms, showers, etc.
There is the possibility of sharing personal toilet articles.
Equipment, gloves and pads and protective gear, is difficult to sanitize and can become contaminated.
However, in many cases, the chance of infection can be reduced by relatively simple measures.
Herpes gladiatorum
Wrestlers use mats which are abrasive and the potential for a true contagion (Latin contagion-, contagio, from contingere to have contact with) is very real. The herpes simplex virus, type I, is very infectious and large outbreaks have been documented. A major epidemic threatened the 2007 Minnesota high school wrestling season, but was largely contained by instituting an eight-day isolation period during which time competition was suspended. Practices, such as 'weight cutting', which can at least theoretically reduce immunity, might potentiate the risk. In non-epidemic circumstances, herpes gladiatorum affects about 3% of high school wrestlers and 8% of collegiate wrestlers. There is the potential for prevention of infection, or at least containment, with antiviral agents which are effective in reducing the spread to other athletes when given to those who are herpes positive, or who have recurrent herpes gladiatorum.
The NCAA specifies that a wrestler must:
- be free of systemic symptoms (fever, malaise, etc.).
- have developed no new blisters for 72 hours before the examination.
- have no moist lesions; all lesions must be dried and have progressed to a FIRM ADHERENT CRUST.
- have been on appropriate systemic antiviral therapy for at lea
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are chlamydia, gonorrhea, and syphilis an example of?
A. bacterial stis
B. metabolic disorders
C. viral stis
D. genetic diseases
Answer:
|
|
sciq-2824
|
multiple_choice
|
When both momentum and kinetic energy are conserved in a closed system, the collision is called what?
|
[
"a spontaneous collision",
"an elastic collision",
"an accidental collision",
"a static collision"
] |
B
|
Relevant Documents:
Document 0:::
In the context of classical mechanics simulations and physics engines employed within video games, collision response deals with models and algorithms for simulating the changes in the motion of two solid bodies following collision and other forms of contact.
Rigid body contact
Two rigid bodies in unconstrained motion, potentially under the action of forces, may be modelled by solving their equations of motion using numerical integration techniques. On collision, the kinetic properties of two such bodies seem to undergo an instantaneous change, typically resulting in the bodies rebounding away from each other, sliding, or settling into relative static contact, depending on the elasticity of the materials and the configuration of the collision.
Contact forces
The origin of the rebound phenomenon, or reaction, may be traced to the behaviour of real bodies that, unlike their perfectly rigid idealised counterparts, do undergo minor compression on collision, followed by expansion, prior to separation. The compression phase converts the kinetic energy of the bodies into potential energy and to an extent, heat. The expansion phase converts the potential energy back to kinetic energy.
During the compression and expansion phases of two colliding bodies, each body generates reactive forces on the other at the points of contact, such that the sum reaction forces of one body are equal in magnitude but opposite in direction to the forces of the other, as per the Newtonian principle of action and reaction. If the effects of friction are ignored, a collision is seen as affecting only the components of the velocities directed along the contact normal, leaving the tangential components unaffected.
Reaction
The degree of relative kinetic energy retained after a collision, termed the restitution, is dependent on the elasticity of the bodies' materials. The coefficient of restitution between two given materials is modeled as the ratio of the relative post-collis
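The reaction described in this excerpt is commonly implemented in physics engines as a scalar impulse applied along the contact normal, scaled by a coefficient of restitution e. A minimal frictionless one-dimensional sketch (this is the standard impulse-based response formula; the function and variable names here are illustrative, not from the source):

```python
def collision_impulse(m1, m2, v1, v2, e):
    """Resolve a frictionless 1D collision via an impulse along the contact normal.

    e is the coefficient of restitution: 1 = perfectly elastic,
    0 = perfectly inelastic (the bodies move together afterwards).
    """
    v_rel = v1 - v2                        # closing velocity along the normal
    j = -(1 + e) * v_rel / (1/m1 + 1/m2)   # scalar impulse magnitude
    return v1 + j / m1, v2 - j / m2        # post-collision velocities

# Two equal masses, head-on, perfectly elastic: they swap velocities.
v1p, v2p = collision_impulse(1.0, 1.0, 2.0, -1.0, e=1.0)
# → v1p = -1.0, v2p = 2.0
```

Because the impulse is equal and opposite on the two bodies, momentum is conserved for any e; kinetic energy is conserved only when e = 1.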
Document 1:::
The coefficient of restitution (COR, also denoted by e), is the ratio of the final to initial relative speed between two objects after they collide. It normally ranges from 0 to 1 where 1 would be a perfectly elastic collision. A perfectly inelastic collision has a coefficient of 0, but a 0 value does not have to be perfectly inelastic. It is measured in the Leeb rebound hardness test, expressed as 1000 times the COR, but it is only a valid COR for the test, not as a universal COR for the material being tested.
The value is almost always less than 1 due to initial translational kinetic energy being lost to rotational kinetic energy, plastic deformation, and heat. It can be more than 1 if there is an energy gain during the collision from a chemical reaction, a reduction in rotational energy, or another internal energy decrease that contributes to the post-collision velocity.
The mathematics were developed by Sir Isaac Newton in 1687. It is also known as Newton's experimental law.
Further details
Line of impact – the line along which e is defined; in the absence of a tangential reaction force between the colliding surfaces, the force of impact acts along this line between the bodies. During physical contact between the bodies, the line of impact lies along the common normal to the pair of surfaces in contact. Hence e is defined as a dimensionless one-dimensional parameter.
Range of values for e – treated as a constant
e is usually a positive, real number between 0 and 1:
e = 0: This is a perfectly inelastic collision.
0 < e < 1: This is a real-world inelastic collision, in which some kinetic energy is dissipated.
e = 1: This is a perfectly elastic collision, in which no kinetic energy is dissipated, and the objects rebound from one another with the same relative speed with which they approached.
e < 0: A COR less than zero would represent a collision in which the separation velocity of the objects has the same direction (sign) as the closing velocity, implyi
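The ranges listed above can be checked numerically. A small sketch of the definition (separation speed divided by approach speed) with the corresponding classification; the helper names are illustrative:

```python
def restitution(u1, u2, v1, v2):
    """Coefficient of restitution: relative speed after / relative speed before."""
    return abs(v1 - v2) / abs(u1 - u2)

def classify(e):
    """Map a COR value to the categories described in the text."""
    if e == 0:
        return "perfectly inelastic"
    if e < 1:
        return "inelastic"
    if e == 1:
        return "perfectly elastic"
    return "super-elastic (energy gained)"

# Equal masses exchanging velocities: approach speed equals separation speed.
e = restitution(u1=2.0, u2=-1.0, v1=-1.0, v2=2.0)
# → e = 1.0, i.e. "perfectly elastic"
```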
Document 2:::
In physics, an elastic collision is an encounter (collision) between two bodies in which the total kinetic energy of the two bodies remains the same. In an ideal, perfectly elastic collision, there is no net conversion of kinetic energy into other forms such as heat, noise, or potential energy.
During the collision of small objects, kinetic energy is first converted to potential energy associated with a repulsive or attractive force between the particles (when the particles move against this force, i.e. the angle between the force and the relative velocity is obtuse), then this potential energy is converted back to kinetic energy (when the particles move with this force, i.e. the angle between the force and the relative velocity is acute).
Collisions of atoms are elastic, for example Rutherford backscattering.
A useful special case of elastic collision is when the two bodies have equal mass, in which case they will simply exchange their momenta.
The molecules—as distinct from atoms—of a gas or liquid rarely experience perfectly elastic collisions because kinetic energy is exchanged between the molecules’ translational motion and their internal degrees of freedom with each collision. At any instant, half the collisions are, to a varying extent, inelastic collisions (the pair possesses less kinetic energy in their translational motions after the collision than before), and half could be described as “super-elastic” (possessing more kinetic energy after the collision than before). Averaged across the entire sample, molecular collisions can be regarded as essentially elastic as long as Planck's law forbids energy from being carried away by black-body photons.
In the case of macroscopic bodies, perfectly elastic collisions are an ideal never fully realized, but approximated by the interactions of objects such as billiard balls.
When considering energies, possible rotational energy before and/or after a collision may also play a role.
Equations
One-dimensional Ne
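For the one-dimensional Newtonian case this excerpt introduces, the post-collision velocities follow directly from conserving both momentum and kinetic energy. A short sketch of the standard closed-form solution, verifying both conservation laws (function name is illustrative):

```python
def elastic_1d(m1, u1, m2, u2):
    """Post-collision velocities for a 1D perfectly elastic collision."""
    v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
    return v1, v2

# Equal masses exchange momenta: the moving body stops, the other takes its speed.
m1, u1, m2, u2 = 1.0, 3.0, 1.0, 0.0
v1, v2 = elastic_1d(m1, u1, m2, u2)
assert m1*u1 + m2*u2 == m1*v1 + m2*v2                # momentum conserved
assert m1*u1**2 + m2*u2**2 == m1*v1**2 + m2*v2**2    # kinetic energy conserved
# → v1 = 0.0, v2 = 3.0
```

The equal-mass case illustrates the momentum-swap special case mentioned in the text above.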
Document 3:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
Document 4:::
In physics, deflection is a change in a moving object's velocity, hence its trajectory, as a consequence of contact (collision) with a surface or the influence of a non-contact force field. Examples of the former include a ball bouncing off the ground or a bat; examples of the latter include a beam of electrons used to produce a picture, or the relativistic bending of light due to gravity.
Deflective efficiency
An object's deflective efficiency can never equal or surpass 100%, for example:
a mirror will never reflect exactly the same amount of light cast upon it, though it may concentrate the light which is reflected into a narrower beam.
on hitting the ground, a ball previously in free-fall (meaning no force other than gravity acted upon it) will never bounce back up to the place where it first started to descend.
This transfer of some energy into heat or other radiation is a consequence of the theory of thermodynamics, where, for every such interaction, some energy must be converted into alternative forms of energy or is absorbed by the deformation of the objects involved in the collision.
See also
Electrostatic deflection
Coriolis effect
Deflection yoke
Impulse
Reflection
Scattering
Collision
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When both momentum and kinetic energy are conserved in a closed system, the collision is called what?
A. a spontaneous collision
B. an elastic collision
C. an accidental collision
D. a static collision
Answer:
|
|
ai2_arc-576
|
multiple_choice
|
Which of the following is an example of an escape strategy that is used to avoid being killed and eaten by predators?
|
[
"Deer shed their antlers in the fall.",
"Newts drop their tails when threatened.",
"Anglerfish produce light to attract other fish.",
"Otters produce oil to coat their fur and make it waterproof."
] |
B
|
Relevant Documents:
Document 0:::
Escape response, escape reaction, or escape behavior is a mechanism by which animals avoid potential predation. It consists of a rapid sequence of movements, or lack of movement, that position the animal in such a way that allows it to hide, freeze, or flee from the supposed predator. Often, an animal's escape response is representative of an instinctual defensive mechanism, though there is evidence that these escape responses may be learned or influenced by experience.
The classical escape response follows this generalized, conceptual timeline: threat detection, escape initiation, escape execution, and escape termination or conclusion. Threat detection notifies an animal to a potential predator or otherwise dangerous stimulus, which provokes escape initiation, through neural reflexes or more coordinated cognitive processes. Escape execution refers to the movement or series of movements that will hide the animal from the threat or will allow for the animal to flee. Once the animal has effectively avoided the predator or threat, the escape response is terminated. Upon completion of the escape behavior or response, the animal may integrate the experience with its memory, allowing it to learn and adapt its escape response.
Escape responses are anti-predator behaviour that can vary from species to species. The behaviors themselves differ depending upon the species, but may include camouflaging techniques, freezing, or some form of fleeing (jumping, flying, withdrawal, etc.). In fact, variation between individuals is linked to increased survival. In addition, it is not merely increased speed that contributes to the success of the escape response; other factors, including reaction time and the individual's context can play a role. The individual escape response of a particular animal can vary based on an animal's previous experiences and its current state.
Evolutionary importance
The ability to perform an effective escape maneuver directly affects the fitness of the
Document 1:::
Animals have many different tactics for defending themselves, depending on the severity of the threat they are encountering. Stages of threat vary along a spectrum referred to as the "predatory imminence continuum", spanning from low-risk (pre-encounter) to high-risk (interaction) threats. The main assumption of the predatory imminence continuum is that as threat levels increase, defensive response strategies change. During the pre-encounter period, an animal may engage in activities like exploration or foraging. But if the animal senses that a predator is nearby, the animal may begin to express species specific defense reactions such as freezing in an attempt to avoid detection by the predator. However, in situations where a threat is imminent, once the animal is detected by its predator, freezing may no longer be the optimal behaviour for survival. At this point, the animal enters the circa-strike phase, where its behaviour will transition from passive freezing to active flight, or even attack if escape is not possible.
Development
The development of the predatory imminence continuum began with the description of species-specific defence reactions. Species-specific defence reactions are innate responses demonstrated by an animal when they experience a threat. Since survival behaviours are so vital for an animal to acquire and demonstrate rapidly, it has been theorized that these defence reactions would not have time to be learned and therefore, must be innate. While these behaviours are species-specific, there are three general categories of defence reactions - fleeing, freezing, and threatening. Species-specific defence reactions are now recognized as being organized in a hierarchical system where different behaviours are exhibited, depending on the level of threat experienced. However, when this concept was first proposed, the dominant species-specific defence reaction in a certain context was thought to be controlled by operant conditioning. That is, if a spe
Document 2:::
In ecology, hunting success is the proportion of hunts initiated by a predatory organism that end in success. Hunting success is determined by a number of factors such as the features of the predator, timing, different age classes, conditions for hunting, experience, and physical capabilities. Predators selectivity target certain categories of prey, in particular prey of a certain size. Prey animals that are in poor health are targeted and this contributes to the predator's hunting success. Different predation strategies can also contribute to hunting success, for example, hunting in groups gives predators an advantage over a solitary predator, and pack hunters like lions can kill animals that are too powerful for a solitary predator to overcome, like a megaherbivore.
Similar to hunting success, kill rates are the number of animals an individual predator kills per time unit. The hunting success rate focuses on the percentage of successful hunts. Hunting success is also measured in humans, but because of their unnaturally high hunting success, human hunters can have a large effect on prey populations and behaviour; especially in areas lacking natural predators, recreational hunting can have consequences for wildlife populations. Humans display a great variety of hunting methods, numbering up to 24. There are also many types of hunting, such as whaling, trophy hunting, big game hunting, fowling, poaching, pest control, etc.
Definition
Predators may actively seek out prey, if the predator spots its preferred target it would decide whether to attack or continue searching, and success ultimately depends on a number of factors. Predators may deploy a variety of hunting methods such as ambush, ballistic interception, pack hunting or pursuit predation. Hunting success is used to measure a predator's success rate against a species of prey or against all prey species in its diet, for example in the Mweya area of Queen Elizabeth National Park, lions had a hunting success
Document 3:::
Ambush predators or sit-and-wait predators are carnivorous animals that capture or trap prey via stealth, luring or by (typically instinctive) strategies utilizing an element of surprise. Unlike pursuit predators, who chase to capture prey using sheer speed or endurance, ambush predators avoid fatigue by staying in concealment, waiting patiently for the prey to get near, before launching a sudden overwhelming attack that quickly incapacitates and captures the prey.
The ambush is often opportunistic, and may be set by hiding in a burrow, by camouflage, by aggressive mimicry, or by the use of a trap (e.g. a web). The predator then uses a combination of senses to detect and assess the prey, and to time the strike. Nocturnal ambush predators such as cats and snakes have vertical slit pupils helping them to judge the distance to prey in dim light. Different ambush predators use a variety of means to capture their prey, from the long sticky tongues of chameleons to the expanding mouths of frogfishes.
Ambush predation is widely distributed in the animal kingdom, spanning some members of numerous groups such as the starfish, cephalopods, crustaceans, spiders, insects such as mantises, and vertebrates such as many snakes and fishes.
Strategy
Ambush predators usually remain motionless (sometimes hidden) and wait for prey to come within ambush distance before pouncing. Ambush predators are often camouflaged, and may be solitary. Pursuit predation becomes a better strategy than ambush predation when the predator is faster than the prey. Ambush predators use many intermediate strategies. For example, when a pursuit predator is faster than its prey over a short distance, but not in a long chase, then either stalking or ambush becomes necessary as part of the strategy.
Bringing the prey within range
Concealment
Ambush often relies on concealment, whether by staying out of sight or by means of camouflage.
Burrows
Ambush predators such as trapdoor spiders and Australian
Document 4:::
Agonism is a broad term which encompasses many behaviours that result from, or are triggered by biological conflict between competing organisms. Approximately 23 shark species are capable of producing such displays when threatened by intraspecific or interspecific competitors, as an evolutionary strategy to avoid unnecessary combat. The behavioural, postural, social and kinetic elements which comprise this complex, ritualized display can be easily distinguished from normal, or non-display behaviour, considered typical of that species' life history. The display itself confers pertinent information to the foe regarding the displayer's physical fitness, body size, inborn biological weaponry, confidence and determination to fight. This behaviour is advantageous because it is much less biologically taxing for an individual to display its intention to fight than the injuries it would sustain during conflict, which is why agonistic displays have been reinforced through evolutionary time, as an adaptation to personal fitness. Agonistic displays are essential to the social dynamics of many biological taxa, extending far beyond sharks.
Characteristics
Definition
Agonistic displays are ritualized sequences of actions, produced by animals belonging to almost all biological taxa, in response to conflict with other organisms. If challenged or threatened, animals may employ a suite of adaptive behaviours, which are used to reinforce the chances of their own survival. Behaviours which arise from agonistic conflict include:
fight or flight response
threat display to warn competitors and signal honest intentions
defence behaviour
simulated paralysis
avoidance behaviour
withdrawal
settling behaviour.
Each of these listed strategies constitute some manifestation of agonistic behaviour, and have been observed in numerous shark species, among many higher taxa in Kingdom Animalia. Displays of this nature are influenced and reinforced by natural selection, as an optimal strategy for
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which of the following is an example of an escape strategy that is used to avoid being killed and eaten by predators?
A. Deer shed their antlers in the fall.
B. Newts drop their tails when threatened.
C. Anglerfish produce light to attract other fish.
D. Otters produce oil to coat their fur and make it waterproof.
Answer:
|
|
sciq-6170
|
multiple_choice
|
A short reflex is completely what and only involves the local integration of sensory input with motor output?
|
[
"physiological",
"peripheral",
"central",
"neuronal"
] |
B
|
Relevant Documents:
Document 0:::
The sensory nervous system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory neurons (including the sensory receptor cells), neural pathways, and parts of the brain involved in sensory perception and interoception. Commonly recognized sensory systems are those for vision, hearing, touch, taste, smell, balance and visceral sensation. Sense organs are transducers that convert data from the outer physical world to the realm of the mind where people interpret the information, creating their perception of the world around them.
The receptive field is the area of the body or environment to which a receptor organ and receptor cells respond. For instance, the part of the world an eye can see, is its receptive field; the light that each rod or cone can see, is its receptive field. Receptive fields have been identified for the visual system, auditory system and somatosensory system.
Stimulus
Organisms need information to solve at least three kinds of problems: (a) to maintain an appropriate environment, i.e., homeostasis; (b) to time activities (e.g., seasonal changes in behavior) or synchronize activities with those of conspecifics; and (c) to locate and respond to resources or threats (e.g., by moving towards resources or evading or attacking threats). Organisms also need to transmit information in order to influence another's behavior: to identify themselves, warn conspecifics of danger, coordinate activities, or deceive.
Sensory systems code for four aspects of a stimulus; type (modality), intensity, location, and duration. Arrival time of a sound pulse and phase differences of continuous sound are used for sound localization. Certain receptors are sensitive to certain types of stimuli (for example, different mechanoreceptors respond best to different kinds of touch stimuli, like sharp or blunt objects). Receptors send impulses in certain patterns to send information about the intensity of a stimul
Document 1:::
In biology, a reflex, or reflex action, is an involuntary, unplanned sequence or action and nearly instantaneous response to a stimulus.
Reflexes are found with varying levels of complexity in organisms with a nervous system. A reflex occurs via neural pathways in the nervous system called reflex arcs. A stimulus initiates a neural signal, which is carried to a synapse. The signal is then transferred across the synapse to a motor neuron, which evokes a target response. These neural signals do not always travel to the brain, so many reflexes are an automatic response to a stimulus that does not receive or need conscious thought.
Many reflexes are fine-tuned to increase organism survival and self-defense. This is observed in reflexes such as the startle reflex, which provides an automatic response to an unexpected stimulus, and the feline righting reflex, which reorients a cat's body when falling to ensure safe landing. The simplest type of reflex, a short-latency reflex, has a single synapse, or junction, in the signaling pathway. Long-latency reflexes produce nerve signals that are transduced across multiple synapses before generating the reflex response.
Types of human reflexes
Myotatic reflexes
The myotatic or muscle stretch reflexes (sometimes known as deep tendon reflexes) provide information on the integrity of the central nervous system and peripheral nervous system. This information can be detected using electromyography (EMG). Generally, decreased reflexes indicate a peripheral problem, and lively or exaggerated reflexes a central one. A stretch reflex is the contraction of a muscle in response to its lengthwise stretch.
Biceps reflex (C5, C6)
Brachioradialis reflex (C5, C6, C7)
Extensor digitorum reflex (C6, C7)
Triceps reflex (C6, C7, C8)
Patellar reflex or knee-jerk reflex (L2, L3, L4)
Ankle jerk reflex (Achilles reflex) (S1, S2)
While the reflexes above are stimulated mechanically, the term H-reflex refers to the analogous reflex stimulated
Document 2:::
Reflexogenous (reflexogenic) zone (or the receptive field of a reflex) is the area of the body stimulation of which causes a definite unconditioned reflex. For example, stimulation of the mucosa of the nasopharynx elicits a sneezing reflex, and stimulation of the tracheae and bronchi elicits a coughing reflex. The receptive fields of various reflexes may overlap, and in consequence a stimulus applied to a certain part of the skin can elicit one reflex or another depending on its strength and the state of the central nervous system.
Document 3:::
In physiology, an efference copy or efferent copy is an internal copy of an outflowing (efferent), movement-producing signal generated by an organism's motor system. It can be collated with the (reafferent) sensory input that results from the agent's movement, enabling a comparison of actual movement with desired movement, and a shielding of perception from particular self-induced effects on the sensory input to achieve perceptual stability. Together with internal models, efference copies can serve to enable the brain to predict the effects of an action.
An equal term with a different history is corollary discharge.
Efference copies are important in enabling motor adaptation such as to enhance gaze stability. They have a role in the perception of self and nonself electric fields in electric fish. They also underlie the phenomenon of tickling.
Motor control
Motor signals
A motor signal from the central nervous system (CNS) to the periphery is called an efference, and a copy of this signal is called an efference copy. Sensory information coming from sensory receptors in the peripheral nervous system to the central nervous system is called afference. On a similar basis, nerves into the nervous system are afferent nerves and ones out are termed efferent nerves.
When an efferent signal is produced and sent to the motor system, it has been suggested that a copy of the signal, known as an efference copy, is created so that exafference (sensory signals generated from external stimuli in the environment) can be distinguished from reafference (sensory signals resulting from an animal's own actions).
This efference copy, by providing the input to a forward internal model, is then used to generate the predicted sensory feedback that estimates the sensory consequences of a motor command. The actual sensory consequences of the motor command are then deployed to compare with the corollary discharge to inform the CNS about how well the expected action matched its actual exter
Document 4:::
Sensory neuroscience is a subfield of neuroscience which explores the anatomy and physiology of neurons that are part of sensory systems such as vision, hearing, and olfaction. Neurons in sensory regions of the brain respond to stimuli by firing one or more nerve impulses (action potentials) following stimulus presentation. How is information about the outside world encoded by the rate, timing, and pattern of action potentials? This so-called neural code is currently poorly understood and sensory neuroscience plays an important role in the attempt to decipher it. Looking at early sensory processing is advantageous since brain regions that are "higher up" (e.g. those involved in memory or emotion) contain neurons which encode more abstract representations. However, the hope is that there are unifying principles which govern how the brain encodes and processes information. Studying sensory systems is an important stepping stone in our understanding of brain function in general.
Typical experiments
A typical experiment in sensory neuroscience involves the presentation of a series of relevant stimuli to an experimental subject while the subject's brain is being monitored. This monitoring can be accomplished by noninvasive means such as functional magnetic resonance imaging (fMRI) or electroencephalography (EEG), or by more invasive means such as electrophysiology, the use of electrodes to record the electrical activity of single neurons or groups of neurons. fMRI measures changes in blood flow that are related to the level of neural activity and provides low spatial and temporal resolution, but does provide data from the whole brain. In contrast, electrophysiology provides very high temporal resolution (the shapes of single spikes can be resolved) and data can be obtained from single cells. This is important since computations are performed within the dendrites of individual neurons.
Single neuron experiments
In most of the central nervous system, neurons communicate ex
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A short reflex is completely what and only involves the local integration of sensory input with motor output?
A. physiological
B. peripheral
C. central
D. neuronal
Answer:
|
|
sciq-675
|
multiple_choice
|
Humans are unique in their ability to alter their environment with the conscious purpose of increasing what, which acts as a limiting factor on populations in general?
|
[
"carrying capacity",
"density dependent limitation",
"niche",
"containing capacity"
] |
A
|
Relevant Documents:
Document 0:::
The carrying capacity of an environment is the maximum population size of a biological species that can be sustained by that specific environment, given the food, habitat, water, and other resources available. The carrying capacity is defined as the environment's maximal load, which in population ecology corresponds to the population equilibrium, when the number of deaths in a population equals the number of births (as well as immigration and emigration). The effect of carrying capacity on population dynamics is modelled with a logistic function. Carrying capacity is applied to the maximum population an environment can support in ecology, agriculture and fisheries. The term carrying capacity has been applied to a few different processes in the past before finally being applied to population limits in the 1950s. The notion of carrying capacity for humans is covered by the notion of sustainable population.
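The logistic model mentioned above can be sketched numerically. A minimal sketch; the parameter values (r, K, starting population) are illustrative assumptions, not figures from the text:

```python
# Logistic population growth, dN/dt = r * N * (1 - N / K), where K is
# the carrying capacity. Parameter values below are illustrative.
def logistic_growth(n0, r, k, dt=0.01, steps=5000):
    """Integrate the logistic equation with simple forward-Euler steps."""
    n = n0
    for _ in range(steps):
        n += r * n * (1 - n / k) * dt
    return n

# A population started well below K grows toward, and levels off at, K.
final = logistic_growth(n0=10, r=0.5, k=1000)
```

At equilibrium, births balance deaths and the population settles at K, which is exactly the definition of carrying capacity given above.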
At the global scale, scientific data indicates that humans are living beyond the carrying capacity of planet Earth and that this cannot continue indefinitely. This scientific evidence comes from many sources worldwide. It was presented in detail in the Millennium Ecosystem Assessment of 2005, a collaborative effort involving more than 1,360 experts worldwide. More recent, detailed accounts are provided by ecological footprint accounting, and interdisciplinary research on planetary boundaries to safe human use of the biosphere. The Sixth Assessment Report on Climate Change from the IPCC and the First Assessment Report on Biodiversity and Ecosystem Services by the IPBES, large international summaries of the state of scientific knowledge regarding climate disruption and biodiversity loss, also support this view.
An early detailed examination of global limits was published in the 1972 book Limits to Growth, which has prompted follow-up commentary and analysis. A 2012 review in Nature by 22 international researchers expressed concerns that the Earth may be "approaching
Document 1:::
A limiting factor is a variable of a system that causes a noticeable change in output or another measure of a type of system. The limiting factor is in a pyramid shape of organisms going up from the producers to consumers and so on. A factor not limiting over a certain domain of starting conditions may yet be limiting over another domain of starting conditions, including that of the factor.
Overview
The identification of a factor as limiting is possible only in distinction to one or more other factors that are non-limiting. Disciplines differ in their use of the term as to whether they allow the simultaneous existence of more than one limiting factor (which may then be called "co-limiting"), but they all require the existence of at least one non-limiting factor when the terms are used. There are several different possible scenarios of limitation when more than one factor is present. The first scenario, called single limitation, occurs when only one factor, the one with maximum demand, limits the system. Serial co-limitation is when one factor has no direct limiting effects on the system, but must be present to increase the limitation of a second factor. A third scenario, independent limitation, occurs when two factors both have limiting effects on the system but work through different mechanisms. Another scenario, synergistic limitation, occurs when both factors contribute to the same limitation mechanism, but in different ways.
In 1905 Frederick Blackman articulated the role of limiting factors as follows: "When a process is conditioned as to its rapidity by several separate factors the rate of the process is limited by the pace of the slowest factor." In terms of the magnitude of a function, he wrote, "When the magnitude of a function is limited by one of a set of possible factors, increase of that factor, and of that one alone, will be found to bring about an increase of the magnitude of the function."
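Blackman's statement can be phrased as a minimum rule. A minimal sketch; the factor names and rate values are invented for illustration:

```python
# "The rate of the process is limited by the pace of the slowest factor."
# Factor names and rate values below are illustrative assumptions.
def limited_rate(factor_rates):
    """Overall process rate under Blackman's limiting-factor rule."""
    return min(factor_rates.values())

factors = {"light": 0.9, "water": 0.4, "co2": 0.7}
rate = limited_rate(factors)      # water is the limiting factor

# Increasing the limiting factor, and that one alone, raises the
# overall rate -- Blackman's second statement.
factors["water"] = 0.8
raised = limited_rate(factors)    # now co2 limits, at 0.7
```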
Ecology
In population ecology, a regulating factor, al
Document 2:::
Earth system science (ESS) is the application of systems science to the Earth. In particular, it considers interactions and 'feedbacks', through material and energy fluxes, between the Earth's sub-systems' cycles, processes and "spheres"—atmosphere, hydrosphere, cryosphere, geosphere, pedosphere, lithosphere, biosphere, and even the magnetosphere—as well as the impact of human societies on these components. At its broadest scale, Earth system science brings together researchers across both the natural and social sciences, from fields including ecology, economics, geography, geology, glaciology, meteorology, oceanography, climatology, paleontology, sociology, and space science. Like the broader subject of systems science, Earth system science assumes a holistic view of the dynamic interaction between the Earth's spheres and their many constituent subsystems fluxes and processes, the resulting spatial organization and time evolution of these systems, and their variability, stability and instability. Subsets of Earth System science include systems geology and systems ecology, and many aspects of Earth System science are fundamental to the subjects of physical geography and climate science.
Definition
The Science Education Resource Center, Carleton College, offers the following description: "Earth System science embraces chemistry, physics, biology, mathematics and applied sciences in transcending disciplinary boundaries to treat the Earth as an integrated system. It seeks a deeper understanding of the physical, chemical, biological and human interactions that determine the past, current and future states of the Earth. Earth System science provides a physical basis for understanding the world in which we live and upon which humankind seeks to achieve sustainability".
Earth System science has articulated four overarching, definitive and critically important features of the Earth System, which include:
Variability: Many of the Earth System's natural 'modes' and variab
Document 3:::
Ecological competence is a term that has several different meanings depending on the context in which it is used. The term "ecological competence" can be used in a microbial sense, and it can be used in a sociological sense.
Microbiology
Ecological competence is the ability of an organism, often a pathogen, to survive and compete in new habitats. In the case of plant pathogens, it is also their ability to survive between growing seasons. For example, peanut clump virus can survive in the spores of its fungal vector until a new growing season begins and it can proceed to infect its primary host again. If a pathogen does not have ecological competence it is likely to become extinct. Bacteria and other pathogens can increase their ecological competence by creating a micro-niche, or a highly specialized environment that only they can survive in. This in turn will increase plasmid stability. Increased plasmid stability leads to a higher ecological competence due to added spatial organization and regulated cell protection.
Sociology
Ecological competence in a sociological sense is based around the relationship that humans have formed with the environment. It is often important in certain careers that will have a drastic impact on the surrounding ecosystem. A specific example is engineers working around and planning mining operations, due to the possible negative effects it can have on the surrounding environment. Ecological competence is especially important at the managerial level so that managers may understand society's risk to nature. These risks are learned through specific ecological knowledge so that the environment can be better protected in the future.
See also
Cultural ecology
Environmental education
Sustainable development
Ecological relationship
Document 4:::
Resource refers to all the materials available in our environment which are technologically accessible, economically feasible and culturally sustainable and help us to satisfy our needs and wants. Resources can broadly be classified upon their availability — they are classified into renewable and non-renewable resources. They can also be classified as actual and potential on the basis of the level of development and use, on the basis of origin they can be classified as biotic and abiotic, and on the basis of their distribution, as ubiquitous and localised (private, community-owned, national and international resources). An item becomes a resource with time and developing technology. The benefits of resource utilization may include increased wealth, proper functioning of a system, or enhanced well-being. From a human perspective, a natural resource is anything obtained from the environment to satisfy human needs and wants. From a broader biological or ecological perspective, a resource satisfies the needs of a living organism (see biological resource).
The concept of resources has been developed across many established areas of work, in economics, biology and ecology, computer science, management, and human resources for example - linked to the concepts of competition, sustainability, conservation, and stewardship. In application within human society, commercial or non-commercial factors require resource allocation through resource management.
The concept of a resource can also be tied to the direction of leadership over resources, this can include the things leaders have responsibility for over the human resources, with management, help, support or direction such as in charge of a professional group, technical experts, innovative leaders, archiving expertise, academic management, association management, business management, healthcare management, military management, public administration, spiritual leadership and social networking administrator.
individuals exp
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Humans are unique in their ability to alter their environment with the conscious purpose of increasing what, which acts as a limiting factor on populations in general?
A. carrying capacity
B. density dependent limitation
C. niche
D. containing capacity
Answer:
|
|
scienceQA-10545
|
multiple_choice
|
How long is a seesaw?
|
[
"3 feet",
"3 miles",
"3 inches",
"3 yards"
] |
D
|
The best estimate for the length of a seesaw is 3 yards.
3 inches and 3 feet are too short. 3 miles is too long.
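The estimation logic can be made explicit by converting every choice to a common unit; the ~10-foot reference length for a seesaw is an assumption used only for illustration:

```python
# Convert each answer choice to inches and pick the one closest to a
# plausible seesaw length (assumed here to be about 10 feet = 120 in).
INCHES_PER = {"inch": 1, "foot": 12, "yard": 36, "mile": 63360}

choices = {
    "3 feet": 3 * INCHES_PER["foot"],    # 36 in
    "3 miles": 3 * INCHES_PER["mile"],   # 190,080 in
    "3 inches": 3 * INCHES_PER["inch"],  # 3 in
    "3 yards": 3 * INCHES_PER["yard"],   # 108 in
}

best = min(choices, key=lambda c: abs(choices[c] - 120))
```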
|
Relevant Documents:
Document 0:::
The Texas Math and Science Coaches Association or TMSCA is an organization for coaches of academic University Interscholastic League teams in Texas middle schools and high schools, specifically those that compete in mathematics and science-related tests.
Events
There are four events in the TMSCA at both the middle and high school level: Number Sense, General Mathematics, Calculator Applications, and General Science.
Number Sense is an 80-question exam that students are given only 10 minutes to solve. Additionally, no scratch work or paper calculations are allowed. These questions range from simple calculations such as 99+98 to more complicated operations such as 1001×1938. Each calculation is able to be done with a certain trick or shortcut that makes the calculations easier.
The high school exam includes calculus and other difficult topics in the questions also with the same rules applied as to the middle school version.
It is well known that the grading for this event is particularly stringent as errors such as writing over a line or crossing out potential answers are considered as incorrect answers.
General Mathematics is a 50-question exam that students are given only 40 minutes to solve. These problems are usually more challenging than questions on the Number Sense test, and the General Mathematics word problems take more thinking to figure out. Every problem correct is worth 5 points, and for every problem incorrect, 2 points are deducted. Tiebreakers are determined by the person that misses the first problem and by percent accuracy.
Calculator Applications is an 80-question exam that students are given only 30 minutes to solve. This test requires practice on the calculator, knowledge of a few crucial formulas, and much speed and intensity. Memorizing formulas, tips, and tricks will not be enough. In this event, plenty of practice is necessary in order to master the locations of the keys and develop the speed necessary. All correct questions are worth 5
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 3:::
The Mathematics Subject Classification (MSC) is an alphanumerical classification scheme that has collaboratively been produced by staff of, and based on the coverage of, the two major mathematical reviewing databases, Mathematical Reviews and Zentralblatt MATH. The MSC is used by many mathematics journals, which ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The current version is MSC2020.
Structure
The MSC is a hierarchical scheme, with three levels of structure. A classification can be two, three or five digits long, depending on how many levels of the classification scheme are used.
The first level is represented by a two-digit number, the second by a letter, and the third by another two-digit number. For example:
53 is the classification for differential geometry
53A is the classification for classical differential geometry
53A45 is the classification for vector and tensor analysis
First level
At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. In addition to the typical areas of mathematical research, there are top-level categories for "History and Biography", "Mathematics Education", and for the overlap with different sciences. Physics (i.e. mathematical physics) is particularly well represented in the classification scheme with a number of different categories including:
Fluid mechanics
Quantum mechanics
Geophysics
Optics and electromagnetic theory
All valid MSC classification codes must have at least the first-level identifier.
Second level
The second-level codes are a single letter from the Latin alphabet. These represent specific areas covered by the first-level discipline. The second-level codes vary from discipline to discipline.
For example, for differential geometry, the top-level code is 53, and the second-level codes are:
A for classical differential geometry
B for local differential geometry
C for glo
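The two-digit / letter / two-digit structure described above is easy to check mechanically. A minimal sketch; it deliberately ignores special MSC conventions such as the `xx` placeholder forms:

```python
import re

# First level: two digits; second level: one capital letter;
# third level: two more digits. Each deeper level requires the previous.
MSC_PATTERN = re.compile(r"\d{2}(?:[A-Z](?:\d{2})?)?")

def is_valid_msc(code):
    return MSC_PATTERN.fullmatch(code) is not None

assert is_valid_msc("53")        # differential geometry
assert is_valid_msc("53A")       # classical differential geometry
assert is_valid_msc("53A45")     # vector and tensor analysis
assert not is_valid_msc("53a45") # lowercase letter is invalid
```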
Document 4:::
Tech City College (Formerly STEM Academy) is a free school sixth form located in the Islington area of the London Borough of Islington, England.
It originally opened in September 2013 as STEM Academy Tech City and specialised in Science, Technology, Engineering and Maths (STEM) and the Creative Application of Maths and Science. In September 2015, STEM Academy joined the Aspirations Academy Trust and was renamed Tech City College. Tech City College offers A-levels and BTECs as programmes of study for students.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How long is a seesaw?
A. 3 feet
B. 3 miles
C. 3 inches
D. 3 yards
Answer:
|
sciq-9551
|
multiple_choice
|
An electron is accelerated from rest through a potential difference of what?
|
[
"amperes",
"watts",
"joules",
"volts"
] |
D
|
Relevant Documents:
Document 0:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 1:::
Electric potential energy is a potential energy (measured in joules) that results from conservative Coulomb forces and is associated with the configuration of a particular set of point charges within a defined system. An object may be said to have electric potential energy by virtue of either its own electric charge or its relative position to other electrically charged objects.
The term "electric potential energy" is used to describe the potential energy in systems with time-variant electric fields, while the term "electrostatic potential energy" is used to describe the potential energy in systems with time-invariant electric fields.
Definition
The electric potential energy of a system of point charges is defined as the work required to assemble this system of charges by bringing them close together, as in the system from an infinite distance. Alternatively, the electric potential energy of any given charge or system of charges is termed as the total work done by an external agent in bringing the charge or the system of charges from infinity to the present configuration without undergoing any acceleration.
The electrostatic potential energy can also be defined from the electric potential as follows: U_E = qV, where V is the electric potential at the position of the charge q.
Units
The SI unit of electric potential energy is the joule (named after the English physicist James Prescott Joule). In the CGS system the erg is the unit of energy, being equal to 10^−7 joules. Also electronvolts may be used, 1 eV = 1.602×10^−19 joules.
Electrostatic potential energy of one point charge
One point charge q in the presence of another point charge Q
The electrostatic potential energy, U_E, of one point charge q at position r in the presence of a point charge Q, taking an infinite separation between the charges as the reference position, is:

U_E = k_e qQ / r

where k_e is the Coulomb constant, r is the distance between the point charges q and Q, and q and Q are the charges (not the absolute values of the charges—i.e., an electron would have a negative value of charge when
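Tying this back to the question above: an electron accelerated from rest through a potential difference of V volts gains kinetic energy W = qV. A non-relativistic sketch using the electronvolt conversion quoted above:

```python
import math

E_CHARGE = 1.602e-19    # elementary charge, in coulombs
M_ELECTRON = 9.109e-31  # electron rest mass, in kilograms

def electron_after_potential(volts):
    """Kinetic energy (J) and classical speed (m/s) of an electron
    accelerated from rest through the given potential difference."""
    energy_j = E_CHARGE * volts                   # W = qV: V volts -> V eV
    speed = math.sqrt(2 * energy_j / M_ELECTRON)  # (1/2) m v^2 = qV
    return energy_j, speed

# 100 V of potential difference gives 100 eV of kinetic energy, roughly
# 5.9e6 m/s -- slow enough that the classical formula is a fair sketch.
energy, speed = electron_after_potential(100.0)
```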
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
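The intended answer ("decreases") can be checked numerically with the reversible adiabatic relation T·V^(γ−1) = const. The sketch below is an assumption-laden illustration, not part of the source: the monatomic gas (γ = 5/3) and the chosen initial state are arbitrary.

```python
# Sketch: confirming that temperature drops during adiabatic expansion
# of an ideal gas, using T * V**(gamma - 1) = const.
# Gas choice (monatomic, gamma = 5/3) and states are illustrative.

GAMMA = 5.0 / 3.0  # heat-capacity ratio for a monatomic ideal gas

def adiabatic_final_temperature(t1: float, v1: float, v2: float) -> float:
    """Return T2 after a reversible adiabatic change from (T1, V1) to V2."""
    return t1 * (v1 / v2) ** (GAMMA - 1.0)

t2 = adiabatic_final_temperature(300.0, 1.0, 2.0)  # double the volume
print(t2)  # ≈ 189 K — the temperature decreases on expansion
```

Any γ > 1 gives the same qualitative result, which is why the question tests the concept rather than the arithmetic.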
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework.
AP Physics 1 and 2
AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge.
AP Physics 1
AP Physics 1 covers Newtonian mechanics, including:
Unit 1: Kinematics
Unit 2: Dynamics
Unit 3: Circular Motion and Gravitation
Unit 4: Energy
Unit 5: Momentum
Unit 6: Simple Harmonic Motion
Unit 7: Torque and Rotational Motion
Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2.
AP Physics 2
AP Physics 2 covers the following topics:
Unit 1: Fluids
Unit 2: Thermodynamics
Unit 3: Electric Force, Field, and Potential
Unit 4: Electric Circuits
Unit 5: Magnetism and Electromagnetic Induction
Unit 6: Geometric and Physical Optics
Unit 7: Quantum, Atomic, and Nuclear Physics
AP Physics C
From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single
Document 4:::
In electromagnetism and electronics, electromotive force (also electromotance, abbreviated emf, denoted or ) is an energy transfer to an electric circuit per unit of electric charge, measured in volts. Devices called electrical transducers provide an emf by converting other forms of energy into electrical energy. Other electrical equipment also produce an emf, such as batteries, which convert chemical energy, and generators, which convert mechanical energy. This energy conversion is achieved by physical forces applying physical work on electric charges. However, electromotive force itself is not a physical force, and ISO/IEC standards have deprecated the term in favor of source voltage or source tension instead (denoted ).
An electronic–hydraulic analogy may view emf as the mechanical work done to water by a pump, which results in a pressure difference (analogous to voltage).
In electromagnetic induction, emf can be defined around a closed loop of a conductor as the electromagnetic work that would be done on an elementary electric charge (such as an electron) if it travels once around the loop.
For two-terminal devices modeled as a Thévenin equivalent circuit, an equivalent emf can be measured as the open-circuit voltage between the two terminals. This emf can drive an electric current if an external circuit is attached to the terminals, in which case the device becomes the voltage source of that circuit.
Although an emf gives rise to a voltage and can be measured as a voltage and may sometimes informally be called a "voltage", they are not the same phenomenon (see ).
Overview
Devices that can provide emf include electrochemical cells, thermoelectric devices, solar cells, photodiodes, electrical generators, inductors, transformers and even Van de Graaff generators. In nature, emf is generated when magnetic field fluctuations occur through a surface. For example, the shifting of the Earth's magnetic field during a geomagnetic storm induces currents in an electr
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
An electron is accelerated from rest through a potential difference of what?
A. amperes
B. watts
C. joules
D. volts
Answer:
|
|
sciq-468
|
multiple_choice
|
Where are the desmosome found in a cell?
|
[
"neuron",
"epithelium",
"epithelial",
"coating"
] |
B
|
Relevant Documents:
Document 0:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
Document 1:::
In mammals, trichocytes are the specialized epithelial cells from which the highly mechanically resilient tissues hair and nails are formed. They can be identified by the fact that they express "hard", "trichocyte" or "hair" keratin proteins. These are modified keratins containing large amounts of the amino acid cysteine, which facilitates chemical cross-linking of these proteins to form the tough material from which hair and nail is composed. These cells give rise to non-hair non-keratinized IRSC (inner root sheath cell) as well.
See also
List of human cell types derived from the germ layers
List of distinct cell types in the adult human body
Document 2:::
Mesosomes or chondrioids are folded invaginations in the plasma membrane of bacteria that are produced by the chemical fixation techniques used to prepare samples for electron microscopy. Although several functions were proposed for these structures in the 1960s, they were recognized as artifacts by the late 1970s and are no longer considered to be part of the normal structure of bacterial cells. These extensions are in the form of vesicles, tubules and lamellae.
Initial observations
These structures are invaginations of the plasma membrane observed in gram-positive bacteria that have been chemically fixed to prepare them for electron microscopy. They were first observed in 1953 by George B. Chapman and James Hillier, who referred to them as "peripheral bodies." They were termed "mesosomes" by Fitz-James in 1960.
Initially, it was thought that mesosomes might play a role in several cellular processes, such as cell wall formation during cell division, chromosome replication, or as a site for oxidative phosphorylation. The mesosome was thought to increase the surface area of the cell, aiding the cell in cellular respiration. This is analogous to cristae in the mitochondrion in eukaryotic cells, which are finger-like projections and help eukaryotic cells undergo cellular respiration. Mesosomes were also hypothesized to aid in photosynthesis, cell division, DNA replication, and cell compartmentalisation.
Disproof of hypothesis
These models were called into question during the late 1970s when data accumulated suggesting that mesosomes are artifacts formed through damage to the membrane during the process of chemical fixation, and do not occur in cells that have not been chemically fixed. By the mid to late 1980s, with advances in cryofixation and freeze substitution methods for electron microscopy, it was generally concluded that mesosomes do not exist in living cells. However, a few researchers continue to argue that the evidence remains inconclusive, and that mesoso
Document 3:::
The cilium (: cilia; ), is a membrane-bound organelle found on most types of eukaryotic cell. Cilia are absent in bacteria and archaea. The cilium has the shape of a slender threadlike projection that extends from the surface of the much larger cell body. Eukaryotic flagella found on sperm cells and many protozoans have a similar structure to motile cilia that enables swimming through liquids; they are longer than cilia and have a different undulating motion.
There are two major classes of cilia: motile and non-motile cilia, each with a subtype, giving four types in all. A cell will typically have one primary cilium or many motile cilia. The structure of the cilium core called the axoneme determines the cilium class. Most motile cilia have a central pair of single microtubules surrounded by nine pairs of double microtubules called a 9+2 axoneme. Most non-motile cilia have a 9+0 axoneme that lacks the central pair of microtubules. Also lacking are the associated components that enable motility including the outer and inner dynein arms, and radial spokes. Some motile cilia lack the central pair, and some non-motile cilia have the central pair, hence the four types.
Most non-motile cilia are termed primary cilia or sensory cilia and serve solely as sensory organelles. Most vertebrate cell types possess a single non-motile primary cilium, which functions as a cellular antenna. Olfactory neurons possess a great many non-motile cilia. Non-motile cilia that have a central pair of microtubules are the kinocilia present on hair cells.
Motile cilia are found in large numbers on respiratory epithelial cells – around 200 cilia per cell, where they function in mucociliary clearance, and also have mechanosensory and chemosensory functions. Motile cilia on ependymal cells move the cerebrospinal fluid through the ventricular system of the brain. Motile cilia are also present in the oviducts (fallopian tubes) of female (therian) mammals where they function in moving the egg cell
Document 4:::
A dendrite is a branching projection of the cytoplasm of a cell. While the term is most commonly used to refer to the branching projections of neurons, it can also be used to refer to features of other types of cells that, while having a similar appearance, are actually quite distinct structures.
Non-neuronal cells that have dendrites:
Dendritic cells, part of the mammalian immune system
Melanocytes, pigment-producing cells located in the skin
Merkel cells, receptor-cells in the skin associated with the sense of touch
Corneal keratocytes, specialized fibroblasts residing in the stroma.
Cell biology
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where are the desmosome found in a cell?
A. neuron
B. epithelium
C. epithelial
D. coating
Answer:
|
|
sciq-10991
|
multiple_choice
|
The heme parts of a hemoglobin molecule bind with what element?
|
[
"oxygen",
"hydrogen",
"carbon",
"nitrogen"
] |
A
|
Relevant Documents:
Document 0:::
Heme C (or haem C) is an important kind of heme.
History
The correct structure of heme C was published, in mid 20th century, by the Swedish biochemist K.-G. Paul. This work confirmed the structure first inferred by the great Swedish biochemist Hugo Theorell. The structure of heme C, based upon NMR and IR experiments of the reduced, Fe(II), form of the heme, was confirmed in 1975. The structure of heme C including the absolute stereochemical configuration about the thioether bonds was first presented for the vertebrate protein, cytochrome c and is now extended to many other heme C containing proteins.
Properties
Heme C differs from heme B in that the two vinyl side chains of heme B are replaced by covalent, thioether linkages to the apoprotein. The two thioether linkages are typically made by cysteine residues of the protein. These linkages do not allow the heme C to easily dissociate from the holoprotein, cytochrome c, compared with the more easily dissociated heme B that may dissociate from the holoprotein, the heme-protein complex, even under mild conditions. This allows a very wide range of cytochrome c structure and function, with myriad c type cytochromes acting primarily as electron carriers. The redox potential for cytochrome c can also be "fine-tuned" by small changes in protein structure and solvent interaction.
The number of heme C units bound to a holoprotein is highly variable. For vertebrate cells one heme C per protein is the rule but for bacteria this number is often 2, 4, 5, 6 or even 16 heme C groups per holoprotein. It is generally agreed the number and arrangement of heme C groups are related and even required for proper holoprotein function. For instance, those proteins containing several heme C groups are involved with multiple electron transfer reactions, particularly important is the 6 electron reduction required to reduce atmospheric nitrogen into two organic ammonia molecules. It is common for the heme C to amino acid ratio to be h
Document 1:::
Heme (American English), or haem (Commonwealth English, both pronounced /hi:m/ ), is a precursor to hemoglobin, which is necessary to bind oxygen in the bloodstream. Heme is biosynthesized in both the bone marrow and the liver.
In biochemical terms, heme is a coordination complex "consisting of an iron ion coordinated to a porphyrin acting as a tetradentate ligand, and to one or two axial ligands." The definition is loose, and many depictions omit the axial ligands. Among the metalloporphyrins deployed by metalloproteins as prosthetic groups, heme is one of the most widely used and defines a family of proteins known as hemoproteins. Hemes are most commonly recognized as components of hemoglobin, the red pigment in blood, but are also found in a number of other biologically important hemoproteins such as myoglobin, cytochromes, catalases, heme peroxidase, and endothelial nitric oxide synthase.
The word haem is derived from Greek haima meaning "blood".
Function
Hemoproteins have diverse biological functions including the transportation of diatomic gases, chemical catalysis, diatomic gas detection, and electron transfer. The heme iron serves as a source or sink of electrons during electron transfer or redox chemistry. In peroxidase reactions, the porphyrin molecule also serves as an electron source, being able to delocalize radical electrons in the conjugated ring. In the transportation or detection of diatomic gases, the gas binds to the heme iron. During the detection of diatomic gases, the binding of the gas ligand to the heme iron induces conformational changes in the surrounding protein. In general, diatomic gases only bind to the reduced heme, as ferrous Fe(II) while most peroxidases cycle between Fe(III) and Fe(IV) and hemeproteins involved in mitochondrial redox, oxidation-reduction, cycle between Fe(II) and Fe(III).
It has been speculated that the original evolutionary function of hemoproteins was electron transfer in primitive sulfur-based photosynthesi
Document 2:::
Mu hemoglobin is a predicted protein encoded in the HBM gene. The mRNA is expressed at moderate levels, but the protein has not been detected by mass spectrometry. The order of genes is: 5' - zeta - pseudozeta - mu - pseudoalpha-1 - alpha-2 - alpha-1 - theta1 - 3'.
Document 3:::
Heme B or haem B (also known as protoheme IX) is the most abundant heme. Hemoglobin and myoglobin are examples of oxygen transport proteins that contain heme B. The peroxidase family of enzymes also contain heme B. The COX-1 and COX-2 enzymes (cyclooxygenase) of recent fame, also contain heme B at one of two active sites.
Generally, heme B is attached to the surrounding protein matrix (known as the apoprotein) through a single coordination bond between the heme iron and an amino-acid side-chain.
Both hemoglobin and myoglobin have a coordination bond to an evolutionarily-conserved histidine, while nitric oxide synthase and cytochrome P450 have a coordination bond to an evolutionarily-conserved cysteine bound to the iron center of heme B.
Since the iron in heme B containing proteins is bound to the four nitrogens of the porphyrin (forming a plane) and a single electron donating atom of the protein, the iron is often in a pentacoordinate state. When oxygen or the toxic carbon monoxide is bound the iron becomes hexacoordinated.
The correct structures of heme B and heme S were first elucidated by German chemist Hans Fischer.
Document 4:::
Heme O (or haem O) differs from the closely related heme A by having a methyl group at ring position 8 instead of the formyl group. The isoprenoid chain at position 2 is the same.
Heme O, found in the bacterium Escherichia coli, functions in a similar manner to heme A in mammalian oxygen reduction.
See also
Heme
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The heme parts of a hemoglobin molecule bind with what element?
A. oxygen
B. hydrogen
C. carbon
D. nitrogen
Answer:
|
|
ai2_arc-262
|
multiple_choice
|
Which of the following best describes the purpose of the chromosomes in the nucleus of a cell?
|
[
"to store the genetic instructions needed to specify traits",
"to release energy by breaking down food molecules",
"to transport nutrients into and out of the cell",
"to protect the cells from microorganisms"
] |
A
|
Relevant Documents:
Document 0:::
Nuclear organization refers to the spatial distribution of chromatin within a cell nucleus. There are many different levels and scales of nuclear organisation. Chromatin is a higher order structure of DNA.
At the smallest scale, DNA is packaged into units called nucleosomes. The quantity and organisation of these nucleosomes can affect the accessibility of local chromatin. This has a knock-on effect on the expression of nearby genes, additionally determining whether or not they can be regulated by transcription factors.
At slightly larger scales, DNA looping can physically bring together DNA elements that would otherwise be separated by large distances. These interactions allow regulatory signals to cross over large genomic distances—for example, from enhancers to promoters.
In contrast, on a large scale, the arrangement of chromosomes can determine their properties. Chromosomes are organised into two compartments labelled A ("active") and B ("inactive"), each with distinct properties. Moreover, entire chromosomes segregate into distinct regions called chromosome territories.
Importance
Each human cell contains around two metres of DNA, which must be tightly folded to fit inside the cell nucleus. However, in order for the cell to function, proteins must be able to access the sequence information contained within the DNA, in spite of its tightly-packed nature. Hence, the cell has a number of mechanisms in place to control how DNA is organized.
Moreover, nuclear organization can play a role in establishing cell identity. Cells within an organism have near identical nucleic acid sequences, but often exhibit different phenotypes. One way in which this individuality occurs is through changes in genome architecture, which can alter the expression of different sets of genes. These alterations can have a downstream effect on cellular functions such as cell cycle facilitation, DNA replication, nuclear transport, and alteration of nuclear structure. Controlled changes in
Document 1:::
The nucleoplasm, also known as karyoplasm, is the type of protoplasm that makes up the cell nucleus, the most prominent organelle of the eukaryotic cell. It is enclosed by the nuclear envelope, also known as the nuclear membrane. The nucleoplasm resembles the cytoplasm of a eukaryotic cell in that it is a gel-like substance found within a membrane, although the nucleoplasm only fills out the space in the nucleus and has its own unique functions. The nucleoplasm suspends structures within the nucleus that are not membrane-bound and is responsible for maintaining the shape of the nucleus. The structures suspended in the nucleoplasm include chromosomes, various proteins, nuclear bodies, the nucleolus, nucleoporins, nucleotides, and nuclear speckles.
The soluble, liquid portion of the nucleoplasm is called the karyolymph, nucleosol, or nuclear hyaloplasm.
History
The existence of the nucleus, including the nucleoplasm, was first documented as early as 1682 by the Dutch microscopist Leeuwenhoek and was later described and drawn by Franz Bauer. However, the cell nucleus was not named and described in detail until Robert Brown's presentation to the Linnean Society in 1831.
The nucleoplasm, while described by Bauer and Brown, was not specifically isolated as a separate entity until its naming in 1882 by Polish-German scientist Eduard Strasburger, one of the most famous botanists of the 19th century, and the first person to discover mitosis in plants.
Role
Many important cell functions take place in the nucleus, more specifically in the nucleoplasm. The main function of the nucleoplasm is to provide the proper environment for essential processes that take place in the nucleus, serving as the suspension substance for all organelles inside the nucleus, and storing the structures that are used in these processes. 34% of proteins encoded in the human genome are ones that localize to the nucleoplasm. These proteins take part in RNA transcription and gene regulation in the n
Document 2:::
In biology, the nuclear matrix is the network of fibres found throughout the inside of a cell nucleus after a specific method of chemical extraction. According to some it is somewhat analogous to the cell cytoskeleton. In contrast to the cytoskeleton, however, the nuclear matrix has been proposed to be a dynamic structure. Along with the nuclear lamina, it supposedly aids in organizing the genetic information within the cell.
The exact function of this structure is still disputed, and its very existence has been called into question. Evidence for such a structure was recognised as long ago as 1948, and consequently many proteins associated with the matrix have been discovered. The presence of intra-cellular proteins is common ground, and it is agreed that proteins such as the Scaffold, or Matrix Associated Proteins (SAR or MAR) have some role in the organisation of chromatin in the living cell. There is evidence that the nuclear matrix is involved in regulation of gene expression in Arabidopsis thaliana.
Whenever a similar structure can actually be found in living cells remains a topic of discussion. According to some sources, most, if not all proteins found in nuclear matrix are the aggregates of proteins of structures that can be found in the nucleus of living cells. Such structures are nuclear lamina, which consist of proteins termed lamins which can be also found in the nuclear matrix.
Validity of nuclear matrix
For a long time the question whether a polymer meshwork, a “nuclear matrix” or “nuclear-scaffold” or "NuMat" is an essential component of the in vivo nuclear architecture has remained a matter of debate. While there are arguments that the relative position of chromosome territories (CTs), the equivalent of condensed metaphase chromosomes at interphase, may be maintained due to steric hindrance or electrostatic repulsion forces between the apparently highly structured CT surfaces, this concept has to be reconciled with observations according to which
Document 3:::
Biorientation is the phenomenon whereby microtubules emanating from different microtubule organizing centres (MTOCs) attach to kinetochores of sister chromatids. This results in the sister chromatids moving to opposite poles of the cell during cell division, and thus results in both daughter cells having the same genetic information.
Kinetochores link the chromosomes to the mitotic spindle - doing so relies on intricate interactions between microtubules and kinetochores. It has been shown that, in fission yeast, microtubule attachment can make frequent erroneous attachments early in mitosis, which are then often corrected prior to anaphase onset by a system which uses protein kinase to affect kinetochore microtubules in the absence of astriction between sister chromatids.
Proper biorientation allows correct chromosomal segregation in cell division. Although this process is not well understood, high-resolution imaging of live mouse oocytes has revealed that chromosomes form an intermediate chromosomal configuration, called the prometaphase belt, which occurs prior to biorientation. Kitajima et al. estimate that about 90% of chromosomes require correction of the kinetochore-microtubule attachments (using Aurora kinase) prior to obtaining correct biorientation. This suggests a possible cause for the elevated frequency of abnormal chromosome counts (aneuploidy) in mammals.
Several methods are postulated by which chromosomes biorient when they are located far from the pole with which they need to connect. One mechanism involves the kinetochore meeting microtubules from the distal pole. Another method described is based on observations that the kinetochore of one pole-oriented chromosome attaches to kinetochore fibers of an already bioriented chromosome. These two mechanisms possibly work in concert - certain chromosomes may biorient via encounters with microtubules from distal poles, which is then followed by kinetochore fibers that speed up biorientation with alrea
Document 4:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which of the following best describes the purpose of the chromosomes in the nucleus of a cell?
A. to store the genetic instructions needed to specify traits
B. to release energy by breaking down food molecules
C. to transport nutrients into and out of the cell
D. to protect the cells from microorganisms
Answer:
|
|
sciq-6778
|
multiple_choice
|
What always has the same elements in the same ratio?
|
[
"mitochondria",
"compound",
"component",
"cell"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, double stranded macromolecule that carries the hereditary information of the cell and found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 2:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required this exam score as a part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 3:::
This is a list of topics in molecular biology. See also index of biochemistry articles.
Document 4:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What always has the same elements in the same ratio?
A. mitochondria
B. compound
C. component
D. cell
Answer:
|
|
scienceQA-4729
|
multiple_choice
|
What do these two changes have in common?
milk going sour
a copper statue turning green
|
[
"Both are caused by cooling.",
"Both are only physical changes.",
"Both are chemical changes.",
"Both are caused by heating."
] |
C
|
Step 1: Think about each change.
Milk going sour is a chemical change. The type of matter in the milk slowly changes. The new matter that is formed gives the milk its sour taste.
A copper statue turning green is a chemical change. The copper reacts with oxygen in the air. This reaction forms a different type of matter called copper oxide. The copper oxide is green.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are chemical changes. They are not physical changes.
Both are chemical changes.
Both changes are chemical changes. The type of matter before and after each change is different.
Both are caused by heating.
Neither change is caused by heating.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relevant Documents:
Document 0:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
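The correct choice ("decreases") can be checked numerically from the adiabatic relation T·V^(γ−1) = constant. A minimal sketch, assuming a monatomic ideal gas (γ = 5/3) and illustrative initial conditions:

```python
# Adiabatic relation for an ideal gas: T * V**(gamma - 1) = constant.
# Doubling the volume of a monatomic ideal gas must lower its temperature.

gamma = 5.0 / 3.0          # heat-capacity ratio for a monatomic ideal gas
T1, V1 = 300.0, 1.0        # initial temperature (K) and volume (arbitrary units)
V2 = 2.0 * V1              # adiabatic expansion to twice the volume

T2 = T1 * (V1 / V2) ** (gamma - 1)
print(f"T2 = {T2:.1f} K")  # lower than T1: the temperature decreases
```

Any γ > 1 gives the same qualitative answer, which is why the question can be answered without numbers.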
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
In physics, a dynamical system is said to be mixing if the phase space of the system becomes strongly intertwined, according to at least one of several mathematical definitions. For example, a measure-preserving transformation T is said to be strong mixing if

lim_{n→∞} μ(T^{−n}A ∩ B) = μ(A) μ(B)

whenever A and B are any measurable sets and μ is the associated measure. Other definitions are possible, including weak mixing and topological mixing.
The mathematical definition of mixing is meant to capture the notion of physical mixing. A canonical example is the Cuba libre: suppose one is adding rum (the set A) to a glass of cola. After stirring the glass, the bottom half of the glass (the set B) will contain rum, and it will be in equal proportion as it is elsewhere in the glass. The mixing is uniform: no matter which region B one looks at, some of A will be in that region. A far more detailed, but still informal description of mixing can be found in the article on mixing (mathematics).
Every mixing transformation is ergodic, but there are ergodic transformations which are not mixing.
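The strong-mixing condition μ(T^{−n}A ∩ B) → μ(A)μ(B) can be illustrated with the doubling map T(x) = 2x mod 1 on [0, 1) under Lebesgue measure, a standard example of a mixing transformation. A Monte Carlo sketch (the map and the sets A, B are my choice of example, not taken from the text):

```python
import random

# Doubling map T(x) = 2x mod 1 is strong mixing for Lebesgue measure on [0, 1).
# Estimate mu(T^-n A  intersect  B) for A = B = [0, 1/2); it should approach
# mu(A) * mu(B) = 0.25 as n grows.

random.seed(0)
in_half = lambda x: x < 0.5            # indicator of [0, 1/2), used for both A and B
n, samples = 10, 200_000

hits = 0
for _ in range(samples):
    x = random.random()
    y = (2 ** n * x) % 1.0             # T^n applied to x
    if in_half(x) and in_half(y):      # x in B and T^n(x) in A, i.e. x in T^-n A ∩ B
        hits += 1

estimate = hits / samples
print(round(estimate, 3))              # close to 0.25 = mu(A) * mu(B)
```

For this particular map and these sets the limit is in fact reached exactly at n = 1, but the estimator works the same way for any measurable A and B.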
Physical mixing
The mixing of gases or liquids is a complex physical process, governed by a convective diffusion equation that may involve non-Fickian diffusion as in spinodal decomposition. The convective portion of the governing equation contains fluid motion terms that are governed by the Navier–Stokes equations. When fluid properties such as viscosity depend on composition, the governing equations may be coupled. There may also be temperature effects. It is not clear that fluid mixing processes are mixing in the mathematical sense.
Small rigid objects (such as rocks) are sometimes mixed in a rotating drum or tumbler. The 1969 Selective Service draft lottery was carried out by mixing plastic capsules which contained a slip of paper (marked with a day of the year).
See also
Miscibility
Document 3:::
At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm.
For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of the sorption process, the isotherms cannot be determined explicitly by calculation, but must be recorded experimentally for each product.
The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system.
BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption and finally leading to capillary condensation, explaining these materials high moisture capacity at high relative humidity.
Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the
Document 4:::
Homogenization or homogenisation is any of several processes used to make a mixture of two mutually non-soluble liquids the same throughout. This is achieved by turning one of the liquids into a state consisting of extremely small particles distributed uniformly throughout the other liquid. A typical example is the homogenization of milk, wherein the milk fat globules are reduced in size and dispersed uniformly through the rest of the milk.
Definition
Homogenization (from "homogeneous;" Greek, homogenes: homos, same + genos, kind) is the process of converting two immiscible liquids (i.e. liquids that are not soluble, in all proportions, one in another) into an emulsion (Mixture of two or more liquids that are generally immiscible). Sometimes two types of homogenization are distinguished: primary homogenization, when the emulsion is created directly from separate liquids; and secondary homogenization, when the emulsion is created by the reduction in size of droplets in an existing emulsion.
Homogenization is achieved by a mechanical device called a homogenizer.
Application
One of the oldest applications of homogenization is in milk processing. It is normally preceded by "standardization" (the mixing of milk from several different herds or dairies to produce a more consistent raw milk prior to processing). The fat in milk normally separates from the water and collects at the top. Homogenization breaks the fat into smaller sizes so it no longer separates, allowing the sale of non-separating milk at any fat specification.
Methods
Milk homogenization is accomplished by mixing large amounts of harvested milk, then forcing the milk at high pressure through small holes. Milk homogenization is an essential tool of the milk food industry to prevent creating various levels of flavor and fat concentration.
Another application of homogenization is in soft drinks like cola products. The reactant mixture is rendered to intense homogenization, to as much as 35,000 psi, so tha
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
milk going sour
a copper statue turning green
A. Both are caused by cooling.
B. Both are only physical changes.
C. Both are chemical changes.
D. Both are caused by heating.
Answer:
|
sciq-6989
|
multiple_choice
|
What is the inner layer of skin called?
|
[
"the epidermis",
"hypodermis",
"connective layer",
"the dermis"
] |
D
|
Relevant Documents:
Document 0:::
In zoology, the epidermis is an epithelium (sheet of cells) that covers the body of a eumetazoan (animal more complex than a sponge). Eumetazoa have a cavity lined with a similar epithelium, the gastrodermis, which forms a boundary with the epidermis at the mouth.
Sponges have no epithelium, and therefore no epidermis or gastrodermis. The epidermis of a more complex invertebrate is just one layer deep, and may be protected by a non-cellular cuticle. The epidermis of a higher vertebrate has many layers, and the outer layers are reinforced with keratin and then die.
Document 1:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
Document 2:::
Cutis is the combined term for the epidermis and the dermis, the two outer layers of the skin. The subcutis is the layer below the cutis. Sweat pores are contained in the cutis, along with other organs, while hair follicles are contained in the subcutis, along with sweat glands and nerves.
Skin anatomy
Document 3:::
Superficial cervical fascia is a thin layer of subcutaneous connective tissue that lies between the dermis of the skin and the deep cervical fascia. It contains the platysma, cutaneous nerves from the cervical plexus, blood vessels, and lymphatic vessels. It also contains a varying amount of fat, which is its distinguishing characteristic. It is considered by some to be a part of the panniculus adiposus, and not true fascia.
Document 4:::
A laminar organization describes the way certain tissues, such as bone membrane, skin, or brain tissues, are arranged in layers.
Types
Embryo
The earliest forms of laminar organization are shown in the diploblastic and triploblastic formation of the germ layers in the embryo. In the first week of human embryogenesis two layers of cells have formed, an external epiblast layer (the primitive ectoderm), and an internal hypoblast layer (primitive endoderm). This gives the early bilaminar disc. In the third week in the stage of gastrulation epiblast cells invaginate to form endoderm, and a third layer of cells known as mesoderm. Cells that remain in the epiblast become ectoderm. This is the trilaminar disc and the epiblast cells have given rise to the three germ layers.
Brain
In the brain a laminar organization is evident in the arrangement of the three meninges, the membranes that cover the brain and spinal cord. These membranes are the dura mater, arachnoid mater, and pia mater. The dura mater has two layers a periosteal layer near to the bone of the skull, and a meningeal layer next to the other meninges.
The cerebral cortex, the outer neural sheet covering the cerebral hemispheres can be described by its laminar organization, due to the arrangement of cortical neurons into six distinct layers.
Eye
The eye in mammals has an extensive laminar organization. There are three main layers – the outer fibrous tunic, the middle uvea, and the inner retina. These layers have sublayers with the retina having ten ranging from the outer choroid to the inner vitreous humor and including the retinal nerve fiber layer.
Skin
The human skin has a dense laminar organization. The outer epidermis has four or five layers.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the inner layer of skin called?
A. the epidermis
B. hypodermis
C. connective layer
D. the dermis
Answer:
|
|
sciq-10478
|
multiple_choice
|
Under which conditions do many bacteria carry out alcohol fermentation?
|
[
"enzymatic",
"anaerobic",
"photosynthesis",
"melting"
] |
B
|
Relevant Documents:
Document 0:::
In food processing, fermentation is the conversion of carbohydrates to alcohol or organic acids using microorganisms—yeasts or bacteria—under anaerobic (oxygen-free) conditions. Fermentation usually implies that the action of microorganisms is desired. The science of fermentation is known as zymology or zymurgy.
The term "fermentation" sometimes refers specifically to the chemical conversion of sugars into ethanol, producing alcoholic drinks such as wine, beer, and cider. However, similar processes take place in the leavening of bread (CO2 produced by yeast activity), and in the preservation of sour foods with the production of lactic acid, such as in sauerkraut and yogurt.
Other widely consumed fermented foods include vinegar, olives, and cheese. More localised foods prepared by fermentation may also be based on beans, grain, vegetables, fruit, honey, dairy products, and fish.
History and prehistory
Brewing and winemaking
Natural fermentation precedes human history. Since ancient times, humans have exploited the fermentation process. The earliest archaeological evidence of fermentation is 13,000-year-old residues of a beer, with the consistency of gruel, found in a cave near Haifa in Israel. Another early alcoholic drink, made from fruit, rice, and honey, dates from 7000 to 6600 BC, in the Neolithic Chinese village of Jiahu, and winemaking dates from ca. 6000 BC, in Georgia, in the Caucasus area. Seven-thousand-year-old jars containing the remains of wine, now on display at the University of Pennsylvania, were excavated in the Zagros Mountains in Iran. There is strong evidence that people were fermenting alcoholic drinks in Babylon ca. 3000 BC, ancient Egypt ca. 3150 BC, pre-Hispanic Mexico ca. 2000 BC, and Sudan ca. 1500 BC.
Discovery of the role of yeast
The French chemist Louis Pasteur founded zymology, when in 1856 he connected yeast to fermentation.
When studying the fermentation of sugar to alcohol by yeast, Pasteur concluded that the fermentation wa
Document 1:::
Zymology, also known as zymurgy, is an applied science that studies the biochemical process of fermentation and its practical uses. Common topics include the selection of fermenting yeast and bacteria species and their use in brewing, wine making, fermenting milk, and the making of other fermented foods.
Fermentation
Fermentation can be simply defined, in this context, as the conversion of sugar molecules into ethanol and carbon dioxide by yeast.
Fermentation practices have led to the discovery of ample microbial and antimicrobial cultures on fermented foods and products.
History
French chemist Louis Pasteur was the first 'zymologist' when in 1857 he connected yeast to fermentation. Pasteur originally defined fermentation as "respiration without air".
Pasteur performed careful research and concluded:
The German Eduard Buchner, winner of the 1907 Nobel Prize in chemistry, later determined that fermentation was actually caused by a yeast secretion, which he termed 'zymase'.
The research efforts undertaken by the Danish Carlsberg scientists greatly accelerated understanding of yeast and brewing. The Carlsberg scientists are generally acknowledged as having jump-started the entire field of molecular biology.
Products
All alcoholic drinks including beer, cider, kombucha, kvass, mead, perry, tibicos, wine, pulque, hard liquors (brandy, rum, vodka, sake, schnapps), and soured by-products including vinegar and alegar
Yeast leavened breads including sourdough, salt-rising bread, and others
Cheese and some dairy products including kefir and yogurt
Chocolate
Dishes including fermented fish, such as garum, surströmming, and Worcestershire sauce
Some vegetables such as kimchi, some types of pickles (most are not fermented though), and sauerkraut
A wide variety of fermented edibles made from soy beans, including fermented bean paste, nattō, tempeh, and soya sauce
Notes
Document 2:::
In biochemistry, fermentation theory refers to the historical study of models of natural fermentation processes, especially alcoholic and lactic acid fermentation. Notable contributors to the theory include Justus Von Liebig and Louis Pasteur, the latter of whom developed a purely microbial basis for the fermentation process based on his experiments. Pasteur's work on fermentation later led to his development of the germ theory of disease, which put the concept of spontaneous generation to rest. Although the fermentation process had been used extensively throughout history prior to the origin of Pasteur's prevailing theories, the underlying biological and chemical processes were not fully understood. In the contemporary, fermentation is used in the production of various alcoholic beverages, foodstuffs, and medications.
Overview of fermentation
Fermentation is the anaerobic metabolic process that converts sugar into acids, gases, or alcohols in oxygen starved environments. Yeast and many other microbes commonly use fermentation to carry out anaerobic respiration necessary for survival. Even the human body carries out fermentation processes from time to time, such as during long-distance running; lactic acid will build up in muscles over the course of long-term exertion. Within the human body, lactic acid is the by-product of ATP-producing fermentation, which produces energy so the body can continue to exercise in situations where oxygen intake cannot be processed fast enough. Although fermentation yields less ATP than aerobic respiration, it can occur at a much higher rate. Fermentation has been used by humans consciously since around 5000 BCE, evidenced by jars recovered in the Iran Zagros Mountains area containing remnants of microbes similar those present in the wine-making process.
History
Prior to Pasteur's research on fermentation, there existed some preliminary competing notions of it. One scientist who had a substantial degree of influence on the theory o
Document 3:::
Ethanol production
Zymomonas mobilis degrades sugars to pyruvate using the Entner–Doudoroff pathway. The pyruvate is then fermented to prod
Document 4:::
The Society for Industrial Microbiology and Biotechnology (SIMB) is a nonprofit, international association dedicated to the advancement of microbiological sciences, especially as they apply to industrial products, biotechnology, materials, and processes. SIMB promotes the exchange of scientific information through its meetings and publications, and serves as liaison among the specialized fields of microbiology. SIMB was established in 1949 as the Society for Industrial Microbiology (SIM) by Walter Ezekiel, Charles Thom, and Charles L. Porter.
Governance
The SIMB is governed by a Constitution and Bylaws. The membership of SIMB elects a Board of Directors that consists of a President, President-Elect, Past-President, Secretary, Treasurer and four Directors.
Publications
SIMB has two publications, the Journal of Industrial Microbiology and Biotechnology and SIMB News.
Scientific Meetings
SIMB Annual Meeting
Symposium on Biomaterials, Fuels and Chemicals (SBFC)
The first Symposium on Biotechnology for Fuels and Chemicals was held in 1978 and hosted by Oak Ridge National Laboratory (Oak Ridge, TN). It was the first technical meeting focusing exclusively on the biotechnologically-‐mediated conversion of renewable feedstocks, especially lignocellulosic plant biomass, to fuels and chemicals. This annual meeting soon became large enough to be co-‐hosted by the predecessor of the National Renewable Energy Laboratory (Golden, CO) and the Symposium's location alternated yearly between Tennessee and Colorado. In 2008, SIMB began handling the logistics of the meeting and locations were expanded to include other states, with the Symposium being held in alternate years in the eastern or western United States.
Recent Advances in Fermentation Technology (RAFT)
Industrial Microbiology Meets Microbiome (IMMM)
Natural Products
Although there has been a steady decline in natural product discovery efforts in the pharmaceutical industry over the last three decades, natural pro
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Under which conditions do many bacteria carry out alcohol fermentation?
A. enzymatic
B. anaerobic
C. photosynthesis
D. melting
Answer:
|
|
sciq-10796
|
multiple_choice
|
What holds together the small molecules called nucleotides which make up nucleic acids?
|
[
"curvature bonds",
"covalent bonds",
"permanent bonds",
"dissonance bonds"
] |
B
|
Relevant Documents:
Document 0:::
A nucleic acid sequence is a succession of bases within the nucleotides forming alleles within a DNA (using GACT) or RNA (GACU) molecule. This succession is denoted by a series of a set of five different letters that indicate the order of the nucleotides. By convention, sequences are usually presented from the 5' end to the 3' end. For DNA, with its double helix, there are two possible directions for the notated sequence; of these two, the sense strand is used. Because nucleic acids are normally linear (unbranched) polymers, specifying the sequence is equivalent to defining the covalent structure of the entire molecule. For this reason, the nucleic acid sequence is also termed the primary structure.
The sequence represents biological information. Biological deoxyribonucleic acid represents the information which directs the functions of an organism.
Nucleic acids also have a secondary structure and tertiary structure. Primary structure is sometimes mistakenly referred to as "primary sequence". However there is no parallel concept of secondary or tertiary sequence.
Nucleotides
Nucleic acids consist of a chain of linked units called nucleotides. Each nucleotide consists of three subunits: a phosphate group and a sugar (ribose in the case of RNA, deoxyribose in DNA) make up the backbone of the nucleic acid strand, and attached to the sugar is one of a set of nucleobases. The nucleobases are important in base pairing of strands to form higher-level secondary and tertiary structures such as the famed double helix.
The possible letters are A, C, G, and T, representing the four nucleotide bases of a DNA strand – adenine, cytosine, guanine, thymine – covalently linked to a phosphodiester backbone. In the typical case, the sequences are printed abutting one another without gaps, as in the sequence AAAGTCTGAC, read left to right in the 5' to 3' direction. With regards to transcription, a sequence is on the coding strand if it has the same order as the transcribed RNA.
Document 1:::
Experimental approaches of determining the structure of nucleic acids, such as RNA and DNA, can be largely classified into biophysical and biochemical methods. Biophysical methods use the fundamental physical properties of molecules for structure determination, including X-ray crystallography, NMR and cryo-EM. Biochemical methods exploit the chemical properties of nucleic acids using specific reagents and conditions to assay the structure of nucleic acids. Such methods may involve chemical probing with specific reagents, or rely on native or analogue chemistry. Different experimental approaches have unique merits and are suitable for different experimental purposes.
Biophysical methods
X-ray crystallography
X-ray crystallography is not common for nucleic acids alone, since neither DNA nor RNA readily form crystals. This is due to the greater degree of intrinsic disorder and dynamism in nucleic acid structures and the negatively charged (deoxy)ribose-phosphate backbones, which repel each other in close proximity. Therefore, crystallized nucleic acids tend to be complexed with a protein of interest to provide structural order and neutralize the negative charge.
Nuclear magnetic resonance spectroscopy (NMR)
Nucleic acid NMR is the use of NMR spectroscopy to obtain information about the structure and dynamics of nucleic acid molecules, such as DNA or RNA. As of 2003, nearly half of all known RNA structures had been determined by NMR spectroscopy.
Nucleic acid NMR uses similar techniques as protein NMR, but has several differences. Nucleic acids have a smaller percentage of hydrogen atoms, which are the atoms usually observed in NMR, and because nucleic acid double helices are stiff and roughly linear, they do not fold back on themselves to give "long-range" correlations. The types of NMR usually done with nucleic acids are 1H or proton NMR, 13C NMR, 15N NMR, and 31P NMR. Two-dimensional NMR methods are almost always used, such as correlation spectroscopy (COSY
Document 2:::
Biomolecular structure is the intricate folded, three-dimensional shape that is formed by a molecule of protein, DNA, or RNA, and that is important to its function. The structure of these molecules may be considered at any of several length scales ranging from the level of individual atoms to the relationships among entire protein subunits. This useful distinction among scales is often expressed as a decomposition of molecular structure into four levels: primary, secondary, tertiary, and quaternary. The scaffold for this multiscale organization of the molecule arises at the secondary level, where the fundamental structural elements are the molecule's various hydrogen bonds. This leads to several recognizable domains of protein structure and nucleic acid structure, including such secondary-structure features as alpha helixes and beta sheets for proteins, and hairpin loops, bulges, and internal loops for nucleic acids.
The terms primary, secondary, tertiary, and quaternary structure were introduced by Kaj Ulrik Linderstrøm-Lang in his 1951 Lane Medical Lectures at Stanford University.
Primary structure
The primary structure of a biopolymer is the exact specification of its atomic composition and the chemical bonds connecting those atoms (including stereochemistry). For a typical unbranched, un-crosslinked biopolymer (such as a molecule of a typical intracellular protein, or of DNA or RNA), the primary structure is equivalent to specifying the sequence of its monomeric subunits, such as amino acids or nucleotides.
The primary structure of a protein is reported starting from the amino N-terminus to the carboxyl C-terminus, while the primary structure of DNA or RNA molecule is known as the nucleic acid sequence reported from the 5' end to the 3' end.
The nucleic acid sequence refers to the exact sequence of nucleotides that comprise the whole molecule. Often, the primary structure encodes sequence motifs that are of functional importance. Some examples of such motif
Document 3:::
Hachimoji DNA (from Japanese hachimoji, "eight letters") is a synthetic nucleic acid analog that uses four synthetic nucleotides in addition to the four present in the natural nucleic acids, DNA and RNA. This leads to four allowed base pairs: two unnatural base pairs formed by the synthetic nucleobases in addition to the two normal pairs. Hachimoji bases have been demonstrated in both DNA and RNA analogs, using deoxyribose and ribose respectively as the backbone sugar.
Benefits of such a nucleic acid system may include an enhanced ability to store data, as well as insights into what may be possible in the search for extraterrestrial life.
The hachimoji DNA system produced one type of catalytic RNA (ribozyme or aptamer) in vitro.
Description
Natural DNA is a molecule carrying the genetic instructions used in the growth, development, functioning, and reproduction of all known living organisms and many viruses. DNA and ribonucleic acid (RNA) are nucleic acids; alongside proteins, lipids and complex carbohydrates (polysaccharides), nucleic acids are one of the four major types of macromolecules that are essential for all known forms of life. DNA is a polynucleotide as it is composed of simpler monomeric units called nucleotides; when double-stranded, the two chains coil around each other to form a double helix.
In natural DNA, each nucleotide is composed of one of four nucleobases (cytosine [C], guanine [G], adenine [A] or thymine [T]), a sugar called deoxyribose, and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. The nitrogenous bases of the two separate polynucleotide strands are bound to each other with hydrogen bonds, according to base pairing rules (A with T and C with G), to make double-stranded DNA.
Hachimoji DNA is similar to natural DNA but differs in the number, and type, of nucleobases. Unn
Document 4:::
In molecular biology, a polynucleotide () is a biopolymer composed of nucleotide monomers that are covalently bonded in a chain. DNA (deoxyribonucleic acid) and RNA (ribonucleic acid) are examples of polynucleotides with distinct biological functions. DNA consists of two chains of polynucleotides, with each chain in the form of a helix (like a spiral staircase).
Sequence
Although DNA and RNA do not generally occur in the same polynucleotide, the four species of nucleotides may occur in any order in the chain. The sequence of DNA or RNA species for a given polynucleotide is the main factor determining its function in a living organism or a scientific experiment.
Polynucleotides in organisms
Polynucleotides occur naturally in all living organisms. The genome of an organism consists of complementary pairs of enormously long polynucleotides wound around each other in the form of a double helix. Polynucleotides have a variety of other roles in organisms.
Polynucleotides in scientific experiments
Polynucleotides are used in biochemical experiments such as polymerase chain reaction (PCR) or DNA sequencing. Polynucleotides are made artificially from oligonucleotides, smaller nucleotide chains with generally fewer than 30 subunits. A polymerase enzyme is used to extend the chain by adding nucleotides according to a pattern specified by the scientist.
Prebiotic condensation of nucleobases with ribose
In order to understand how life arose, knowledge is required of the chemical pathways that permit formation of the key building blocks of life under plausible prebiotic conditions. According to the RNA world hypothesis free-floating ribonucleotides were present in the primitive soup. These were the fundamental molecules that combined in series to form RNA. Molecules as complex as RNA must have arisen from small molecules whose reactivity was governed by physico-chemical processes. RNA is composed of purine and pyrimidine nucleotides, both of which are necessary for re
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What holds together the small molecules called nucleotides which make up nucleic acids?
A. curvature bonds
B. covalent bonds
C. permanent bonds
D. dissonance bonds
Answer:
|
|
sciq-9857
|
multiple_choice
|
Twin studies have been instrumental in demonstrating what type of component in autism?
|
[
"internal",
"natural",
"environmental",
"bacterial"
] |
C
|
Relevant Documents:
Document 0:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. More specific questions related, respectively, to ecological concepts (such as population studies and general ecology) on the E test and to molecular concepts (such as DNA structure, translation, and biochemistry) on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 1:::
Curriculum-based measurement, or CBM, is also referred to as a general outcome measure (GOM) of a student's performance in either basic skills or content knowledge.
Early history
CBM began in the mid-1970s with research headed by Stan Deno at the University of Minnesota. Over the course of 10 years, this work led to the establishment of measurement systems in reading, writing, and spelling that were: (a) easy to construct, (b) brief in administration and scoring, (c) had technical adequacy (reliability and various types of validity evidence for use in making educational decisions), and (d) provided alternate forms to allow time series data to be collected on student progress. This focus in the three language arts areas eventually was expanded to include mathematics, though the technical research in this area continues to lag that published in the language arts areas. An even later development was the application of CBM to middle-secondary areas: Espin and colleagues at the University of Minnesota developed a line of research addressing vocabulary and comprehension (with the maze) and by Tindal and colleagues at the University of Oregon developed a line of research on concept-based teaching and learning.
Increasing importance
Early research on the CBM quickly moved from monitoring student progress to its use in screening, normative decision-making, and finally benchmarking. Indeed, with the implementation of the No Child Left Behind Act in 2001, and its focus on large-scale testing and accountability, CBM has become increasingly important as a form of standardized measurement that is highly related to and relevant for understanding student's progress toward and achievement of state standards.
Key feature
Probably the key feature of CBM is its accessibility for classroom application and implementation. It was designed to provide an experimental analysis of the effects from interventions, which includes both instruction and curriculum. This is one of the most imp
Document 2:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered the exam three times per year: once in April, once in October, and once in November. Some graduate programs in the United States recommended taking this exam, while others required its score as part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. The biochemistry subject test contained 180 questions.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 3:::
Researchers have suggested a link between handedness and ability with mathematics. This link has been proposed by Geschwind, Galaburda, Annett, and Kilshaw. The suggested link is that a brain without extreme bias towards locating language in the left hemisphere would have an advantage in mathematical ability.
Body of research
Douglas study
A 1967 study by Douglas found no evidence to correlate mathematical ability with left-handedness or ambidexterity. The study compared the people who came in the top 15% of a mathematics examination with those of moderate mathematical ability, and found that the two groups' handedness preferences were similar. However, it did find that those who came lowest in the test had mixed hand preferences. A study in 1979 by Peterson found a trend towards low rates of left-handedness in science students.
Jones and Bell study
A 1980 study by Jones and Bell also obtained negative results. This study compared the handedness of a group of engineering students with strong mathematics skills against the handedness of a group of psychology students (of varying mathematics skills). In both cases, the distribution of handedness resembled that of the general population.
Annett and Kilshaw study
Annett and Kilshaw themselves support their hypothesis with several examples, including a handedness questionnaire given to undergraduates. Annett observes that studies that depend from voluntary returns of a handedness questionnaire are going to be biased towards left-handedness, and notes that this was a weakness of the study. However, the results were that there were significantly more left-handers amongst male mathematics undergraduates than male non-mathematics undergraduates (21% versus 11%) and significantly more non-right-handers (44% versus 24%), and that there was a similar but smaller left-handedness difference for female undergraduates (11% versus 8%). Annett reports the results of this study as being consistent with the hypothesis, fo
Document 4:::
The empathising–systemising (E–S) theory is a theory on the psychological basis of autism and male–female neurological differences originally put forward by English clinical psychologist Simon Baron-Cohen. It classifies individuals based on abilities in empathic thinking (E) and systematic thinking (S). It measures skills using an Empathy Quotient (EQ) and Systemising Quotient (SQ) and attempts to explain the social and communication symptoms in autism spectrum disorders as deficits and delays in empathy combined with intact or superior systemising.
According to Baron-Cohen, the E–S theory has been tested using the Empathy Quotient (EQ) and Systemising Quotient (SQ), developed by him and colleagues, and generates five different 'brain types' depending on the presence or absence of discrepancies between their scores on E or S. E–S profiles show that the profile E>S is more common in females than in males, and the profile S>E is more common in males than in females. Baron-Cohen and associates assert that E–S theory is a better predictor than gender of who chooses STEM subjects (Science, Technology, Engineering and Mathematics). The E–S theory has been extended into the extreme male brain (EMB) theory of autism and Asperger syndrome, which are associated in the E–S theory with below-average empathy and average or above-average systemising.
Baron-Cohen's studies and theory have been questioned on multiple grounds. The overrepresentation of engineers could depend on a socioeconomic status rather than E-S differences.
History
E–S theory was developed by psychologist Simon Baron-Cohen in 2002, as a reconceptualization of cognitive sex differences in the general population. This was done in an effort to understand why the cognitive difficulties in autism appeared to lie in domains in which he says on average females outperformed males, along with why cognitive strengths in autism appeared to lie in domains in which on average males outperformed females. In the first cha
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Twin studies have been instrumental in demonstrating what type of component in autism?
A. internal
B. natural
C. environmental
D. bacterial
Answer:
|
|
sciq-3783
|
multiple_choice
|
The primary function of insulin is to facilitate the uptake of what into body cells?
|
[
"water",
"sucralose",
"glucose",
"chloride"
] |
C
|
Relevant Documents:
Document 0:::
The insulin transduction pathway is a biochemical pathway by which insulin increases the uptake of glucose into fat and muscle cells and reduces the synthesis of glucose in the liver and hence is involved in maintaining glucose homeostasis. This pathway is also influenced by fed versus fasting states, stress levels, and a variety of other hormones.
When carbohydrates are consumed, digested, and absorbed the pancreas senses the subsequent rise in blood glucose concentration and releases insulin to promote uptake of glucose from the bloodstream. When insulin binds to the insulin receptor, it leads to a cascade of cellular processes that promote the usage or, in some cases, the storage of glucose in the cell. The effects of insulin vary depending on the tissue involved, e.g., insulin is most important in the uptake of glucose by muscle and adipose tissue.
This insulin signal transduction pathway is composed of trigger mechanisms (e.g., autophosphorylation mechanisms) that serve as signals throughout the cell. There is also a counter mechanism in the body to stop the secretion of insulin beyond a certain limit. Namely, those counter-regulatory mechanisms are glucagon and epinephrine. The process of the regulation of blood glucose (also known as glucose homeostasis) also exhibits oscillatory behavior.
On a pathological basis, this topic is crucial to understanding certain disorders in the body such as diabetes, hyperglycemia and hypoglycemia.
Transduction pathway
The functioning of a signal transduction pathway is based on extra-cellular signaling that in turn creates a response that causes other subsequent responses, hence creating a chain reaction, or cascade. During the course of signaling, the cell uses each response for accomplishing some kind of a purpose along the way. Insulin secretion mechanism is a common example of signal transduction pathway mechanism.
Insulin is produced by the pancreas in a region called Islets of Langerhans. In the islets of Langerha
Document 1:::
The following outline is provided as an overview of and topical guide to biochemistry:
Biochemistry – study of chemical processes in living organisms, including living matter. Biochemistry governs all living organisms and living processes.
Applications of biochemistry
Testing
Ames test – salmonella bacteria are exposed to a chemical under question (a food additive, for example), and changes in the way the bacteria grow are measured. This test is useful for screening chemicals to see if they mutate the structure of DNA and by extension identifying their potential to cause cancer in humans.
Pregnancy test – one uses a urine sample and the other a blood sample. Both detect the presence of the hormone human chorionic gonadotropin (hCG). This hormone is produced by the placenta shortly after implantation of the embryo into the uterine walls and accumulates.
Breast cancer screening – identification of risk by testing for mutations in two genes—Breast Cancer-1 gene (BRCA1) and the Breast Cancer-2 gene (BRCA2)—allow a woman to schedule increased screening tests at a more frequent rate than the general population.
Prenatal genetic testing – testing the fetus for potential genetic defects, to detect chromosomal abnormalities such as Down syndrome or birth defects such as spina bifida.
PKU test – Phenylketonuria (PKU) is a metabolic disorder in which the individual is missing an enzyme called phenylalanine hydroxylase. Absence of this enzyme allows the buildup of phenylalanine, which can lead to mental retardation.
Genetic engineering – taking a gene from one organism and placing it into another. Biochemists inserted the gene for human insulin into bacteria. The bacteria, through the process of translation, create human insulin.
Cloning – Dolly the sheep was the first mammal ever cloned from adult animal cells. The cloned sheep was, of course, genetically identical to the original adult sheep. This clone was created by taking cells from the udder of a six-year-old
Document 2:::
The insulin concentration in blood increases after meals and gradually returns to basal levels during the next 1–2 hours. However, the basal insulin level is not stable. It oscillates with a regular period of 3-6 min. After a meal the amplitude of these oscillations increases but the periodicity remains constant. The oscillations are believed to be important for insulin sensitivity by preventing downregulation of insulin receptors in target cells. Such downregulation underlies insulin resistance, which is common in type 2 diabetes. It would therefore be advantageous to administer insulin to diabetic patients in a manner mimicking the natural oscillations. The insulin oscillations are generated by pulsatile release of the hormone from the pancreas. Insulin originates from beta cells located in the islets of Langerhans. Since each islet contains up to 2000 beta cells and there are one million islets in the pancreas it is apparent that pulsatile secretion requires sophisticated synchronization both within and among the islets of Langerhans.
Mechanism
Pulsatile insulin secretion from individual beta cells is driven by oscillation of the calcium concentration in the cells. In beta cells lacking contact, the periodicity of these oscillations is rather variable (2-10 min). However, within an islet of Langerhans the oscillations become synchronized by electrical coupling between closely located beta cells that are connected by gap junctions, and the periodicity is more uniform (3-6 min).
Pulsatile insulin release from the entire pancreas requires that secretion is synchronized between 1 million islets within a 25 cm long organ. Much like the cardiac pacemaker, the pancreas is connected to cranial nerve 10, and others, but the oscillations are accomplished by intrapancreatic neurons and do not require neural input from the brain. It is not entirely clear which neural factors account for this synchronization but ATP as well as the gasses NO and CO may be involved. The effe
Document 3:::
Neutral Protamine Hagedorn (NPH) insulin, also known as isophane insulin, is an intermediate-acting insulin given to help control blood sugar levels in people with diabetes. It is used by injection under the skin once to twice a day. Onset of effects is typically in 90 minutes and they last for 24 hours. Versions are available that come premixed with a short-acting insulin, such as regular insulin.
The common side effect is low blood sugar. Other side effects may include pain or skin changes at the sites of injection, low blood potassium, and allergic reactions. Use during pregnancy is relatively safe for the fetus. NPH insulin is made by mixing regular insulin and protamine in exact proportions with zinc and phenol such that a neutral-pH is maintained and crystals form. There are human and pig insulin based versions.
Protamine insulin was first created in 1936 and NPH insulin in 1946. It is on the World Health Organization's List of Essential Medicines. NPH is an abbreviation for "neutral protamine Hagedorn". In 2020, insulin isophane was the 221st most commonly prescribed medication in the United States, with more than 2 million prescriptions. In 2020, the combination of human insulin with insulin isophane was the 246th most commonly prescribed medication in the United States, with more than 1 million prescriptions.
Medical uses
NPH insulin is cloudy and has an onset of 1–3 hours. Its peak is 6–8 hours and its duration is up to 24 hours.
It has an intermediate duration of action, meaning longer than that of regular and rapid-acting insulin, and shorter than long acting insulins (ultralente, glargine or detemir). A recent Cochrane systematic review compared the effects of NPH insulin to other insulin analogues (insulin detemir, insulin glargine, insulin degludec) in both children and adults with Type 1 diabetes. Insulin detemir appeared provide a lower risk of severe hyperglycemia compared to NPH insulin, however this finding was inconsistent across included stu
Document 4:::
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The primary function of insulin is to facilitate the uptake of what into body cells?
A. water
B. sucralose
C. glucose
D. chloride
Answer:
|
|
sciq-4473
|
multiple_choice
|
The waves on the strings of musical instruments are transverse—so are electromagnetic waves, such as visible light. Sound waves in air and water are this?
|
[
"horizontal",
"symmetrical",
"longitudinal",
"hydroelectric"
] |
C
|
Relevant Documents:
Document 0:::
In physics, a mechanical wave is a wave that is an oscillation of matter, and therefore transfers energy through a medium. While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial equilibrium position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves. Some of the most common examples of mechanical waves are water waves, sound waves, and seismic waves.
Like all waves, mechanical waves transport energy. This energy propagates in the same direction as the wave. A wave requires an initial energy input; once this initial energy is added, the wave travels through the medium until all its energy is transferred. In contrast, electromagnetic waves require no medium, but can still travel through one.
Transverse wave
A transverse wave is the form of a wave in which particles of medium vibrate about their mean position perpendicular to the direction of the motion of the wave.
To see an example, move one end of a Slinky (whose other end is fixed) from side to side, as opposed to to-and-fro. Light also has properties of a transverse wave, although it is an electromagnetic wave.
Longitudinal wave
Longitudinal waves cause the medium to vibrate parallel to the direction of the wave. It consists of multiple compressions and rarefactions. The rarefaction is the farthest distance apart in the longitudinal wave and the compression is the closest distance together. The speed of the longitudinal wave is increased in higher index of refraction, due to the closer proximity of the atoms in the medium that is being compressed. Sound is a longitudinal wave.
Surface waves
This type of wave travels along the surface or interface between two media. An example of a surface wave would be waves in a pool, or in an ocean
Document 1:::
In physics, a transverse wave is a wave that oscillates perpendicularly to the direction of the wave's advance. In contrast, a longitudinal wave travels in the direction of its oscillations. All waves move energy from place to place without transporting the matter in the transmission medium if there is one. Electromagnetic waves are transverse without requiring a medium. The designation “transverse” indicates the direction of the wave is perpendicular to the displacement of the particles of the medium through which it passes, or in the case of EM waves, the oscillation is perpendicular to the direction of the wave.
A simple example is given by the waves that can be created on a horizontal length of string by anchoring one end and moving the other end up and down. Another example is the waves that are created on the membrane of a drum. The waves propagate in directions that are parallel to the membrane plane, but each point in the membrane itself gets displaced up and down, perpendicular to that plane. Light is another example of a transverse wave, where the oscillations are the electric and magnetic fields, which point at right angles to the ideal light rays that describe the direction of propagation.
Transverse waves commonly occur in elastic solids due to the shear stress generated; the oscillations in this case are the displacement of the solid particles away from their relaxed position, in directions perpendicular to the propagation of the wave. These displacements correspond to a local shear deformation of the material. Hence a transverse wave of this nature is called a shear wave. Since fluids cannot resist shear forces while at rest, propagation of transverse waves inside the bulk of fluids is not possible. In seismology, shear waves are also called secondary waves or S-waves.
Transverse waves are contrasted with longitudinal waves, where the oscillations occur in the direction of the wave. The standard example of a longitudinal wave is a sound wave or "
Document 2:::
This is a list of wave topics.
0–9
21 cm line
A
Abbe prism
Absorption spectroscopy
Absorption spectrum
Absorption wavemeter
Acoustic wave
Acoustic wave equation
Acoustics
Acousto-optic effect
Acousto-optic modulator
Acousto-optics
Airy disc
Airy wave theory
Alfvén wave
Alpha waves
Amphidromic point
Amplitude
Amplitude modulation
Animal echolocation
Antarctic Circumpolar Wave
Antiphase
Aquamarine Power
Arrayed waveguide grating
Artificial wave
Atmospheric diffraction
Atmospheric wave
Atmospheric waveguide
Atom laser
Atomic clock
Atomic mirror
Audience wave
Autowave
Averaged Lagrangian
B
Babinet's principle
Backward wave oscillator
Bandwidth-limited pulse
beat
Berry phase
Bessel beam
Beta wave
Black hole
Blazar
Bloch's theorem
Blueshift
Boussinesq approximation (water waves)
Bow wave
Bragg diffraction
Bragg's law
Breaking wave
Bremsstrahlung, Electromagnetic radiation
Brillouin scattering
Bullet bow shockwave
Burgers' equation
Business cycle
C
Capillary wave
Carrier wave
Cherenkov radiation
Chirp
Ernst Chladni
Circular polarization
Clapotis
Closed waveguide
Cnoidal wave
Coherence (physics)
Coherence length
Coherence time
Cold wave
Collimated light
Collimator
Compton effect
Comparison of analog and digital recording
Computation of radiowave attenuation in the atmosphere
Continuous phase modulation
Continuous wave
Convective heat transfer
Coriolis frequency
Coronal mass ejection
Cosmic microwave background radiation
Coulomb wave function
Cutoff frequency
Cutoff wavelength
Cymatics
D
Damped wave
Decollimation
Delta wave
Dielectric waveguide
Diffraction
Direction finding
Dispersion (optics)
Dispersion (water waves)
Dispersion relation
Dominant wavelength
Doppler effect
Doppler radar
Douglas Sea Scale
Draupner wave
Droplet-shaped wave
Duhamel's principle
E
E-skip
Earthquake
Echo (phenomenon)
Echo sounding
Echolocation (animal)
Echolocation (human)
Eddy (fluid dynamics)
Edge wave
Eikonal equation
Ekman layer
Ekman spiral
Ekman transport
El Niño–Southern Oscillation
El
Document 3:::
A waveguide is a structure that guides waves by restricting the transmission of energy to one direction. Common types of waveguides include acoustic waveguides which direct sound, optical waveguides which direct light, and radio-frequency waveguides which direct electromagnetic waves other than light like radio waves.
Without the physical constraint of a waveguide, waves would expand into three-dimensional space and their intensities would decrease according to the inverse square law.
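The inverse square law mentioned above can be sketched numerically. A minimal example (function and variable names are illustrative, not from the source): power P radiating from an unguided point source spreads over a sphere of area 4πr², so intensity falls as 1/r².

```python
import math

def free_space_intensity(power_w: float, r_m: float) -> float:
    """Intensity (W/m^2) at distance r from an unguided point source:
    the same power is spread over a sphere of area 4*pi*r^2."""
    return power_w / (4 * math.pi * r_m ** 2)

# Doubling the distance quarters the intensity:
i1 = free_space_intensity(1.0, 1.0)
i2 = free_space_intensity(1.0, 2.0)
print(i1 / i2)  # 4.0
```

A waveguide avoids exactly this 1/r² loss by confining the energy to one direction of travel.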
There are different types of waveguides for different types of waves. The original and most common meaning is a hollow conductive metal pipe used to carry high frequency radio waves, particularly microwaves. Dielectric waveguides are used at higher radio frequencies, and transparent dielectric waveguides and optical fibers serve as waveguides for light. In acoustics, air ducts and horns are used as waveguides for sound in musical instruments and loudspeakers, and specially-shaped metal rods conduct ultrasonic waves in ultrasonic machining.
The geometry of a waveguide reflects its function; in addition to more common types that channel the wave in one dimension, there are two-dimensional slab waveguides which confine waves to two dimensions. The frequency of the transmitted wave also dictates the size of a waveguide: each waveguide has a cutoff wavelength determined by its size and will not conduct waves of greater wavelength; an optical fiber that guides light will not transmit microwaves which have a much larger wavelength. Some naturally occurring structures can also act as waveguides. The SOFAR channel layer in the ocean can guide the sound of whale song across enormous distances.
Any shape of cross section of waveguide can support EM waves. Irregular shapes are difficult to analyse. Commonly used waveguides are rectangular and circular in shape.
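The cutoff behaviour described above can be made concrete for the common rectangular case. A minimal sketch, assuming the textbook TE10 cutoff rule λc = 2a (a is the broad-wall width) and the standard WR-90 dimension; neither value is stated in the source text:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def te10_cutoff_hz(a_m: float) -> float:
    """Cutoff frequency of the dominant TE10 mode of a rectangular
    waveguide with broad-wall width a (metres): f_c = c / (2a)."""
    return C / (2.0 * a_m)

# WR-90, an X-band standard with broad wall a = 22.86 mm:
fc = te10_cutoff_hz(0.02286)
print(f"{fc / 1e9:.2f} GHz")  # ~6.56 GHz; longer wavelengths are not conducted
```

This is the sense in which "the frequency of the transmitted wave dictates the size of a waveguide": halving the guide's width doubles its cutoff frequency.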
Uses
The uses of waveguides for transmitting signals were known even before the term was coined. The phenomenon of sound waves g
Document 4:::
Structural acoustics is the study of the mechanical waves in structures and how they interact with and radiate into adjacent media. The field of structural acoustics is often referred to as vibroacoustics in Europe and Asia. People that work in the field of structural acoustics are known as structural acousticians. The field of structural acoustics can be closely related to a number of other fields of acoustics including noise, transduction, underwater acoustics, and physical acoustics.
Vibrations in structures
Compressional and shear waves (isotropic, homogeneous material)
Compressional waves (often referred to as longitudinal waves) expand and contract in the same direction (or opposite) as the wave motion. The wave equation dictates the motion of the wave in the x direction:

∂²w/∂t² = c_L² ∂²w/∂x²

where w is the displacement and c_L is the longitudinal wave speed. This has the same form as the acoustic wave equation in one dimension. c_L is determined by properties (the bulk modulus B and density ρ) of the structure according to

c_L = √(B/ρ)

When two dimensions of the structure are small with respect to wavelength (commonly called a beam), the wave speed is dictated by Young's modulus E instead of the bulk modulus B, and such waves are consequently slower than in infinite media.
Shear waves occur due to the shear stiffness and follow a similar equation, but with the displacement occurring in the transverse direction, perpendicular to the wave motion:

∂²w/∂t² = c_s² ∂²w/∂x²

The shear wave speed c_s = √(G/ρ) is governed by the shear modulus G, which is less than E and B, making shear waves slower than longitudinal waves.
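The speed relations in this passage (longitudinal speed set by Young's modulus in a beam, shear speed set by the shear modulus, each divided by density under a square root) can be checked numerically. A minimal sketch using nominal material constants for steel — the specific E, G, and ρ values are illustrative, not from the source:

```python
import math

# Nominal properties of steel (illustrative values)
E = 200e9    # Young's modulus, Pa
G = 77e9     # shear modulus, Pa
RHO = 7850.0 # density, kg/m^3

c_bar = math.sqrt(E / RHO)    # longitudinal wave speed in a thin rod (beam)
c_shear = math.sqrt(G / RHO)  # shear wave speed

print(f"bar: {c_bar:.0f} m/s, shear: {c_shear:.0f} m/s")
assert c_shear < c_bar  # shear waves are slower, as the text states
```

For steel this gives roughly 5 km/s for the bar wave and about 3.1 km/s for the shear wave, consistent with the ordering the text describes.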
Bending waves in beams and plates
Most sound radiation is caused by bending (or flexural) waves, that deform the structure transversely as they propagate. Bending waves are more complicated than compressional or shear waves and depend on material properties as well as geometric properties. They are also dispersive since different frequencies travel at different speeds.
Modeling vibrations
Finite element analysis can be used to predict the vibrat
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The waves on the strings of musical instruments are transverse—so are electromagnetic waves, such as visible light. Sound waves in air and water are this?
A. horizontal
B. symmetrical
C. longitudinal
D. hydroelectric
Answer:
|
|
sciq-5370
|
multiple_choice
|
What disease refers to the dangerous buildup of fatty materials in blood vessels?
|
[
"arthritis",
"atherosclerosis",
"fibrosis",
"gout"
] |
B
|
Relevant Documents:
Document 0:::
Atherosclerosis is a pattern of the disease arteriosclerosis, characterized by development of abnormalities called lesions in walls of arteries. These lesions may lead to narrowing of the arteries due to buildup of atheromatous plaques. At onset there are usually no symptoms, but if they develop, symptoms generally begin around middle age. In severe cases, it can result in coronary artery disease, stroke, peripheral artery disease, or kidney disorders, depending on where in the body the affected arteries are located.
The exact cause of atherosclerosis is unknown and is proposed to be multifactorial. Risk factors include abnormal cholesterol levels, elevated levels of inflammatory biomarkers, high blood pressure, diabetes, smoking (both active and passive smoking), obesity, genetic factors, family history, lifestyle habits, and an unhealthy diet. Plaque is made up of fat, cholesterol, calcium, and other substances found in the blood. The narrowing of arteries limits the flow of oxygen-rich blood to parts of the body. Diagnosis is based upon a physical exam, electrocardiogram, and exercise stress test, among others.
Prevention is generally by eating a healthy diet, exercising, not smoking, and maintaining a normal weight. Treatment of established disease may include medications to lower cholesterol such as statins, blood pressure medication, or medications that decrease clotting, such as aspirin. A number of procedures may also be carried out such as percutaneous coronary intervention, coronary artery bypass graft, or carotid endarterectomy.
Atherosclerosis generally starts when a person is young and worsens with age. Almost all people are affected to some degree by the age of 65. It is the number one cause of death and disability in developed countries. Though it was first described in 1575, there is evidence that the condition occurred in people more than 5,000 years ago.
Signs and symptoms
Atherosclerosis is asymptomatic for decades because
Document 1:::
Focal fatty liver (FFL) is localised or patchy process of lipid accumulation in the liver. It is likely to have different pathogenesis than non-alcoholic steatohepatitis which is a diffuse process. FFL may result from altered venous flow to liver, tissue hypoxia and malabsorption of lipoproteins. The condition has been increasingly recognised as sensitivity of abdominal imaging studies continues to improve. A fine needle biopsy is often performed to differentiate it from malignancy.
Document 2:::
This is a list of pathology mnemonics, categorized and alphabetized. For mnemonics in other medical specialities, see this list of medical mnemonics.
Acute intermittent porphyria: signs and symptoms
5 Ps:
Pain in the abdomen
Polyneuropathy
Psychological abnormalities
Pink urine
Precipitated by drugs (including barbiturates, oral contraceptives, and sulfa drugs)
Acute ischemia: signs [especially limbs]
6 P's:
Pain
Pallor
Pulselessness
Paralysis
Paraesthesia
Perishingly cold
Anemia (normocytic): causes
ABCD:
Acute blood loss
Bone marrow failure
Chronic disease
Destruction (hemolysis)
Anemia causes (simplified)
ANEMIA:
Anemia of chronic disease
No folate or B12
Ethanol
Marrow failure & hemoglobinopathies
Iron deficient
Acute & chronic blood loss
Atherosclerosis risk factors
"You're a SAD BET with these risk factors":
Sex: male
Age: middle-aged, elderly
Diabetes mellitus
BP high: hypertension
Elevated cholesterol
Tobacco
Carcinoid syndrome: components
CARCinoid:
Cutaneous flushing
Asthmatic wheezing
Right sided valvular heart lesions
Cramping and diarrhea
Cushing syndrome
CUSHING:
Central obesity/ Cervical fat pads/ Collagen fiber weakness/ Comedones (acne)
Urinary free cortisol and glucose increase
Striae/ Suppressed immunity
Hypercortisolism/ Hypertension/ Hyperglycemia/ Hirsutism
Iatrogenic (Increased administration of corticosteroids)
Noniatrogenic (Neoplasms)
Glucose intolerance/Growth retardation
Diabetic ketoacidosis: I vs. II
KetONE bodies are seen in type ONE diabetes.
Gallstones: risk factors
5 F's:
Fat
Female
Fair (gallstones more common in Caucasians)
Fertile (premenopausal- increased estrogen is thought to increase cholesterol levels in bile and decrease gallbladder contractions)
Forty or above (age)
Hepatomegaly: 3 common causes, 3 rarer causes
Common are 3 C's:
Cirrhosis
Carcinoma
Cardiac failure
Rarer are 3 C's:
Cholestasis
Cysts
Cellular infiltration
Hyperkalemia (signs and symptoms)
MURDER
Mus
Document 3:::
Lipidology is the scientific study of lipids. Lipids are a group of biological macromolecules that have a multitude of functions in the body. Clinical studies on lipid metabolism in the body have led to developments in therapeutic lipidology for disorders such as cardiovascular disease.
History
Compared to other biomedical fields, lipidology was long-neglected as the handling of oils, smears, and greases was unappealing to scientists and lipid separation was difficult. It was not until 2002 that lipidomics, the study of lipid networks and their interaction with other molecules, appeared in the scientific literature. Attention to the field was bolstered by the introduction of chromatography, spectrometry, and various forms of spectroscopy to the field, allowing lipids to be isolated and analyzed. The field was further popularized following the cytologic application of the electron microscope, which led scientists to find that many metabolic pathways take place within, along, and through the cell membrane - the properties of which are strongly influenced by lipid composition.
Clinical lipidology
The Framingham Heart Study and other epidemiological studies have found a correlation between lipoproteins and cardiovascular disease (CVD). Lipoproteins are generally a major target of study in lipidology since lipids are transported throughout the body in the form of lipoproteins.
A class of lipids known as phospholipids helps make up lipoproteins; one type of lipoprotein is high-density lipoprotein (HDL). A high concentration of high-density lipoprotein cholesterol (HDL-C) has a vasoprotective effect on the body, a finding that correlates with enhanced cardiovascular health. There is also a correlation between diseases such as chronic kidney disease, coronary artery disease, or diabetes mellitus and a reduced vasoprotective effect of HDL.
Another factor of CVD that is often overlooked involves the
Document 4:::
An unsaturated fat is a fat or fatty acid in which there is at least one double bond within the fatty acid chain. A fatty acid chain is monounsaturated if it contains one double bond, and polyunsaturated if it contains more than one double bond.
A saturated fat has no carbon-to-carbon double bonds, so it has the maximum possible number of hydrogens bonded to the carbons and is "saturated" with hydrogen atoms. To form carbon-to-carbon double bonds, hydrogen atoms are removed from the carbon chain. In cellular metabolism, unsaturated fat molecules contain less energy (i.e., fewer calories) than an equivalent amount of saturated fat. The greater the degree of unsaturation in a fatty acid (i.e., the more double bonds in the fatty acid) the more vulnerable it is to lipid peroxidation (rancidity). Antioxidants can protect unsaturated fat from lipid peroxidation.
Composition of common fats
In chemical analysis, fats are broken down to their constituent fatty acids, which can be analyzed in various ways. In one approach, fats undergo transesterification to give fatty acid methyl esters (FAMEs), which are amenable to separation and quantitation by gas chromatography. Classically, unsaturated isomers were separated and identified by argentation thin-layer chromatography.
The saturated fatty acid components are almost exclusively stearic (C18) and palmitic acids (C16). Monounsaturated fats are almost exclusively oleic acid. Linolenic acid comprises most of the triunsaturated fatty acid component.
Chemistry and nutrition
Although polyunsaturated fats are protective against cardiac arrhythmias, a study of post-menopausal women with a relatively low fat intake showed that polyunsaturated fat is positively associated with progression of coronary atherosclerosis, whereas monounsaturated fat is not. This probably is an indication of the greater vulnerability of polyunsaturated fats to lipid peroxidation, against which vitamin E has been shown to be protective.
Examples
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What disease refers to the dangerous buildup of fatty materials in blood vessels?
A. arthritis
B. atherosclerosis
C. fibrosis
D. gout
Answer:
|
|
sciq-10609
|
multiple_choice
|
The movement of bone away from the midline of the body is called what?
|
[
"spring",
"continuation",
"extension",
"flexion"
] |
C
|
Relevant Documents:
Document 0:::
Kinesiology () is the scientific study of human body movement. Kinesiology addresses physiological, anatomical, biomechanical, pathological, neuropsychological principles and mechanisms of movement. Applications of kinesiology to human health include biomechanics and orthopedics; strength and conditioning; sport psychology; motor control; skill acquisition and motor learning; methods of rehabilitation, such as physical and occupational therapy; and sport and exercise physiology. Studies of human and animal motion include measures from motion tracking systems, electrophysiology of muscle and brain activity, various methods for monitoring physiological function, and other behavioral and cognitive research techniques.
Basics
Kinesiology studies the science of human movement, performance, and function by applying the fundamental sciences of Cell Biology, Molecular Biology, Chemistry, Biochemistry, Biophysics, Biomechanics, Biomathematics, Biostatistics, Anatomy, Physiology, Exercise Physiology, Pathophysiology, Neuroscience, and Nutritional science. A bachelor's degree in kinesiology can provide strong preparation for graduate study in biomedical research, as well as in professional programs, such as medicine, dentistry, physical therapy, and occupational therapy.
The term "kinesiologist" is not a licensed nor professional designation in many countries, with the notable exception of Canada. Individuals with training in this area can teach physical education, work as personal trainers and sport coaches, provide consulting services, conduct research and develop policies related to rehabilitation, human motor performance, ergonomics, and occupational health and safety. In North America, kinesiologists may study to earn a Bachelor of Science, Master of Science, or Doctorate of Philosophy degree in Kinesiology or a Bachelor of Kinesiology degree, while in Australia or New Zealand, they are often conferred an Applied Science (Human Movement) degree (or higher). Many doctor
Document 1:::
The Mechanostat is a term describing the way in which mechanical loading influences bone structure by changing the mass (amount of bone) and architecture (its arrangement) to provide a structure that resists habitual loads with an economical amount of material. As changes in the skeleton are accomplished by the processes of formation (bone growth) and resorption (bone loss), the mechanostat models the effect of influences on the skeleton by those processes, through their effector cells, osteocytes, osteoblasts, and osteoclasts. The term was invented by Harold Frost: an orthopaedic surgeon and researcher described extensively in articles referring to Frost and Webster Jee's Utah Paradigm of Skeletal Physiology in the 1960s. The Mechanostat is often defined as a practical description of Wolff's law described by Julius Wolff (1836–1902), but this is not completely accurate. Wolff wrote his treatises on bone after images of bone sections were described by Culmann and von Meyer, who suggested that the arrangement of the struts (trabeculae) at the ends of the bones were aligned with the stresses experienced by the bone. It has since been established that the static methods used for those calculations of lines of stress were inappropriate for work on what were, in effect, curved beams, a finding described by Lance Lanyon, a leading researcher in the area as "a triumph of a good idea over mathematics." While Wolff pulled together the work of Culmann and von Meyer, it was the French scientist Roux, who first used the term "functional adaptation" to describe the way that the skeleton optimized itself for its function, though Wolff is credited by many for that.
According to the Mechanostat, bone growth and bone loss is stimulated by the local, mechanical, elastic deformation of bone. The reason for the elastic deformation of bone is the peak forces caused by muscles (e.g. measurable using mechanography). The adaptation (feed-back control loop) of bone according to the maximu
Document 2:::
In anatomy, a process () is a projection or outgrowth of tissue from a larger body. For instance, in a vertebra, a process may serve for muscle attachment and leverage (as in the case of the transverse and spinous processes), or to fit (forming a synovial joint), with another vertebra (as in the case of the articular processes). The word is also used at the microanatomic level, where cells can have processes such as cilia or pedicels. Depending on the tissue, processes may also be called by other terms, such as apophysis, tubercle, or protuberance.
Examples
Examples of processes include:
The many processes of the human skull:
The mastoid and styloid processes of the temporal bone
The zygomatic process of the temporal bone
The zygomatic process of the frontal bone
The orbital, temporal, lateral, frontal, and maxillary processes of the zygomatic bone
The anterior, middle, and posterior clinoid processes and the petrosal process of the sphenoid bone
The uncinate process of the ethmoid bone
The jugular process of the occipital bone
The alveolar, frontal, zygomatic, and palatine processes of the maxilla
The ethmoidal and maxillary processes of the inferior nasal concha
The pyramidal, orbital, and sphenoidal processes of the palatine bone
The coronoid and condyloid processes of the mandible
The xiphoid process at the end of the sternum
The acromion and coracoid processes of the scapula
The coronoid process of the ulna
The radial and ulnar styloid processes
The uncinate processes of ribs found in birds and reptiles
The uncinate process of the pancreas
The spinous, articular, transverse, accessory, uncinate, and mammillary processes of the vertebrae
The trochlear process of the heel
The appendix, which is sometimes called the "vermiform process", notably in Gray's Anatomy
The olecranon process of the ulna
See also
Eminence
Tubercle
Appendage
Pedicle of vertebral arch
Notes
Document 3:::
The proximodistal trend is the tendency for more general functions of limbs to develop before more specific or fine motor skills. It comes from the Latin words proxim- which means "close" and "-dis-" meaning "away from", because the trend essentially describes a path from the center outward.
Document 4:::
Kinanthropometry is defined as the study of human size, shape, proportion, composition, maturation, and gross function, in order to understand growth, exercise, performance, and nutrition.
It is a scientific discipline that is concerned with the measurement of individuals in a variety of morphological perspectives, its application to movement and those factors which influence movement, including: components of body build, body measurements, proportions, composition, shape and maturation; motor abilities and cardiorespiratory capacities; physical activity including recreational activity as well as highly specialized sports performance. The predominant focus is upon obtaining detailed measurements upon the body composition of a given person.
Kinanthropometry is the interface between human anatomy and movement. It is the application of a series of measurements made on the body and from these we can use the data that we gather directly or perform calculations using the data to produce various indices and body composition predictions and to measure and describe physique.
Kinanthropometry is an unfamiliar word to most people outside the field of sport science. Describing the etymology of the word can help illustrate simply what it is about. However, summarizing its general scope in just a few sentences raises problems immediately: Is it a science? Why are its central definitions so ambiguous and varied? What purpose does kinanthropometric assessment really serve? And so on.
Defining a particular aim for kinanthropometry is central to its full understanding. Ross et al. (1972) said “K is a scientific discipline that studies the body size, the proportionality, the performance of movement, the body composition and principal functions of the body.” This well-cited definition is not completely exact, as the last four words show. What are the kinanthropometric methods that truly tell us something about prin
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The movement of bone away from the midline of the body is called what?
A. spring
B. continuation
C. extension
D. flexion
Answer:
|
|
sciq-10990
|
multiple_choice
|
What type of particles can a beta particle be?
|
[
"electron or positron",
"cytoplasm or positron",
"anode or positron",
"diode or positron"
] |
A
|
Relevant Documents:
Document 0:::
This is a list of known and hypothesized particles.
Standard Model elementary particles
Elementary particles are particles with no measurable internal structure; that is, it is unknown whether they are composed of other particles. They are the fundamental objects of quantum field theory. Many families and sub-families of elementary particles exist. Elementary particles are classified according to their spin. Fermions have half-integer spin while bosons have integer spin. All the particles of the Standard Model have been experimentally observed, including the Higgs boson in 2012. Many other hypothetical elementary particles, such as the graviton, have been proposed, but not observed experimentally.
Fermions
Fermions are one of the two fundamental classes of particles, the other being bosons. Fermion particles are described by Fermi–Dirac statistics and have quantum numbers described by the Pauli exclusion principle. They include the quarks and leptons, as well as any composite particles consisting of an odd number of these, such as all baryons and many atoms and nuclei.
Fermions have half-integer spin; for all known elementary fermions this is . All known fermions except neutrinos, are also Dirac fermions; that is, each known fermion has its own distinct antiparticle. It is not known whether the neutrino is a Dirac fermion or a Majorana fermion. Fermions are the basic building blocks of all matter. They are classified according to whether they interact via the strong interaction or not. In the Standard Model, there are 12 types of elementary fermions: six quarks and six leptons.
Quarks
Quarks are the fundamental constituents of hadrons and interact via the strong force. Quarks are the only known carriers of fractional charge, but because they combine in groups of three quarks (baryons) or in pairs of one quark and one antiquark (mesons), only integer charge is observed in nature. Their respective antiparticles are the antiquarks, which are identical except th
Document 1:::
This is a timeline of subatomic particle discoveries, including all particles thus far discovered which appear to be elementary (that is, indivisible) given the best available evidence. It also includes the discovery of composite particles and antiparticles that were of particular historical importance.
More specifically, the inclusion criteria are:
Elementary particles from the Standard Model of particle physics that have so far been observed. The Standard Model is the most comprehensive existing model of particle behavior. All Standard Model particles including the Higgs boson have been verified, and all other observed particles are combinations of two or more Standard Model particles.
Antiparticles which were historically important to the development of particle physics, specifically the positron and antiproton. The discovery of these particles required very different experimental methods from that of their ordinary matter counterparts, and provided evidence that all particles had antiparticles—an idea that is fundamental to quantum field theory, the modern mathematical framework for particle physics. In the case of most subsequent particle discoveries, the particle and its anti-particle were discovered essentially simultaneously.
Composite particles which were the first particle discovered containing a particular elementary constituent, or whose discovery was critical to the understanding of particle physics.
See also
List of baryons
List of mesons
List of particles
Document 2:::
In particle physics, the term particle zoo is used colloquially to describe the relatively extensive list of known subatomic particles by comparison to the variety of species in a zoo.
In the history of particle physics, the topic of particles was considered to be particularly confusing in the late 1960s. Before the discovery of quarks, hundreds of strongly interacting particles (hadrons) were known and believed to be distinct elementary particles. It was later discovered that they were not elementary particles, but rather composites of quarks. The set of particles believed today to be elementary is known as the Standard Model and includes quarks, bosons and leptons.
The term "subnuclear zoo" was coined or popularized by Robert Oppenheimer in 1956 at the VI Rochester International Conference on High Energy Physics.
See also
Eightfold way (physics)
List of mesons
List of baryons
List of particles
Document 3:::
The neutrinoless double beta decay (0νββ) is a commonly proposed and experimentally pursued theoretical radioactive decay process that would prove a Majorana nature of the neutrino particle. To this day, it has not been found.
The discovery of the neutrinoless double beta decay could shed light on the absolute neutrino masses and on their mass hierarchy (Neutrino mass). It would mean the first ever signal of the violation of total lepton number conservation. A Majorana nature of neutrinos would confirm that the neutrino is its own antiparticle.
To search for neutrinoless double beta decay, there are currently a number of experiments underway, with several future experiments for increased sensitivity proposed as well.
Historical development of the theoretical discussion
In 1939, Wendell H. Furry proposed the idea of the Majorana nature of the neutrino, which was associated with beta decays. Furry stated the transition probability to even be higher for the neutrinoless double beta decay. It was the first idea proposed to search for the violation of lepton number conservation. It has, since then, drawn attention to it for being useful to study the nature of neutrinos (see quote).
The Italian physicist Ettore Majorana first introduced the concept of a particle being its own antiparticle. Particles' nature was subsequently named after him as Majorana particles. The neutrinoless double beta decay is one method to search for the possible Majorana nature of neutrinos.
Physical relevance
Conventional double beta decay
Neutrinos are conventionally produced in weak decays. Weak beta decays normally produce one electron (or positron), emit an antineutrino (or neutrino) and increase the nucleus' proton number by one. The nucleus' mass (i.e. binding energy) is then lower and thus more favorable. There exist a number of elements that can decay into a nucleus of lower mass, but they cannot emit one electron only because the resulting nucleus is kinematically (that is, in
Document 4:::
In physics, a neutral particle is a particle without an electric charge, such as a neutron.
The term neutral particles should not be confused with truly neutral particles, the subclass of neutral particles that are also identical to their own antiparticles.
Stable or long-lived neutral particles
Long-lived neutral particles provide a challenge in the construction of particle detectors, because they do not interact electromagnetically, except possibly through their magnetic moments. This means that they do not leave tracks of ionized particles or curve in magnetic fields. Examples of such particles include photons, neutrons, and neutrinos.
Other neutral particles
Other neutral particles are very short-lived and decay before they could be detected even if they were charged. They have been observed only indirectly. They include:
Z bosons
Dozens of heavy neutral hadrons:
Neutral mesons such as the and
The neutral Delta baryon (), and other neutral baryons, such as the and
See also
Neutral particle oscillation
Truly neutral particle
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of particles can a beta particle be?
A. electron or positron
B. cytoplasm or positron
C. anode or positron
D. diode or positron
Answer:
|
|
scienceQA-6508
|
multiple_choice
|
How long is an adult alligator?
|
[
"12 inches",
"12 yards",
"12 miles",
"12 feet"
] |
D
|
The best estimate for the length of an adult alligator is 12 feet.
12 inches is too short. 12 yards and 12 miles are too long.
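The elimination reasoning above is a unit-conversion check; a minimal sketch (the conversion table uses standard US customary factors):

```python
# Feet per unit for each candidate length unit
FEET_PER = {"inches": 1 / 12, "yards": 3, "miles": 5280, "feet": 1}

# Convert each answer choice to feet for comparison
for value, unit in [(12, "inches"), (12, "yards"), (12, "miles"), (12, "feet")]:
    print(value, unit, "=", value * FEET_PER[unit], "feet")
# 12 inches is only 1 foot (too short); 12 yards = 36 ft and
# 12 miles = 63,360 ft (far too long); 12 feet is plausible.
```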
|
Relevant Documents:
Document 0:::
Abronia bogerti, known by the common name Bogert's arboreal alligator lizard, is a species of lizard in the family Anguidae. The species is endemic to Mexico.
Etymology
The specific name, bogerti, is in honor of American herpetologist Charles Mitchill Bogert.
Geographic range
A. bogerti is indigenous to eastern Oaxaca, Mexico. A single specimen, the holotype, of A. bogerti was collected in 1954, and it was not seen again until 2000, at which time a second specimen was photographed. The type locality is "north of Niltepec, between Cerro Atravesado and Sierra Madre, Oaxaca".
Behavior
A. bogerti is largely arboreal.
Reproduction
A. bogerti is viviparous.
Conservation status
Because the species A. bogerti was collected in the canopy of the forest, it is believed that deforestation and ongoing crop and livestock farming pose the largest threats to its survival. Mexican law protects the lizard.
Document 1:::
The SAT Subject Test in Biology was a one-hour multiple-choice test on biology given by the College Board. A student chose whether to take the test depending upon the college entrance requirements of the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; from 1995 until January 2005, they were known as SAT IIs. Of all SAT Subject Tests, the Biology E/M test was the only one that allowed the test taker a choice between ecological and molecular variants. A set of 60 questions was taken by all Biology test takers, plus a further 20 questions from either the E or the M variant. The test was graded on a scale between 200 and 800. The average score was 630 for Molecular and 591 for Ecological.
On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 2:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
Document 3:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover )
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, )
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson )
See also
Glossary of biology
A.P. Bio (TV show)
Document 4:::
This is a list of the fastest animals in the world, by types of animal.
Fastest organism
The peregrine falcon is the fastest bird, and the fastest member of the animal kingdom, with a diving speed of over . The fastest land animal is the cheetah. Among the fastest animals in the sea is the black marlin, with uncertain and conflicting reports of recorded speeds.
When drawing comparisons between different classes of animals, an alternative unit is sometimes used for organisms: body length per second. On this basis the 'fastest' organism on earth, relative to its body length, is the Southern Californian mite, Paratarsotomus macropalpis, which has a speed of 322 body lengths per second. The equivalent speed for a human, running as fast as this mite, would be , or approximately Mach 1.7. The speed of the P. macropalpis is far in excess of the previous record holder, the Australian tiger beetle Cicindela eburneola, which is the fastest insect in the world relative to body size, with a recorded speed of , or 171 body lengths per second. The cheetah, the fastest land mammal, scores at only 16 body lengths per second, while Anna's hummingbird has the highest known length-specific velocity attained by any vertebrate.
Invertebrates
Fish
Due to physical constraints, fish may be incapable of exceeding swim speeds of 36 km/h (22 mph). The larger reported figures below are therefore highly questionable:
Amphibians
Reptiles
Birds
Mammals
See also
Speed records
Notes
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How long is an adult alligator?
A. 12 inches
B. 12 yards
C. 12 miles
D. 12 feet
Answer:
|
sciq-2295
|
multiple_choice
|
What type of behavior is frogs croaking or deer clashing antlers an example of?
|
[
"courtship",
"mating",
"learned behavior",
"instincts"
] |
A
|
Relevant Documents:
Document 0:::
Molly R. Morris is an American behavioral ecologist who has worked with treefrogs and swordtail fishes in the areas of alternative reproductive tactics and sexual selection.
Morris received a Bachelor of Arts from Earlham College and a PhD from Indiana University. As a National Science Foundation postdoctoral fellow at the University of Texas at Austin, her work with Mike Ryan demonstrated equal fitnesses between alternative reproductive tactics in a species of swordtail fish. She joined the faculty at Ohio University in 1997, where she is now a professor in the Department of Biological Sciences. She is also the Associate Editor for the journal Behavior. Her publication credits include multiple papers on Animal behavior and Ecology. Her current research relates to diabetes, as well as behavioral ecology, using the swordtail fish Xiphophorus as a model organism.
Personal life
Morris is married to Kevin de Queiroz, an evolutionary biologist at the Smithsonian Institution's National Museum of Natural History.
Selected works
Document 1:::
An ethogram is a catalogue or inventory of behaviours or actions exhibited by an animal used in ethology.
The behaviours in an ethogram are usually defined to be mutually exclusive and objective, avoiding subjectivity and functional inference as to their possible purpose. For example, a species may use a putative threat display, which in the ethogram is given a descriptive name such as "head forward" or "chest-beating display", and not "head forward threat" or "chest-beating threat". This degree of objectivity is required because what looks like "courtship" might have a completely different function, and in addition, the same motor patterns in different species can have very different functions (e.g. tail wagging in cats and dogs). Objectivity and clarity in the definitions of behaviours also improve inter-observer reliability.
Often, ethograms are hierarchical in presentation. The defined behaviours are recorded under broader categories of behaviour which may allow functional inference such that "head forward" is recorded under "Aggression". In ethograms of social behaviour, the ethogram may also indicate the "Giver" and "Receiver" of activities.
Sometimes, the definition of a behaviour in an ethogram may have arbitrary components. For example, "Stereotyped licking" might be defined as "licking the bars of the cage more than 5 times in 30 seconds". The definition may be arguable, but if it is stated clearly, it fulfils the requirements of scientific repeatability and clarity of reporting and data recording.
Some ethograms are given in pictorial form and not only catalogue the behaviours but indicate the frequency of their occurrence and the probability that one behaviour follows another. This probability can be indicated numerically or by the thickness of an arrow connecting the two behaviours. Sometimes the proportion of time that each behaviour occupies can be represented in a pie chart or bar chart
Animal welfare science
Ethograms are used extens
Document 2:::
Frogs and toads produce a rich variety of sounds, calls, and songs during their courtship and mating rituals. The callers, usually males, make stereotyped sounds in order to advertise their location, their mating readiness and their willingness to defend their territory; listeners respond to the calls by return calling, by approach, and by going silent. These responses have been shown to be important for species recognition, mate assessment, and localization. Beginning with the pioneering experiments of Robert Capranica in the 1930s using playback techniques with normal and synthetic calls, behavioral biologists and neurobiologists have teamed up to use frogs and toads as a model system for understanding the auditory function and evolution. It is now considered an important example of the neural basis of animal behavior, because of the simplicity of the sounds, the relative ease with which neurophysiological recordings can be made from the auditory nerve, and the reliability of localization behavior. Acoustic communication is essential for the frog's survival in both territorial defense and in localization and attraction of mates. Sounds from frogs travel through the air, through water, and through the substrate. The neural basis of communication and audition gives insights into the science of sound applied to human communication.
Sound communication
Behavioral ecology
Frogs are more often heard than seen, and other frogs (and researchers) rely on their calls to identify them. Depending on the region that the frog lives in, certain times of the year are better for breeding than others, and frogs may live away from the best breeding grounds when it is not the species’ mating season. During the breeding season, they congregate to the best breeding site and compete for call time and recognition. Species that have a narrow mating season due to ponds that dry up have the most vigorous calls.
Calling strategy
Male-male competition
In many frog species only males call.
Document 3:::
The Tinbergen Lecture is an academic prize lecture awarded by the Association for the Study of Animal Behaviour (ASAB).
Lecturers
1974 W.H. Thorpe
1975 G.P. Baerends
1976 J. Maynard Smith
1977 F. Huber
1978 R.A. Hinde
1979 J. Bowlby
1980 W.D. Hamilton
1981 S.J. Gould
1982 H. Kummer
1983 Jörg-Peter Ewert
1984 Frank A. Beach
1985 Peter Marler
1986 Jürgen Aschoff
1987 Aubrey Manning
1988 Stephen T. Emlen
1989 P.P.G. Bateson
1990 J.D. Delius
1991 John R. Krebs
1992 E. Curio
1993 Linda Partridge
1994 Fernando Nottebohm
1995 G.A. Parker
1996 Serge Daan
1997 N.B. Davies
1998 Michael Land
1999 Bert Hölldobler
2000 Richard Dawkins
2001 Felicity Huntingford
2002 Marian Dawkins
2003 Tim Clutton-Brock
2004 Tim Birkhead
2005 P.K. McGregor
2006 Pat Monaghan
2007 M. Kirkpatrick
2008 Peter Slater
2009
2010 Laurent Keller
2011 Cancelled
2012 A Cockburn
2013 Marlene Zuk
2014 Innes Cuthill
2015 Nina Wedell
2016 Alex Kacelnik
2017 Christine Nicol
2018 Bart Kempenaers
2019 Rebecca Kilner
2020 Lars Chittka
Document 4:::
"Fixed action pattern" is an ethological term describing an instinctive behavioral sequence that is highly stereotyped and species-characteristic. Fixed action patterns are said to be produced by the innate releasing mechanism, a "hard-wired" neural network, in response to a sign/key stimulus or releaser. Once released, a fixed action pattern runs to completion.
This term is often associated with Konrad Lorenz, who is the founder of the concept. Lorenz identified six characteristics of fixed action patterns. These characteristics state that fixed action patterns are stereotyped, complex, species-characteristic, released, triggered, and independent of experience.
Fixed action patterns have been observed in many species, but most notably in fish and birds. Classic studies by Konrad Lorenz and Niko Tinbergen involve male stickleback mating behavior and greylag goose egg-retrieval behavior.
Fixed action patterns have been shown to be evolutionarily advantageous, as they increase both fitness and speed. However, as a result of their predictability, they may also be used as a means of exploitation. An example of this exploitation would be brood parasitism.
There are four exceptions to fixed action pattern rules: reduced response threshold, vacuum activity, displacement behavior, and graded response.
Characteristics
There are 6 characteristics of fixed action patterns. Fixed action patterns are said to be stereotyped, complex, species-characteristic, released, triggered, and independent of experience.
Stereotyped: Fixed action patterns occur in rigid, predictable, and highly-structured sequences.
Complex: Fixed action patterns are not a simple reflex. They are a complex pattern of behavior.
Species-characteristic: Fixed action patterns occur in all members of a species of a certain sex and/or a given age when they have attained a specific level of arousal.
Released: Fixed action patterns occur in response to a certain sign stimulus or releaser.
Triggered: Once relea
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of behavior is frogs croaking or deer clashing antlers an example of?
A. courtship
B. mating
C. learned behavior
D. instincts
Answer:
|
|
sciq-954
|
multiple_choice
|
Trophic level 4 = tertiary consumers that eat what kind of consumers?
|
[
"primary consumers",
"secondary consumers",
"insects",
"herbivores"
] |
B
|
Relevant Documents:
Document 0:::
The trophic level of an organism is the position it occupies in a food web. A food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finish with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a food "web". Ecological communities with higher biodiversity form more complex trophic paths.
The word trophic derives from the Greek τροφή (trophē) referring to food or nourishment.
History
The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman).
Overview
The three basic ways in which organisms get food are as producers, consumers, and decomposers.
Producers (autotrophs) are typically plants or algae. Plants and algae do not usually eat other organisms, but pull nutrients from the soil or the ocean and manufacture their own food using photosynthesis. For this reason, they are called primary producers. In this way, it is energy from the sun that usually powers the base of the food chain. An exception occurs in deep-sea hydrothermal ecosystems, where there is no sunlight. Here primary producers manufacture food through a process called chemosynthesis.
Consumers (heterotrophs) are species that cannot manufacture their own food and need to consume other organisms. Animals that eat primary producers (like plants) are called herbivores. Animals that eat other animals are called carnivores, and animals that eat both plants and other animals are called omnivores.
Decomposers (detritivores) break down dead plant and animal material and wastes and release it again as energy and nutrients into
Document 1:::
Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food.
Classification of consumer types
The standard categorization
Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores. Omnivores are both meat eaters and plant eaters. An extensive classification of consumer categories based on a list of feeding behaviors exists.
The Getz categorization
Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken down plant and animal) matter. It also distinguishes between consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage.
In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology. Thus a bestivore, such as a cat, preys on live animal
Document 2:::
The soil food web is the community of organisms living all or part of their lives in the soil. It describes a complex living system in the soil and how it interacts with the environment, plants, and animals.
Food webs describe the transfer of energy between species in an ecosystem. While a food chain examines one, linear, energy pathway through an ecosystem, a food web is more complex and illustrates all of the potential pathways. Much of this transferred energy comes from the sun. Plants use the sun’s energy to convert inorganic compounds into energy-rich, organic compounds, turning carbon dioxide and minerals into plant material by photosynthesis. Plant flowers exude energy-rich nectar above ground and plant roots exude acids, sugars, and ectoenzymes into the rhizosphere, adjusting the pH and feeding the food web underground.
Plants are called autotrophs because they make their own energy; they are also called producers because they produce energy available for other organisms to eat. Heterotrophs are consumers that cannot make their own food. In order to obtain energy they eat plants or other heterotrophs.
Above ground food webs
In above ground food webs, energy moves from producers (plants) to primary consumers (herbivores) and then to secondary consumers (predators). The phrase, trophic level, refers to the different levels or steps in the energy pathway. In other words, the producers, consumers, and decomposers are the main trophic levels. This chain of energy transferring from one species to another can continue several more times, but eventually ends. At the end of the food chain, decomposers such as bacteria and fungi break down dead plant and animal material into simple nutrients.
Methodology
The nature of soil makes direct observation of food webs difficult. Since soil organisms range in size from less than 0.1 mm (nematodes) to greater than 2 mm (earthworms) there are many different ways to extract them. Soil samples are often taken using a metal
Document 3:::
Trophic species are a scientific grouping of organisms according to their shared trophic (feeding) positions in a food web or food chain. Trophic species have identical prey and a shared set of predators in the food web. This means that members of a trophic species share many of the same kinds of ecological functions. The idea of trophic species was first devised by Joel Cohen and Frederick Briand in 1984 to redefine assessment of the ratio of predators to prey within a food web. The category may include species of plant, animal, a combination of plant and animal, and biological stages of an organism. The reassessment grouped similar species according to habit rather than genetics. This resulted in a ratio of predator to prey in food webs is generally 1:1. By assigning groups in a trophic manner, relationships are linear in scale. This allows for predicting the proportion of different trophic links in a community food web.
Document 4:::
Feeding is the process by which organisms, typically animals, obtain food. Terminology often uses either the suffixes -vore, -vory, or -vorous from Latin vorare, meaning "to devour", or -phage, -phagy, or -phagous from Greek φαγεῖν (), meaning "to eat".
Evolutionary history
The evolution of feeding is varied with some feeding strategies evolving several times in independent lineages. In terrestrial vertebrates, the earliest forms were large amphibious piscivores 400 million years ago. While amphibians continued to feed on fish and later insects, reptiles began exploring two new food types, other tetrapods (carnivory), and later, plants (herbivory). Carnivory was a natural transition from insectivory for medium and large tetrapods, requiring minimal adaptation (in contrast, a complex set of adaptations was necessary for feeding on highly fibrous plant materials).
Evolutionary adaptations
The specialization of organisms towards specific food sources is one of the major causes of evolution of form and function, such as:
mouth parts and teeth, such as in whales, vampire bats, leeches, mosquitos, predatory animals such as felines and fishes, etc.
distinct forms of beaks in birds, such as in hawks, woodpeckers, pelicans, hummingbirds, parrots, kingfishers, etc.
specialized claws and other appendages, for apprehending or killing (including fingers in primates)
changes in body colour for facilitating camouflage, disguise, setting up traps for preys, etc.
changes in the digestive system, such as the system of stomachs of herbivores, commensalism and symbiosis
Classification
By mode of ingestion
There are many modes of feeding that animals exhibit, including:
Filter feeding: obtaining nutrients from particles suspended in water
Deposit feeding: obtaining nutrients from particles suspended in soil
Fluid feeding: obtaining nutrients by consuming other organisms' fluids
Bulk feeding: obtaining nutrients by eating all of an organism.
Ram feeding and suction feeding: in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Trophic level 4 = tertiary consumers that eat what kind of consumers?
A. primary consumers
B. secondary consumers
C. insects
D. herbivores
Answer:
|
|
sciq-3048
|
multiple_choice
|
Varves form in lakes covered by what?
|
[
"coral reef",
"soot",
"ice",
"bridges"
] |
C
|
Relevant Documents:
Document 0:::
The Inverness Campus is an area in Inverness, Scotland. 5.5 hectares of the site have been designated as an enterprise area for life sciences by the Scottish Government. This designation is intended to encourage research and development in the field of life sciences, by providing incentives to locate at the site.
The enterprise area is part of a larger site, over 200 acres, which will house Inverness College, Scotland's Rural College (SRUC), the University of the Highlands and Islands, a health science centre and sports and other community facilities. The purpose built research hub will provide space for up to 30 staff and researchers, allowing better collaboration.
The Highland Science Academy will be located on the site, a collaboration formed by Highland Council, employers and public bodies. The academy will be aimed towards assisting young people to gain the necessary skills to work in the energy, engineering and life sciences sectors.
History
The site was identified in 2006. Work started to develop the infrastructure on the site in early 2012. A virtual tour was made available in October 2013 to help mark Doors Open Day.
The construction had reached the halfway stage in May 2014, meaning that it was on track to open its doors to receive its first students in August 2015.
In May 2014, work was due to commence on a building designed to provide office space and laboratories as part of the campus's "life science" sector. Morrison Construction have been appointed to undertake the building work.
Scotland's Rural College (SRUC) will be able to relocate their Inverness-based activities to the Campus. SRUC's research centre for Comparative Epidemiology and Medicine, and Agricultural Business Consultancy services could co-locate with UHI where their activities have complementary themes.
By the start of 2017, there were more than 600 people working at the site.
In June 2021, a new bridge opened connecting Inverness Campus to Inverness Shopping Park. It crosses the Aberdeen
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
Document 4:::
Lake 226 is one lake in Canada's Experimental Lakes Area (ELA) in Ontario. The ELA is a freshwater and fisheries research facility that operated these experiments alongside Fisheries and Oceans Canada and Environment Canada. In 1968 this area in northwest Ontario was set aside for limnological research, aiming to study the watershed of the 58 small lakes in this area. The ELA projects began as a response to the claim that carbon was the limiting agent causing eutrophication of lakes rather than phosphorus, and that monitoring phosphorus in the water would be a waste of money. This claim was made by soap and detergent companies, as these products do not biodegrade and can cause buildup of phosphates in water supplies that lead to eutrophication. The theory that carbon was the limiting agent was quickly debunked by the ELA Lake 227 experiment that began in 1969, which found that carbon could be drawn from the atmosphere to remain proportional to the input of phosphorus in the water. Experimental Lake 226 was then created to test phosphorus' impact on eutrophication by itself.
Lake ecosystem
Geography
The ELA lakes were far from human activities, therefore allowing the study of environmental conditions without human interaction. Lake 226 was specifically studied over a four-year period, from 1973–1977 to test eutrophication. Lake 226 itself is a 16.2 ha double basin lake located on highly metamorphosed granite known as Precambrian granite. The depth of the lake was measured in 1994 to be 14.7 m for the northeast basin and 11.6 m for the southeast basin. Lake 226 had a total lake volume of 9.6 × 105 m3, prior to the lake being additionally studied for drawdown alongside other ELA lakes. Due to this relatively small fetch of Lake 226, wind action is minimized, preventing resuspension of epilimnetic sediments.
Eutrophication experiment
To test the effects of fertilization on water quality and algae blooms, Lake 226 was split in half with a curtain. This curtain divi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Varves form in lakes covered by what?
A. coral reef
B. soot
C. ice
D. bridges
Answer:
|
|
sciq-3126
|
multiple_choice
|
What are organisms called, like the red-winged blackbird, that eat many different types of food?
|
[
"omniverous",
"generalists",
"specalist",
"carniverous"
] |
B
|
Relevant Documents:
Document 0:::
Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food.
Classification of consumer types
The standard categorization
Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores; omnivores are both meat eaters and plant eaters. An extensive classification of consumer categories based on a list of feeding behaviors exists.
The Getz categorization
Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken down plant and animal) matter. It also distinguishes between consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage.
In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology. Thus a bestivore, such as a cat, preys on live animal
Document 1:::
A generalist species is able to thrive in a wide variety of environmental conditions and can make use of a variety of different resources (for example, a heterotroph with a varied diet). A specialist species can thrive only in a narrow range of environmental conditions or has a limited diet. Most organisms do not all fit neatly into either group, however. Some species are highly specialized (the most extreme case being monophagous, eating one specific type of food), others less so, and some can tolerate many different environments. In other words, there is a continuum from highly specialized to broadly generalist species.
Description
Omnivores are usually generalists. Herbivores are often specialists, but those that eat a variety of plants may be considered generalists. A well-known example of a specialist animal is the monophagous koala, which subsists almost entirely on eucalyptus leaves. The raccoon is a generalist, because it has a natural range that includes most of North and Central America, and it is omnivorous, eating berries, insects such as butterflies, eggs, and various small animals.
The distinction between generalists and specialists is not limited to animals. For example, some plants require a narrow range of temperatures, soil conditions and precipitation to survive while others can tolerate a broader range of conditions. A cactus could be considered a specialist species. It will die during winters at high latitudes or if it receives too much water.
When body weight is controlled for, specialist feeders such as insectivores and frugivores have larger home ranges than generalists like some folivores (leaf-eaters), whose food-source is less abundant; they need a bigger area for foraging. An example comes from the research of Tim Clutton-Brock, who found that the black-and-white colobus, a folivore generalist, needs a home range of only 15 ha. On the other hand, the more specialized red colobus monkey has a home range of 70 ha, which it requires to
Document 2:::
Feeding is the process by which organisms, typically animals, obtain food. Terminology often uses either the suffixes -vore, -vory, or -vorous from Latin vorare, meaning "to devour", or -phage, -phagy, or -phagous from Greek φαγεῖν (phagein), meaning "to eat".
Evolutionary history
The evolution of feeding is varied with some feeding strategies evolving several times in independent lineages. In terrestrial vertebrates, the earliest forms were large amphibious piscivores 400 million years ago. While amphibians continued to feed on fish and later insects, reptiles began exploring two new food types, other tetrapods (carnivory), and later, plants (herbivory). Carnivory was a natural transition from insectivory for medium and large tetrapods, requiring minimal adaptation (in contrast, a complex set of adaptations was necessary for feeding on highly fibrous plant materials).
Evolutionary adaptations
The specialization of organisms towards specific food sources is one of the major causes of evolution of form and function, such as:
mouth parts and teeth, such as in whales, vampire bats, leeches, mosquitos, predatory animals such as felines and fishes, etc.
distinct forms of beaks in birds, such as in hawks, woodpeckers, pelicans, hummingbirds, parrots, kingfishers, etc.
specialized claws and other appendages, for apprehending or killing (including fingers in primates)
changes in body colour for facilitating camouflage, disguise, setting up traps for preys, etc.
changes in the digestive system, such as the system of stomachs of herbivores, commensalism and symbiosis
Classification
By mode of ingestion
There are many modes of feeding that animals exhibit, including:
Filter feeding: obtaining nutrients from particles suspended in water
Deposit feeding: obtaining nutrients from particles suspended in soil
Fluid feeding: obtaining nutrients by consuming other organisms' fluids
Bulk feeding: obtaining nutrients by eating all of an organism.
Ram feeding and suction feeding: in
Document 3:::
A herbivore is an animal anatomically and physiologically adapted to eating plant material, for example foliage or marine algae, for the main component of its diet. As a result of their plant diet, herbivorous animals typically have mouthparts adapted to rasping or grinding. Horses and other herbivores have wide flat teeth that are adapted to grinding grass, tree bark, and other tough plant material.
A large percentage of herbivores have mutualistic gut flora that help them digest plant matter, which is more difficult to digest than animal prey. This flora is made up of cellulose-digesting protozoans or bacteria.
Etymology
Herbivore is the anglicized form of a modern Latin coinage, herbivora, cited in Charles Lyell's 1830 Principles of Geology. Richard Owen employed the anglicized term in an 1854 work on fossil teeth and skeletons. Herbivora is derived from Latin herba 'small plant, herb' and vora, from vorare 'to eat, devour'.
Definition and related terms
Herbivory is a form of consumption in which an organism principally eats autotrophs such as plants, algae and photosynthesizing bacteria. More generally, organisms that feed on autotrophs in general are known as primary consumers.
Herbivory is usually limited to animals that eat plants. Insect herbivory can cause a variety of physical and metabolic alterations in the way the host plant interacts with itself and other surrounding biotic factors. Fungi, bacteria, and protists that feed on living plants are usually termed plant pathogens (plant diseases), while fungi and microbes that feed on dead plants are described as saprotrophs. Flowering plants that obtain nutrition from other living plants are usually termed parasitic plants. There is, however, no single exclusive and definitive ecological classification of consumption patterns; each textbook has its own variations on the theme.
Evolution of herbivory
The understanding of herbivory in geological time comes from three sources: fossilized plants, which may
Document 4:::
An omnivore is an animal that has the ability to eat and survive on both plant and animal matter. Obtaining energy and nutrients from plant and animal matter, omnivores digest carbohydrates, protein, fat, and fiber, and metabolize the nutrients and energy of the sources absorbed. Often, they have the ability to incorporate food sources such as algae, fungi, and bacteria into their diet.
Omnivores come from diverse backgrounds that often independently evolved sophisticated consumption capabilities. For instance, dogs evolved from primarily carnivorous organisms (Carnivora) while pigs evolved from primarily herbivorous organisms (Artiodactyla). Despite this, physical characteristics such as tooth morphology may be reliable indicators of diet in mammals, with such morphological adaptation having been observed in bears.
The variety of different animals that are classified as omnivores can be placed into further sub-categories depending on their feeding behaviors. Frugivores include cassowaries, orangutans and grey parrots; insectivores include swallows and pink fairy armadillos; granivores include large ground finches and mice.
All of these animals are omnivores, yet still fall into special niches in terms of feeding behavior and preferred foods. Being omnivores gives these animals more food security in stressful times or makes possible living in less consistent environments.
Etymology and definitions
The word omnivore derives from Latin omnis 'all' and vora, from vorare 'to eat or devour', having been coined by the French and later adopted by the English in the 1800s. Traditionally the definition for omnivory was entirely behavioral by means of simply "including both animal and vegetable tissue in the diet." In more recent times, with the advent of advanced technological capabilities in fields like gastroenterology, biologists have formulated a standardized variation of omnivore used for labeling a species' actual ability to obtain energy and nutrients from ma
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are organisms called, like the red-winged blackbird, that eat many different types of food?
A. omniverous
B. generalists
C. specalist
D. carniverous
Answer:
|
|
ai2_arc-29
|
multiple_choice
|
What characteristic of DNA results in cell differentiation in developing embryos?
|
[
"which genes are present",
"how many copies of each gene are present",
"which genes are active",
"what protein is produced by a gene"
] |
C
|
Relevant Documents:
Document 0:::
Embryomics is the identification, characterization and study of the diverse cell types which arise during embryogenesis, especially as this relates to the location and developmental history of cells in the embryo. Cell type may be determined according to several criteria: location in the developing embryo, gene expression as indicated by protein and nucleic acid markers and surface antigens, and also position on the embryogenic tree.
Embryome
There are many cell markers useful in distinguishing, classifying, separating and purifying the numerous cell types present at any given time in a developing organism. These cell markers consist of select RNAs and proteins present inside, and surface antigens present on the surface of, the cells making up the embryo. For any given cell type, these RNA and protein markers reflect the genes characteristically active in that cell type. The catalog of all these cell types and their characteristic markers is known as the organism's embryome. The word is a portmanteau of embryo and genome. “Embryome” may also refer to the totality of the physical cell markers themselves.
Embryogenesis
As an embryo develops from a fertilized egg, the single egg cell splits into many cells, which grow in number and migrate to the appropriate locations inside the embryo at appropriate times during development. As the embryo's cells grow in number and migrate, they also differentiate into an increasing number of different cell types, ultimately turning into the stable, specialized cell types characteristic of the adult organism. Each of the cells in an embryo contains the same genome, characteristic of the species, but the level of activity of each of the many thousands of genes that make up the complete genome varies with, and determines, a particular cell's type (e.g. neuron, bone cell, skin cell, muscle cell, etc.).
During embryo development (embryogenesis), many cell types are present which are not present in the adult organism. These temporary c
Document 1:::
A cell type is a classification used to identify cells that share morphological or phenotypical features. A multicellular organism may contain cells of a number of widely differing and specialized cell types, such as muscle cells and skin cells, that differ both in appearance and function yet have identical genomic sequences. Cells may have the same genotype, but belong to different cell types due to the differential regulation of the genes they contain. Classification of a specific cell type is often done through the use of microscopy (such as those from the cluster of differentiation family that are commonly used for this purpose in immunology). Recent developments in single cell RNA sequencing facilitated classification of cell types based on shared gene expression patterns. This has led to the discovery of many new cell types in e.g. mouse cortex, hippocampus, dorsal root ganglion and spinal cord.
Animals have evolved a greater diversity of cell types in a multicellular body (100–150 different cell types), compared with 10–20 in plants, fungi, and protists. The exact number of cell types is, however, undefined, and the Cell Ontology, as of 2021, lists over 2,300 different cell types.
Multicellular organisms
All higher multicellular organisms contain cells specialised for different functions. Most distinct cell types arise from a single totipotent cell that differentiates into hundreds of different cell types during the course of development. Differentiation of cells is driven by different environmental cues (such as cell–cell interaction) and intrinsic differences (such as those caused by the uneven distribution of molecules during division). Multicellular organisms are composed of cells that fall into two fundamental types: germ cells and somatic cells. During development, somatic cells will become more specialized and form the three primary germ layers: ectoderm, mesoderm, and endoderm. After formation of the three germ layers, cells will continue to special
Document 2:::
Evolutionary developmental biology (informally, evo-devo) is a field of biological research that compares the developmental processes of different organisms to infer how developmental processes evolved.
The field grew from 19th-century beginnings, where embryology faced a mystery: zoologists did not know how embryonic development was controlled at the molecular level. Charles Darwin noted that having similar embryos implied common ancestry, but little progress was made until the 1970s. Then, recombinant DNA technology at last brought embryology together with molecular genetics. A key early discovery was of homeotic genes that regulate development in a wide range of eukaryotes.
The field is composed of multiple core evolutionary concepts. One is deep homology, the finding that dissimilar organs such as the eyes of insects, vertebrates and cephalopod molluscs, long thought to have evolved separately, are controlled by similar genes such as pax-6, from the evo-devo gene toolkit. These genes are ancient, being highly conserved among phyla; they generate the patterns in time and space which shape the embryo, and ultimately form the body plan of the organism. Another is that species do not differ much in their structural genes, such as those coding for enzymes; what does differ is the way that gene expression is regulated by the toolkit genes. These genes are reused, unchanged, many times in different parts of the embryo and at different stages of development, forming a complex cascade of control, switching other regulatory genes as well as structural genes on and off in a precise pattern. This multiple pleiotropic reuse explains why these genes are highly conserved, as any change would have many adverse consequences which natural selection would oppose.
New morphological features and ultimately new species are produced by variations in the toolkit, either when genes are expressed in a new pattern, or when toolkit genes acquire additional functions. Another possibility
Document 3:::
According to the principle of nuclear equivalence, the nuclei of essentially all differentiated adult cells of an individual are genetically (though not necessarily metabolically) identical to one another and to the nucleus of the zygote from which they descended. This means that virtually all somatic cells in an adult have the same genes. However, different cells express different subsets of these genes.
The evidence for nuclear equivalence comes from cases in which differentiated cells or their nuclei have been found to retain the potential of directing the development of the entire organism. Such cells or nuclei are said to exhibit totipotency.
Document 4:::
Cell potency is a cell's ability to differentiate into other cell types.
The more cell types a cell can differentiate into, the greater its potency. Potency is also described as the gene activation potential within a cell, which, like a continuum, begins with totipotency to designate a cell with the most differentiation potential, followed by pluripotency, multipotency, oligopotency, and finally unipotency.
Totipotency
Totipotency (Lat. totipotentia, "ability for all [things]") is the ability of a single cell to divide and produce all of the differentiated cells in an organism. Spores and zygotes are examples of totipotent cells.
In the spectrum of cell potency, totipotency represents the cell with the greatest differentiation potential, being able to differentiate into any embryonic cell, as well as any extraembryonic cell. In contrast, pluripotent cells can only differentiate into embryonic cells.
A fully differentiated cell can return to a state of totipotency. The conversion to totipotency is complex and not fully understood. In 2011, research revealed that cells may differentiate not into a fully totipotent cell, but instead into a "complex cellular variation" of totipotency. Stem cells resembling totipotent blastomeres from 2-cell stage embryos can arise spontaneously in mouse embryonic stem cell cultures and also can be induced to arise more frequently in vitro through down-regulation of the chromatin assembly activity of CAF-1.
The human development model can be used to describe how totipotent cells arise. Human development begins when a sperm fertilizes an egg and the resulting fertilized egg creates a single totipotent cell, a zygote. In the first hours after fertilization, this zygote divides into identical totipotent cells, which can later develop into any of the three germ layers of a human (endoderm, mesoderm, or ectoderm), or into cells of the placenta (cytotrophoblast or syncytiotrophoblast). After reaching a 16-cell stage, the totipotent cells of the morula d
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What characteristic of DNA results in cell differentiation in developing embryos?
A. which genes are present
B. how many copies of each gene are present
C. which genes are active
D. what protein is produced by a gene
Answer:
|
|
sciq-5294
|
multiple_choice
|
What do organisms use to store energy?
|
[
"proteins",
"tissues",
"metabolytes",
"lipids"
] |
D
|
Relevant Documents:
Document 0:::
An energy budget is a balance sheet of energy income against expenditure. It is studied in the field of Energetics which deals with the study of energy transfer and transformation from one form to another. Calorie is the basic unit of measurement. An organism in a laboratory experiment is an open thermodynamic system, exchanging energy with its surroundings in three ways - heat, work and the potential energy of biochemical compounds.
Organisms use ingested food resources (C=consumption) as building blocks in the synthesis of tissues (P=production) and as fuel in the metabolic process that power this synthesis and other physiological processes (R=respiratory loss). Some of the resources are lost as waste products (F=faecal loss, U=urinary loss). All these aspects of metabolism can be represented in energy units. The basic model of energy budget may be shown as:
P = C - R - U - F or
P = C - (R + U + F) or
C = P + R + U + F
All the aspects of metabolism can be represented in energy units (e.g. joules (J); 1 kilocalorie = 4.2 kJ).
Energy used for metabolism will be
R = C - (F + U + P)
Energy used in the maintenance will be
R + F + U = C - P
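The budget identity above lends itself to a quick numerical check. The following sketch is illustrative only: the values and the function name are hypothetical, with energies in kilojoules.

```python
# Sketch of the basic energy-budget model P = C - (R + U + F).
# All values are hypothetical and in kilojoules, for illustration only.

def production(consumption, respiration, urinary_loss, faecal_loss):
    """Energy available for tissue synthesis (P) from ingested energy (C)."""
    return consumption - (respiration + urinary_loss + faecal_loss)

C = 100.0  # consumption (ingested food energy)
R = 60.0   # respiratory loss
U = 5.0    # urinary loss
F = 20.0   # faecal loss

P = production(C, R, U, F)
print(P)  # 15.0 kJ fixed as new tissue
# The rearranged form C = P + R + U + F must balance exactly:
assert abs(C - (P + R + U + F)) < 1e-9
```

Any one term can be recovered from the other four in the same way, e.g. respiratory loss as R = C - (F + U + P).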
Endothermy and ectothermy
Energy budget allocation varies for endotherms and ectotherms. Ectotherms rely on the environment as a heat source while endotherms maintain their body temperature through the regulation of metabolic processes. The heat produced in association with metabolic processes facilitates the active lifestyles of endotherms and their ability to travel far distances over a range of temperatures in the search for food. Ectotherms are limited by the ambient temperature of the environment around them, but the lack of substantial metabolic heat production accounts for an energetically inexpensive metabolic rate. The energy demands of ectotherms are generally one tenth of those required for endotherms.
Document 1:::
Animal nutrition focuses on the dietary nutrients needs of animals, primarily those in agriculture and food production, but also in zoos, aquariums, and wildlife management.
Constituents of diet
Macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built) and energy. Some of the structural material can be used to generate energy internally, though the net energy depends on such factors as absorption and digestive effort, which vary substantially from instance to instance. Vitamins, minerals, fiber, and water do not provide energy, but are required for other reasons. A third class dietary material, fiber (i.e., non-digestible material such as cellulose), seems also to be required, for both mechanical and biochemical reasons, though the exact reasons remain unclear.
Molecules of carbohydrates and fats consist of carbon, hydrogen, and oxygen atoms. Carbohydrates range from simple monosaccharides (glucose, fructose, galactose) to complex polysaccharides (starch). Fats are triglycerides, made of assorted fatty acid monomers bound to glycerol backbone. Some fatty acids, but not all, are essential in the diet: they cannot be synthesized in the body. Protein molecules contain nitrogen atoms in addition to carbon, oxygen, and hydrogen. The fundamental components of protein are nitrogen-containing amino acids. Essential amino acids cannot be made by the animal. Some of the amino acids are convertible (with the expenditure of energy) to glucose and can be used for energy production just as ordinary glucose. By breaking down existing protein, some glucose can be produced internally; the remaining amino acids are discarded, primarily as urea in urine. This occurs normally only during prolonged starvation.
Other dietary substances found in plant foods (phytochemicals, polyphenols) are not identified as essential nutrients but appear to impact healt
Document 2:::
Bioenergetics is a field in biochemistry and cell biology that concerns energy flow through living systems. This is an active area of biological research that includes the study of the transformation of energy in living organisms and the study of thousands of different cellular processes such as cellular respiration and the many other metabolic and enzymatic processes that lead to production and utilization of energy in forms such as adenosine triphosphate (ATP) molecules. That is, the goal of bioenergetics is to describe how living organisms acquire and transform energy in order to perform biological work. The study of metabolic pathways is thus essential to bioenergetics.
Overview
Bioenergetics is the part of biochemistry concerned with the energy involved in making and breaking of chemical bonds in the molecules found in biological organisms. It can also be defined as the study of energy relationships and energy transformations and transductions in living organisms. The ability to harness energy from a variety of metabolic pathways is a property of all living organisms. Growth, development, anabolism and catabolism are some of the central processes in the study of biological organisms, because the role of energy is fundamental to such biological processes. Life is dependent on energy transformations; living organisms survive because of exchange of energy between living tissues/ cells and the outside environment. Some organisms, such as autotrophs, can acquire energy from sunlight (through photosynthesis) without needing to consume nutrients and break them down. Other organisms, like heterotrophs, must intake nutrients from food to be able to sustain energy by breaking down chemical bonds in nutrients during metabolic processes such as glycolysis and the citric acid cycle. Importantly, as a direct consequence of the First Law of Thermodynamics, autotrophs and heterotrophs participate in a universal metabolic network—by eating autotrophs (plants), heterotrophs ha
Document 3:::
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles a
Document 4:::
In biology, energy homeostasis, or the homeostatic control of energy balance, is a biological process that involves the coordinated homeostatic regulation of food intake (energy inflow) and energy expenditure (energy outflow). The human brain, particularly the hypothalamus, plays a central role in regulating energy homeostasis and generating the sense of hunger by integrating a number of biochemical signals that transmit information about energy balance. Fifty percent of the energy from glucose metabolism is immediately converted to heat.
Energy homeostasis is an important aspect of bioenergetics.
Definition
In the US, biological energy is expressed using the energy unit Calorie with a capital C (i.e. a kilocalorie), which equals the energy needed to increase the temperature of 1 kilogram of water by 1 °C (about 4.18 kJ).
Energy balance, through biosynthetic reactions, can be measured with the following equation:
Energy intake (from food and fluids) = Energy expended (through work and heat generated) + Change in stored energy (body fat and glycogen storage)
The first law of thermodynamics states that energy can be neither created nor destroyed. But energy can be converted from one form of energy to another. So, when a calorie of food energy is consumed, one of three particular effects occur within the body: a portion of that calorie may be stored as body fat, triglycerides, or glycogen, transferred to cells and converted to chemical energy in the form of adenosine triphosphate (ATP – a coenzyme) or related compounds, or dissipated as heat.
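The balance equation can be illustrated with hypothetical daily figures (the numbers and function name below are invented for this sketch; 1 kcal ≈ 4.18 kJ):

```python
# Sketch of energy balance: intake = expenditure + change in stored energy.
# Hypothetical daily values in kilocalories (1 kcal is about 4.18 kJ).

KJ_PER_KCAL = 4.18

def stored_energy_change(intake_kcal, expenditure_kcal):
    """Positive result -> net storage (body fat/glycogen); negative -> net loss."""
    return intake_kcal - expenditure_kcal

delta = stored_energy_change(2500.0, 2200.0)
print(delta)                       # 300.0 kcal stored
print(round(delta * KJ_PER_KCAL))  # 1254 kJ
```

A negative result (expenditure exceeding intake) corresponds to drawing down stored fat or glycogen, consistent with the first law: the energy is converted, not destroyed.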
Energy
Intake
Energy intake is measured by the amount of calories consumed from food and fluids. Energy intake is modulated by hunger, which is primarily regulated by the hypothalamus, and choice, which is determined by the sets of brain structures that are responsible for stimulus control (i.e., operant conditioning and classical conditioning) and cognitive control of eating behavior. Hunger is regulated in part by the act
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do organisms use to store energy?
A. proteins
B. tissues
C. metabolytes
D. lipids
Answer:
|
|
sciq-5008
|
multiple_choice
|
Receptors for what chemical messengers recognize molecules with specific shapes and side groups, and respond only to those that are recognized?
|
[
"acids",
"enzymes",
"chromosomes",
"hormones"
] |
D
|
Relevant Documents:
Document 0:::
In biochemistry and pharmacology, receptors are chemical structures, composed of protein, that receive and transduce signals that may be integrated into biological systems. These signals are typically chemical messengers which bind to a receptor and produce physiological responses such as change in the electrical activity of a cell. For example, GABA, an inhibitory neurotransmitter inhibits electrical activity of neurons by binding to GABA receptors. There are three main ways the action of the receptor can be classified: relay of signal, amplification, or integration. Relaying sends the signal onward, amplification increases the effect of a single ligand, and integration allows the signal to be incorporated into another biochemical pathway.
Receptor proteins can be classified by their location. Cell surface receptors also known as transmembrane receptors, include ligand-gated ion channels, G protein-coupled receptors, and enzyme-linked hormone receptors. Intracellular receptors are those found inside the cell, and include cytoplasmic receptors and nuclear receptors. A molecule that binds to a receptor is called a ligand and can be a protein, peptide (short protein), or another small molecule, such as a neurotransmitter, hormone, pharmaceutical drug, toxin, calcium ion or parts of the outside of a virus or microbe. An endogenously produced substance that binds to a particular receptor is referred to as its endogenous ligand. E.g. the endogenous ligand for the nicotinic acetylcholine receptor is acetylcholine, but it can also be activated by nicotine and blocked by curare. Receptors of a particular type are linked to specific cellular biochemical pathways that correspond to the signal. While numerous receptors are found in most cells, each receptor will only bind with ligands of a particular structure. This has been analogously compared to how locks will only accept specifically shaped keys. When a ligand binds to a corresponding receptor, it activates or inhibits th
Document 1:::
The adequate stimulus is a property of a sensory receptor that determines the type of energy to which a sensory receptor responds with the initiation of sensory transduction. Sensory receptors are specialized to respond to certain types of stimuli. The adequate stimulus is the amount and type of energy required to stimulate a specific sensory organ.
Many sensory stimuli are categorized by the mechanics by which they function and by their purpose. Sensory receptors within the body are typically specialized to respond to a single stimulus. Sensory receptors are present throughout the body, and a certain amount of a stimulus is required to trigger them. These receptors allow the brain to interpret signals from the body, enabling a person to respond to a stimulus if it reaches the minimum threshold needed to signal the brain. The sensory receptors activate the sensory transduction system, which in turn sends an electrical or chemical stimulus to a cell, and the cell then responds with electrical signals to the brain produced from action potentials. The minuscule signals that result from the stimuli and enter the cells must be amplified and turned into a sufficient signal that will be sent to the brain.
A sensory receptor's adequate stimulus is determined by the signal transduction mechanisms and ion channels incorporated in the sensory receptor's plasma membrane. Adequate stimuli are often discussed in relation to sensory thresholds and absolute thresholds to describe the smallest amount of a stimulus needed to activate a feeling within the sensory organ.
Categorizations of receptors
They are categorized through the stimuli to which they respond. Adequate stimulus are also often categorized based on their purpose and locations within the body. The following are the categorizations of receptors within the body:
Visual – These are found in the visual organs of species and are respon
Document 2:::
In biology, cell signaling (cell signalling in British English) or cell communication is the ability of a cell to receive, process, and transmit signals with its environment and with itself. Cell signaling is a fundamental property of all cellular life in prokaryotes and eukaryotes. Signals that originate from outside a cell (or extracellular signals) can be physical agents like mechanical pressure, voltage, temperature, light, or chemical signals (e.g., small molecules, peptides, or gas). Cell signaling can occur over short or long distances, and as a result can be classified as autocrine, juxtacrine, intracrine, paracrine, or endocrine. Signaling molecules can be synthesized from various biosynthetic pathways and released through passive or active transports, or even from cell damage.
Receptors play a key role in cell signaling as they are able to detect chemical signals or physical stimuli. Receptors are generally proteins located on the cell surface or within the interior of the cell such as the cytoplasm, organelles, and nucleus. Cell surface receptors usually bind with extracellular signals (or ligands), which causes a conformational change in the receptor that leads it to initiate enzymic activity, or to open or close ion channel activity. Some receptors do not contain enzymatic or channel-like domains but are instead linked to enzymes or transporters. Other intracellular receptors like nuclear receptors have a different mechanism such as changing their DNA binding properties and cellular localization to the nucleus.
Signal transduction begins with the transformation (or transduction) of a signal into a chemical one, which can directly activate an ion channel (ligand-gated ion channel) or initiate a second messenger system cascade that propagates the signal through the cell. Second messenger systems can amplify a signal, in which activation of a few receptors results in multiple secondary messengers being activated, thereby amplifying the initial sig
Document 3:::
A heteromer is something that consists of different parts; the antonym of homomeric. Examples are:
Biology
Spinal neurons that pass over to the opposite side of the spinal cord.
A protein complex that contains two or more different polypeptides.
Pharmacology
Ligand-gated ion channels such as the nicotinic acetylcholine receptor and GABAA receptor are composed of five subunits arranged around a central pore that opens to allow ions to pass through. There are many different subunits available that can come together in a wide variety of combinations to form different subtypes of the ion channel. Sometimes the channel can be made from only one type of subunit, such as the α7 nicotinic receptor, which is made up from five α7 subunits, and so is a homomer rather than a heteromer, but more commonly several different types of subunit will come together to form a heteromeric complex (e.g., the α4β2 nicotinic receptor, which is made up from two α4 subunits and three β2 subunits). Because the different ion channel subtypes are expressed to different extents in different tissues, this allows selective modulation of ion transport and means that a single neurotransmitter can produce varying effects depending on where in the body it is released.
G protein-coupled receptors are composed of seven membrane-spanning alpha-helical segments that are usually linked together into a single folded chain to form the receptor complex. However, research has demonstrated that a number of GPCRs are also capable of forming heteromers from a combination of two or more individual GPCR subunits under some circumstances, especially where several different GPCRs are densely expressed in the same neuron. Such heteromers may be between receptors from the same family (e.g., adenosine A1/A2A heteromers and dopamine D1/D2 and D1/D3 heteromers) or between entirely unrelated receptors such as CB1/A2A, glutamate mGluR5 / adenosine A2A heteromers, cannabinoid CB1 / dopamine D2 heteromers, and even CB1/A2
Document 4:::
In physiology, a stimulus is a detectable change in the physical or chemical structure of an organism's internal or external environment. The ability of an organism or organ to detect external stimuli, so that an appropriate reaction can be made, is called sensitivity (excitability). Sensory receptors can receive information from outside the body, as in touch receptors found in the skin or light receptors in the eye, as well as from inside the body, as in chemoreceptors and mechanoreceptors. When a stimulus is detected by a sensory receptor, it can elicit a reflex via stimulus transduction. An internal stimulus is often the first component of a homeostatic control system. External stimuli are capable of producing systemic responses throughout the body, as in the fight-or-flight response. In order for a stimulus to be detected with high probability, its level of strength must exceed the absolute threshold; if a signal does reach threshold, the information is transmitted to the central nervous system (CNS), where it is integrated and a decision on how to react is made. Although stimuli commonly cause the body to respond, it is the CNS that finally determines whether a signal causes a reaction or not.
Types
Internal
Homeostatic imbalances
Homeostatic imbalances are the main driving force for changes of the body. These stimuli are monitored closely by receptors and sensors in different parts of the body. These sensors are mechanoreceptors, chemoreceptors and thermoreceptors that, respectively, respond to pressure or stretching, chemical changes, or temperature changes. Examples of mechanoreceptors include baroreceptors which detect changes in blood pressure, Merkel's discs which can detect sustained touch and pressure, and hair cells which detect sound stimuli. Homeostatic imbalances that can serve as internal stimuli include nutrient and ion levels in the blood, oxygen levels, and water levels. Deviations from the homeostatic ideal may generate a homeostatic emotio
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Receptors for what chemical messengers recognize molecules with specific shapes and side groups, and respond only to those that are recognized?
A. acids
B. enzymes
C. chromosomes
D. hormones
Answer:
|
|
sciq-871
|
multiple_choice
|
In what form of lipids do cells store energy for long-term use?
|
[
"nuts",
"fat",
"meat",
"treasure"
] |
B
|
Relevant Documents:
Document 0:::
The lipidome refers to the totality of lipids in cells. Lipids are one of the four major molecular components of biological organisms, along with proteins, sugars and nucleic acids. Lipidome is a term coined in the context of omics in modern biology, within the field of lipidomics. It can be studied using mass spectrometry and bioinformatics as well as traditional lab-based methods. The lipidome of a cell can be subdivided into the membrane-lipidome and mediator-lipidome.
The first cell lipidome to be published was that of a mouse macrophage in 2010. The lipidome of the yeast Saccharomyces cerevisiae has been characterised with an estimated 95% coverage; studies of the human lipidome are ongoing. For example, the human plasma lipidome consists of almost 600 distinct molecular species. Research suggests that the lipidome of an individual may be able to indicate cancer risks associated with dietary fats, particularly breast cancer.
See also
Genome
Proteome
Glycome
Document 1:::
Fat globules (also known as mature lipid droplets) are individual pieces of intracellular fat in human cell biology. The lipid droplet's function is to store energy for the organism's body, and it is found in every type of adipocyte. They can consist of a vacuole, a droplet of triglyceride, or any other blood lipid, as opposed to fat cells in between other cells in an organ. They contain a hydrophobic core and are encased in a phospholipid monolayer membrane. Due to their hydrophobic nature, lipids and lipid digestive derivatives must be transported in the globular form within the cell, blood, and tissue spaces.
The formation of a fat globule starts within the membrane bilayer of the endoplasmic reticulum. It starts as a bud and detaches from the ER membrane to join other droplets. After the droplets fuse, a mature droplet (full-fledged globule) is formed and can then partake in neutral lipid synthesis or lipolysis.
Globules of fat are emulsified in the duodenum into smaller droplets by bile salts during food digestion, speeding up the rate of digestion by the enzyme lipase at a later point in digestion. Bile salts possess detergent properties that allow them to emulsify fat globules into smaller emulsion droplets, and then into even smaller micelles. This increases the surface area for lipid-hydrolyzing enzymes to act on the fats.
Micelles are roughly 200 times smaller than fat emulsion droplets, allowing them to facilitate the transport of monoglycerides and fatty acids across the surface of the enterocyte, where absorption occurs.
Milk fat globules (MFGs) are another form of intracellular fat found in the mammary glands of female mammals. Their function is to provide enriching glycoproteins from the female to their offspring. They are formed in the endoplasmic reticulum found in the mammary epithelial lactating cell. The globules are made up of triacylglycerols encased in cellular membranes and proteins like adipophilin and TIP 47. The proteins are spread througho
Document 2:::
A saponifiable lipid contains the ester functional group. Such lipids are made up of long-chain carboxylic (or fatty) acids connected to an alcoholic functional group through an ester linkage, which can undergo a saponification reaction. The fatty acids are released upon base-catalyzed ester hydrolysis to form ionized salts. The primary saponifiable lipids are free fatty acids, neutral glycerolipids, glycerophospholipids, sphingolipids, and glycolipids.
By comparison, the non-saponifiable class of lipids is made up of terpenes, including fat-soluble A and E vitamins, and certain steroids, such as cholesterol.
Applications
Saponifiable lipids have relevant applications as a source of biofuel and can be extracted from various forms of biomass to produce biodiesel.
See also
Lipids
Simple lipid
Document 3:::
Fatty acid metabolism consists of various metabolic processes involving or closely related to fatty acids, a family of molecules classified within the lipid macronutrient category. These processes can mainly be divided into (1) catabolic processes that generate energy and (2) anabolic processes where they serve as building blocks for other compounds.
In catabolism, fatty acids are metabolized to produce energy, mainly in the form of adenosine triphosphate (ATP). When compared to other macronutrient classes (carbohydrates and protein), fatty acids yield the most ATP on an energy per gram basis, when they are completely oxidized to CO2 and water by beta oxidation and the citric acid cycle. Fatty acids (mainly in the form of triglycerides) are therefore the foremost storage form of fuel in most animals, and to a lesser extent in plants.
In anabolism, intact fatty acids are important precursors to triglycerides, phospholipids, second messengers, hormones and ketone bodies. For example, phospholipids, which are built from fatty acids, form the phospholipid bilayers out of which all the membranes of the cell are constructed. Phospholipids comprise the plasma membrane and other membranes that enclose all the organelles within the cells, such as the nucleus, the mitochondria, endoplasmic reticulum, and the Golgi apparatus. In another type of anabolism, fatty acids are modified to form other compounds such as second messengers and local hormones. The prostaglandins made from arachidonic acid stored in the cell membrane are probably the best-known of these local hormones.
Fatty acid catabolism
Fatty acids are stored as triglycerides in the fat depots of adipose tissue. Between meals they are released as follows:
Lipolysis, the removal of the fatty acid chains from the glycerol to which they are bound in their storage form as triglycerides (or fats), is carried out by lipases. These lipases are activated by high epinephrine and glucagon levels in the blood (or norepinephrine secreted by s
Document 4:::
An unsaturated fat is a fat or fatty acid in which there is at least one double bond within the fatty acid chain. A fatty acid chain is monounsaturated if it contains one double bond, and polyunsaturated if it contains more than one double bond.
A saturated fat has no carbon-to-carbon double bonds, so it has the maximum possible number of hydrogens bonded to the carbons and is "saturated" with hydrogen atoms. To form carbon-to-carbon double bonds, hydrogen atoms are removed from the carbon chain. In cellular metabolism, unsaturated fat molecules contain less energy (i.e., fewer calories) than an equivalent amount of saturated fat. The greater the degree of unsaturation in a fatty acid (i.e., the more double bonds in the fatty acid), the more vulnerable it is to lipid peroxidation (rancidity). Antioxidants can protect unsaturated fat from lipid peroxidation.
Composition of common fats
In chemical analysis, fats are broken down to their constituent fatty acids, which can be analyzed in various ways. In one approach, fats undergo transesterification to give fatty acid methyl esters (FAMEs), which are amenable to separation and quantitation by gas chromatography. Classically, unsaturated isomers were separated and identified by argentation thin-layer chromatography.
The saturated fatty acid components are almost exclusively stearic (C18) and palmitic acids (C16). Monounsaturated fats are almost exclusively oleic acid. Linolenic acid comprises most of the triunsaturated fatty acid component.
Chemistry and nutrition
Although polyunsaturated fats are protective against cardiac arrhythmias, a study of post-menopausal women with a relatively low fat intake showed that polyunsaturated fat is positively associated with progression of coronary atherosclerosis, whereas monounsaturated fat is not. This probably is an indication of the greater vulnerability of polyunsaturated fats to lipid peroxidation, against which vitamin E has been shown to be protective.
Examples
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In what form of lipids do cells store energy for long-term use?
A. nuts
B. fat
C. meat
D. treasure
Answer:
|
|
sciq-6025
|
multiple_choice
|
Thomson’s plum pudding model shows the structure of what?
|
[
"nucleus",
"DNA",
"atom",
"cell"
] |
C
|
Relavent Documents:
Document 0:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. these come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 1:::
Tech City College (Formerly STEM Academy) is a free school sixth form located in the Islington area of the London Borough of Islington, England.
It originally opened in September 2013 as STEM Academy Tech City and specialised in Science, Technology, Engineering and Maths (STEM) and the Creative Application of Maths and Science. In September 2015, STEM Academy joined the Aspirations Academy Trust and was renamed Tech City College. Tech City College offers A-levels and BTECs as programmes of study for students.
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
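As a worked check on the sample conceptual question above (this derivation is added here for illustration and is not part of the original text), the adiabatic relation for an ideal gas settles the answer without any plugging-in of numbers:

```latex
% Reversible adiabatic process of an ideal gas: dQ = 0, so
%   dU = -p\,dV, \quad dU = n C_V\, dT
% Combining these with the ideal gas law pV = nRT yields
%   T V^{\gamma - 1} = \text{const}, \qquad \gamma = C_p / C_V > 1.
% During expansion V increases, so T must decrease: the gas does work
% at the expense of its internal energy. The correct choice is
% "decreases".
```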
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry) other on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mindneuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, it has provided information on certain diseases which has overall aided in the understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
Document 4:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Thomson’s plum pudding model shows the structure of what?
A. nucleus
B. DNA
C. atom
D. cell
Answer:
|
|
sciq-2430
|
multiple_choice
|
Which country is formed by a hotspot along the mid-atlantic ridge?
|
[
"finland",
"norway",
"Switzerland",
"iceland"
] |
D
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 1:::
Tech City College (Formerly STEM Academy) is a free school sixth form located in the Islington area of the London Borough of Islington, England.
It originally opened in September 2013 as STEM Academy Tech City and specialised in Science, Technology, Engineering and Maths (STEM) and the Creative Application of Maths and Science. In September 2015, STEM Academy joined the Aspirations Academy Trust and was renamed Tech City College. Tech City College offers A-levels and BTECs as programmes of study for students.
Document 2:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 3:::
The Hawaiian Islands (Hawaiian: Nā Moku o Hawai‘i) are an archipelago of eight major volcanic islands, several atolls, and numerous smaller islets in the North Pacific Ocean, extending some from the island of Hawaiʻi in the south to northernmost Kure Atoll. Formerly called the Sandwich Islands, the present name for the archipelago is derived from the name of its largest island, Hawaiʻi.
The archipelago sits on the Pacific Plate. The islands are exposed peaks of a great undersea mountain range known as the Hawaiian–Emperor seamount chain, formed by volcanic activity over a hotspot in the Earth's mantle. The islands are about from the nearest continent and are part of the Polynesia subregion of Oceania.
The U.S. state of Hawaii occupies the archipelago almost in its entirety (including the mostly uninhabited Northwestern Hawaiian Islands), with the sole exception of Midway Atoll (a United States Minor Outlying Island). Hawaii is the only U.S. state that is situated entirely on an archipelago, and the only state not geographically connected with North America. The Northwestern islands (sometimes called the Leeward Islands) and surrounding seas are protected as a National Monument and World Heritage Site.
Islands and reefs
The date of the first settlements of the Hawaiian Islands is a topic of continuing debate. Archaeological evidence seems to indicate a settlement as early as 124 AD.
Captain James Cook, RN, visited the islands on January 18, 1778, and named them the "Sandwich Islands" in honor of The 4th Earl of Sandwich, who as the First Lord of the Admiralty was one of his sponsors. This name was in use until the 1840s, when the local name "Hawaii" gradually began to take precedence.
The Hawaiian Islands have a total land area of . Except for Midway, which is an unincorporated territory of the United States, these islands and islets are administered as Hawaii—the 50th state of the United States.
Major islands
The eight major islands of Hawaii (Windward Is
Document 4:::
The School of Textile and Clothing industries (ESITH) is a Moroccan engineering school, established in 1996, that focuses on textiles and clothing. It was created in collaboration with ENSAIT and ENSISA, as a result of a public private partnership designed to grow a key sector in the Moroccan economy. The partnership was successful and has been used as a model for other schools.
ESITH is the only engineering school in Morocco that provides a comprehensive program in textile engineering with internships for students at the Canadian Group CTT. ESITH offers three programs in industrial engineering: product management, supply chain and logistics, and textile and clothing.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which country is formed by a hotspot along the Mid-Atlantic Ridge?
A. Finland
B. Norway
C. Switzerland
D. Iceland
Answer:
|
|
sciq-8286
|
multiple_choice
|
Primary batteries are single-use batteries because they cannot be what?
|
[
"recharged",
"refreshed",
"UP ticked.",
"plugged"
] |
A
|
Relevant Documents:
Document 0:::
A battery is a source of electric power consisting of one or more electrochemical cells with external connections for powering electrical devices. When a battery is supplying power, its positive terminal is the cathode and its negative terminal is the anode. The terminal marked negative is the source of electrons that will flow through an external electric circuit to the positive terminal. When a battery is connected to an external electric load, a redox reaction converts high-energy reactants to lower-energy products, and the free-energy difference is delivered to the external circuit as electrical energy. Historically the term "battery" specifically referred to a device composed of multiple cells; however, the usage has evolved to include devices composed of a single cell.
Primary (single-use or "disposable") batteries are used once and discarded, as the electrode materials are irreversibly changed during discharge; a common example is the alkaline battery used for flashlights and a multitude of portable electronic devices. Secondary (rechargeable) batteries can be discharged and recharged multiple times using an applied electric current; the original composition of the electrodes can be restored by reverse current. Examples include the lead–acid batteries used in vehicles and lithium-ion batteries used for portable electronics such as laptops and mobile phones.
Batteries come in many shapes and sizes, from miniature cells used to power hearing aids and wristwatches to, at the largest extreme, huge battery banks the size of rooms that provide standby or emergency power for telephone exchanges and computer data centers. Batteries have much lower specific energy (energy per unit mass) than common fuels such as gasoline. In automobiles, this is somewhat offset by the higher efficiency of electric motors in converting electrical energy to mechanical work, compared to combustion engines.
History
Invention
Benjamin Franklin first used the term "battery" in 1749 wh
Document 1:::
A primary battery or primary cell is a battery (a galvanic cell) that is designed to be used once and discarded, and not recharged with electricity and reused like a secondary cell (rechargeable battery). In general, the electrochemical reaction occurring in the cell is not reversible, rendering the cell unrechargeable. As a primary cell is used, chemical reactions in the battery use up the chemicals that generate the power; when they are gone, the battery stops producing electricity. In contrast, in a secondary cell, the reaction can be reversed by running a current into the cell with a battery charger to recharge it, regenerating the chemical reactants. Primary cells are made in a range of standard sizes to power small household appliances such as flashlights and portable radios.
Primary batteries make up about 90% of the $50 billion battery market, but secondary batteries have been gaining market share. About 15 billion primary batteries are thrown away worldwide every year, virtually all ending up in landfills. Due to the toxic heavy metals and strong acids and alkalis they contain, batteries are hazardous waste. Most municipalities classify them as such and require separate disposal. The energy needed to manufacture a battery is about 50 times greater than the energy it contains. Due to their high pollutant content compared to their small energy content, the primary battery is considered a wasteful, environmentally unfriendly technology. Due mainly to increasing sales of wireless devices and cordless tools which cannot be economically powered by primary batteries and come with integral rechargeable batteries, the secondary battery industry has high growth and has slowly been replacing the primary battery in high end products.
Usage trend
In the early twenty-first century, primary cells began losing market share to secondary cells, as relative costs declined for the latter. Flashlight power demands were reduced by the switch from incandescent bulbs to light-em
Document 2:::
A rechargeable battery, storage battery, or secondary cell (formally a type of energy accumulator), is a type of electrical battery which can be charged, discharged into a load, and recharged many times, as opposed to a disposable or primary battery, which is supplied fully charged and discarded after use. It is composed of one or more electrochemical cells. The term "accumulator" is used as it accumulates and stores energy through a reversible electrochemical reaction. Rechargeable batteries are produced in many different shapes and sizes, ranging from button cells to megawatt systems connected to stabilize an electrical distribution network. Several different combinations of electrode materials and electrolytes are used, including lead–acid, zinc–air, nickel–cadmium (NiCd), nickel–metal hydride (NiMH), lithium-ion (Li-ion), lithium iron phosphate (LiFePO4), and lithium-ion polymer (Li-ion polymer).
Rechargeable batteries typically initially cost more than disposable batteries but have a much lower total cost of ownership and environmental impact, as they can be recharged inexpensively many times before they need replacing. Some rechargeable battery types are available in the same sizes and voltages as disposable types, and can be used interchangeably with them. Billions of dollars in research are being invested around the world for improving batteries and industry also focuses on building better batteries. Some characteristics of rechargeable battery are given below:
In rechargeable batteries, energy is induced by applying an external source to the chemical substances.
The chemical reaction that occurs in them is reversible.
Internal resistance is comparatively low.
They have a high self-discharge rate comparatively.
They have a bulky and complex design.
They have high resell value.
Applications
Devices which use rechargeable batteries include automobile starters, portable consumer devices, light vehicles (such as motorized wheelchairs, golf carts, e
Document 3:::
A battery room is a room that houses batteries for backup or uninterruptible power systems. The rooms are found in telecommunication central offices, and provide standby power for computing equipment in datacenters. Batteries provide direct current (DC) electricity, which may be used directly by some types of equipment, or which may be converted to alternating current (AC) by uninterruptible power supply (UPS) equipment. The batteries may provide power for minutes, hours or days, depending on each system's design, although they are most commonly activated during brief electric utility outages lasting only seconds.
Battery rooms were used to segregate the fumes and corrosive chemicals of wet cell batteries (often lead–acid) from the operating equipment, and for better control of temperature and ventilation. In 1890, the Western Union central telegraph office in New York City had 20,000 wet cells, mostly of the primary zinc-copper type.
Telecommunications
Telephone system central offices contain large battery systems to provide power for customer telephones, telephone switches, and related apparatus. Terrestrial microwave links, cellular telephone sites, fibre optic apparatus and satellite communications facilities also have standby battery systems, which may be large enough to occupy a separate room in the building. In normal operation power from the local commercial utility operates telecommunication equipment, and batteries provide power if the normal supply is interrupted. These can be sized for the expected full duration of an interruption, or may be required only to provide power while a standby generator set or other emergency power supply is started.
Batteries often used in battery rooms are the flooded lead-acid battery, the valve regulated lead-acid battery or the nickel–cadmium battery. Batteries are installed in groups. Several batteries are wired together in a series circuit forming a group providing DC electric power at 12, 24, 48 or 60 volts (or
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Primary batteries are single-use batteries because they cannot be what?
A. recharged
B. refreshed
C. UP ticked.
D. plugged
Answer:
|
|
sciq-3622
|
multiple_choice
|
The seasonal movements of animals from one area to another are referred to as?
|
[
"hybernation",
"migration",
"mitigation",
"echolocation"
] |
B
|
Relavent Documents:
Document 0:::
Animal migration is the relatively long-distance movement of individual animals, usually on a seasonal basis. It is the most common form of migration in ecology. It is found in all major animal groups, including birds, mammals, fish, reptiles, amphibians, insects, and crustaceans. The cause of migration may be local climate, local availability of food, the season of the year, or mating opportunities.
To be counted as a true migration, and not just a local dispersal or irruption, the movement of the animals should be an annual or seasonal occurrence, or a major habitat change as part of their life. An annual event could include Northern Hemisphere birds migrating south for the winter, or wildebeest migrating annually for seasonal grazing. A major habitat change could include young Atlantic salmon or sea lamprey leaving the river of their birth when they have reached a few inches in size. Some traditional forms of human migration fit this pattern.
Migrations can be studied using traditional identification tags such as bird rings, or tracked directly with electronic tracking devices.
Before animal migration was understood, folklore explanations were formulated for the appearance and disappearance of some species, such as that barnacle geese grew from goose barnacles.
Overview
Concepts
Migration can take very different forms in different species, and has a variety of causes.
As such, there is no simple accepted definition of migration. One of the most commonly used definitions, proposed by the zoologist J. S. Kennedy is
Migration encompasses four related concepts: persistent straight movement; relocation of an individual on a greater scale (in both space and time) than its normal daily activities; seasonal to-and-fro movement of a population between two areas; and movement leading to the redistribution of individuals within a population. Migration can be either obligate, meaning individuals must migrate, or facultative, meaning individuals can "choose" to migrate or not. Wi
Document 1:::
Migration, in ecology, is the large-scale movement of members of a species to a different environment. Migration is a natural behavior and component of the life cycle of many species of mobile organisms, not limited to animals, though animal migration is the best known type. Migration is often cyclical, frequently occurring on a seasonal basis, and in some cases on a daily basis. Species migrate to take advantage of more favorable conditions with respect to food availability, safety from predation, mating opportunity, or other environmental factors.
Migration is most commonly seen as animal migration, the physical movement by animals from one area to another. That includes bird, fish, and insect migration. However, plants can be said to migrate, as seed dispersal enables plants to grow in new areas, under environmental constraints such as temperature and rainfall, resulting in changes such as forest migration.
Mechanisms
While members of some species learn a migratory route on their first journey with older members of their group, other species genetically pass on information regarding their migratory paths. Despite many differences in organisms’ migratory cues and behaviors, “considerable similarities appear to exist in the cues involved in the different phases of migration.” Migratory organisms use environmental cues like photoperiod and weather conditions as well as internal cues like hormone levels to determine when it is time to begin a migration. Migratory species use senses such as magnetoreception or olfaction to orient themselves or navigate their route, respectively.
Factors
The factors that determine migration methods are variable due to the inconsistency of major seasonal changes and events. When an organism migrates from one location to another, its energy use and rate of migration are directly related to each other and to the safety of the organism. If an ecological barrier presents itself along a migrant's route, the migrant can either choose t
Document 2:::
Bird migration is the regular seasonal movement, often north and south, along a flyway, between breeding and wintering grounds. Many species of bird migrate. Migration carries high costs in predation and mortality, including from hunting by humans, and is driven primarily by the availability of food. It occurs mainly in the northern hemisphere, where birds are funnelled onto specific routes by natural barriers such as the Mediterranean Sea or the Caribbean Sea.
Migration of species such as storks, turtle doves, and swallows was recorded as many as 3,000 years ago by Ancient Greek authors, including Homer and Aristotle, and in the Book of Job. More recently, Johannes Leche began recording dates of arrivals of spring migrants in Finland in 1749, and modern scientific studies have used techniques including bird ringing and satellite tracking to trace migrants. Threats to migratory birds have grown with habitat destruction, especially of stopover and wintering sites, as well as structures such as power lines and wind farms.
The Arctic tern holds the long-distance migration record for birds, travelling between Arctic breeding grounds and the Antarctic each year. Some species of tubenoses (Procellariiformes) such as albatrosses circle the Earth, flying over the southern oceans, while others such as Manx shearwaters migrate between their northern breeding grounds and the southern ocean. Shorter migrations are common, while longer ones are not. The shorter migrations include altitudinal migrations on mountains such as the Andes and Himalayas.
The timing of migration seems to be controlled primarily by changes in day length. Migrating birds navigate using celestial cues from the Sun and stars, the Earth's magnetic field, and mental maps.
Historical views
In the Pacific, traditional land-finding techniques used by Micronesians and Polynesians suggest that bird migration was observed and interpreted for more than 3,000 years. In Samoan tradition, for example, Tagaloa sent
Document 3:::
Central place foraging (CPF) theory is an evolutionary ecology model for analyzing how an organism can maximize foraging rates while traveling through a patch (a discrete resource concentration), but maintains the key distinction of a forager traveling from a home base to a distant foraging location rather than simply passing through an area or travelling at random. CPF was initially developed to explain how red-winged blackbirds might maximize energy returns when traveling to and from a nest. The model has been further refined and used by anthropologists studying human behavioral ecology and archaeology.
Case studies
Central place foraging in non-human animals
Orians and Pearson (1979) found that red-winged blackbirds in eastern Washington State tend to capture a larger number of single species prey items per trip compared to the same species in Costa Rica, which brought back large, single insects. Foraging specialization by Costa Rican blackbirds was attributed to increased search and handling costs of nocturnal foraging, whereas birds in Eastern Washington forage diurnally for prey with lower search and handling costs. Studies with sea birds and seals have also found that load size tends to increase with foraging distance from the nest, as predicted by CPF. Other central place foragers, such as social insects, also show support for CPF theory. European honeybees increase their nectar load as travel time to nectar sites from a hive increases. Beavers have been found to preferentially collect larger diameter trees as distance from their lodge increases.
Archaeological case study: acorns and mussels in California
To apply the central place foraging model to ethnographic and experimental archaeological data driven by middle range theory, Bettinger et al. (1997) simplify the Barlow and Metcalf (1996) central place model to explore the archaeological implications of acorn (Quercus kelloggii) and mussel (Mytilus californianus) procurement and processing. This m
Document 4:::
Diurnality is a form of plant and animal behavior characterized by activity during daytime, with a period of sleeping or other inactivity at night. The common adjective used for daytime activity is "diurnal". The timing of activity by an animal depends on a variety of environmental factors such as the temperature, the ability to gather food by sight, the risk of predation, and the time of year. Diurnality is a cycle of activity within a 24-hour period; cyclic activities called circadian rhythms are endogenous cycles not dependent on external cues or environmental factors except for a zeitgeber. Animals active during twilight are crepuscular, those active during the night are nocturnal and animals active at sporadic times during both night and day are cathemeral.
Plants that open their flowers during the daytime are described as diurnal, while those that bloom during nighttime are nocturnal. The timing of flower opening is often related to the time at which preferred pollinators are foraging. For example, sunflowers open during the day to attract bees, whereas the night-blooming cereus opens at night to attract large sphinx moths.
In animals
Many types of animals are classified as being diurnal, meaning they are active during the daytime and inactive, or have periods of rest, during the nighttime. Commonly classified diurnal animals include mammals, birds, and reptiles. Most primates are diurnal, including humans. Scientifically classifying diurnality within animals can be a challenge, apart from the obvious increased activity levels during daylight hours.
Evolution of diurnality
Initially, most animals were diurnal, but adaptations that allowed some animals to become nocturnal contributed to the success of many, especially mammals. This evolutionary movement to nocturnality allowed them to better avoid predators and gain resources with less competition from other animals. This did come with some adaptations that mammals live with today. Visi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The seasonal movements of animals from one area to another are referred to as?
A. hibernation
B. migration
C. mitigation
D. echolocation
Answer:
|
|
sciq-5854
|
multiple_choice
|
What ancestors did caecilians evolve from?
|
[
"hominid",
"tetrapod",
"ornithopod",
"arthropod"
] |
B
|
Relevant Documents:
Document 0:::
Eocaecilia is an extinct genus of gymnophionan amphibian from the early Jurassic Kayenta Formation of Arizona, United States. One species is described, Eocaecilia micropodia.
Eocaecilia shared some characteristics with salamanders and the now extinct microsaur amphibians. It was of small size, about 15 cm in length. Unlike modern caecilians, which are legless, Eocaecilia possessed small legs, and while modern caecilians have poorly developed eyes and spend a lot of time underground, Eocaecilia's eyes were somewhat better developed. Although the precise ancestry of Eocaecilia (and of other caecilians by extension) is debated, it likely resided among the ancestral lepospondyl or temnospondyl amphibians of the Paleozoic and Mesozoic.
Document 1:::
Centroneuralia is a proposed clade of animals with bilateral symmetry as an embryo, consisting of the Chordata and Protostomia, united by the presence of a central nervous system. An alternative to the traditional protostome-deuterostome dichotomy, it has found weak support in several studies. Under this hypothesis, Centroneuralia would be sister to Xenambulacraria (Xenacoelomorpha + Ambulacraria) at the base of Bilateria.
Centroneuralia, as a proposed clade, originates in phylogenomics. More precisely, recent studies correlate support for Deuterostomia with simpler, site-homogeneous models, while more sophisticated and site-heterogeneous models recover Centroneuralia more often.
Phylogeny
Document 2:::
The (pan)arthropod head problem is a long-standing zoological dispute concerning the segmental composition of the heads of the various arthropod groups, and how they are evolutionarily related to each other. While the dispute has historically centered on the exact make-up of the insect head, it has been widened to include other living arthropods, such as chelicerates, myriapods, and crustaceans, as well as fossil forms, such as the many arthropods known from exceptionally preserved Cambrian faunas. While the topic has classically been based on insect embryology, in recent years a great deal of developmental molecular data has become available. Dozens of more or less distinct solutions to the problem, dating back to at least 1897, have been published, including several in the 2000s.
The arthropod head problem is popularly known as the endless dispute, the title of a famous paper on the subject by Jacob G. Rempel in 1975, referring to its seemingly intractable nature. Although some progress has been made since that time, the precise nature of especially the labrum and the pre-oral region of arthropods remain highly controversial.
Background
Some key events in the evolution of the arthropod body resulted from changes in certain Hox genes' DNA sequences. The trunks of arthropods comprise repeated segments, which are typically associated with various structures such as a pair of appendages, apodemes for muscle attachment, ganglia and (at least embryologically) coelomic cavities. While many arthropod segments are modified to a greater or lesser extent (for example, only three of the insect thorax and abdominal segments typically bear appendages), arthropodists widely assume that all of the segments were nearly identical in the ancestral state. However, while one can usually readily see the segmental organisation of the trunks of adult arthropods, that of the head is much less obvious. Arthropod heads are typically fused capsules that bear a variety of complex struc
Document 3:::
The Charles Schuchert Award is presented by the Paleontological Society to a person under 40 whose work reflects excellence and promise in the science of paleontology. The award was made in honor of Charles Schuchert (1858 – 1942), an American invertebrate paleontologist.
Awardees
Source: Paleontological Society
2021: Melanie Hopkins
2020: Lee Hsiang Liow
2019: Jingmai O'Connor
2018: Seth Finnegan
2017: Caroline Strömberg
2016: Alycia Stigall
2015: Jonathan Payne
2014: Shanan Peters
2013: Bridget Wade
2012: Gene Hunt
2011: C. Kevin Boyce
2010: Philip Donoghue
2009: Tom Olszewski
2008: Michael Engel
2007: John Alroy
2006: Shuhai Xiao
2005: Michal Kowalewski
2004: Peter J. Wagner
2003: Steven M. Holland
2002: Bruce S. Lieberman
2001: Loren E. Babcock
2000: Michael J. Foote
1999: Charles R. Marshall
1998: Paul L. Koch
1997: Mary L. Droser
1996: Douglas H. Erwin
1995: Susan M. Kidwell
1994: Christopher G. Maples
1993: Peter R. Crane
1992: Stephen J. Culver
1991: Donald R. Prothero
1990: William I. Ausich & Carlton E. Brett
1989: Simon Conway Morris
1988: David Jablonski
1987: Andrew H. Knoll
1986: John A. Barron
1985: Jennifer A. Kitchell
1984: Daniel C. Fisher
1983: J. John Sepkoski, Jr.
1982: James Sprinkle
1981: Philip D. Gingerich
1980: James Doyle
1979: R. Niles Eldredge
1978: Robert L. Carroll
1977: Steven M. Stanley
1976: Thomas J. M. Schopf
1975: Stephen Jay Gould
1974: James W. Schopf
1973: David M. Raup
See also
List of paleontology awards
Document 4:::
The history of life on Earth seems to show a clear trend; for example, it seems intuitive that there is a trend towards increasing complexity in living organisms. More recently evolved organisms, such as mammals, appear to be much more complex than organisms, such as bacteria, which have existed for a much longer period of time. However, there are theoretical and empirical problems with this claim. From a theoretical perspective, it appears that there is no reason to expect evolution to result in any largest-scale trends, although small-scale trends, limited in time and space, are expected (Gould, 1997). From an empirical perspective, it is difficult to measure complexity and, when it has been measured, the evidence does not support a largest-scale trend (McShea, 1996).
History
Many of the founding figures of evolution supported the idea of evolutionary progress, which has fallen from favour, though the work of Francisco J. Ayala and Michael Ruse suggests it is still influential.
Hypothetical largest-scale trends
McShea (1998) discusses eight features of organisms that might indicate largest-scale trends in evolution: entropy, energy intensiveness, evolutionary versatility, developmental depth, structural depth, adaptedness, size, complexity. He calls these "live hypotheses", meaning that trends in these features are currently being considered by evolutionary biologists. McShea observes that the most popular hypothesis, among scientists, is that there is a largest-scale trend towards increasing complexity.
Evolutionary theorists agree that there are local trends in evolution, such as increasing brain size in hominids, but these directional changes do not persist indefinitely, and trends in opposite directions also occur (Gould, 1997). Evolution causes organisms to adapt to their local environment; when the environment changes, the direction of the trend may change. The question of whether there is evolutionary progress is better formulated as the question of whether
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What ancestors did caecilians evolve from?
A. hominid
B. tetrapod
C. ornithopod
D. arthropod
Answer:
|
|
sciq-6076
|
multiple_choice
|
What is the term for rain consisting of water with a pH below 5?
|
[
"carbon rain",
"produce rain",
"acid rain",
"Hot Rain"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Surface runoff (also known as overland flow or terrestrial runoff) is the unconfined flow of water over the ground surface, in contrast to channel runoff (or stream flow). It occurs when excess rainwater, stormwater, meltwater, or other sources, can no longer sufficiently rapidly infiltrate in the soil. This can occur when the soil is saturated by water to its full capacity, and the rain arrives more quickly than the soil can absorb it. Surface runoff often occurs because impervious areas (such as roofs and pavement) do not allow water to soak into the ground. Furthermore, runoff can occur either through natural or human-made processes.
Surface runoff is a major component of the water cycle. It is the primary agent of soil erosion by water. The land area producing runoff that drains to a common point is called a drainage basin.
Runoff that occurs on the ground surface before reaching a channel can be a nonpoint source of pollution, as it can carry human-made contaminants or natural forms of pollution (such as rotting leaves). Human-made contaminants in runoff include petroleum, pesticides, fertilizers and others. Much agricultural pollution is exacerbated by surface runoff, leading to a number of downstream impacts, including nutrient pollution that causes eutrophication.
In addition to causing water erosion and pollution, surface runoff in urban areas is a primary cause of urban flooding, which can result in property damage, damp and mold in basements, and street flooding.
Generation
Surface runoff is defined as precipitation (rain, snow, sleet, or hail) that reaches a surface stream without ever passing below the soil surface. It is distinct from direct runoff, which is runoff that reaches surface streams immediately after rainfall or melting snowfall and excludes runoff generated by the melting of snowpack or glaciers.
Snow and glacier melt occur only in areas cold enough for these to form permanently. Typically snowmelt will peak in the spring and glacie
Document 2:::
Further Mathematics is the title given to a number of advanced secondary mathematics courses. The term "Higher and Further Mathematics", and the term "Advanced Level Mathematics", may also refer to any of several advanced mathematics courses at many institutions.
In the United Kingdom, Further Mathematics describes a course studied in addition to the standard mathematics AS-Level and A-Level courses. In the state of Victoria in Australia, it describes a course delivered as part of the Victorian Certificate of Education (see § Australia (Victoria) for a more detailed explanation). Globally, it describes a course studied in addition to GCE AS-Level and A-Level Mathematics, or one which is delivered as part of the International Baccalaureate Diploma.
In other words, Further Mathematics can also be referred to as part of advanced mathematics, or as advanced-level mathematics.
United Kingdom
Background
A qualification in Further Mathematics involves studying both pure and applied modules. Whilst the pure modules (formerly known as Pure 4–6 or Core 4–6, now known as Further Pure 1–3, where 4 exists for the AQA board) build on knowledge from the core mathematics modules, the applied modules may start from first principles.
The structure of the qualification varies between exam boards.
With regard to Mathematics degrees, most universities do not require Further Mathematics, and may incorporate foundation math modules or offer "catch-up" classes covering any additional content. Exceptions are the University of Warwick, the University of Cambridge which requires Further Mathematics to at least AS level; University College London requires or recommends an A2 in Further Maths for its maths courses; Imperial College requires an A in A level Further Maths, while other universities may recommend it or may promise lower offers in return. Some schools and colleges may not offer Further mathematics, but online resources are available
Although the subject has about 60% of its cohort obtainin
Document 3:::
In hydrology, run-on refers to surface runoff from an external area that flows on to an area of interest. A portion of run-on can infiltrate once it reaches the area of interest. Run-on is common in arid and semi-arid areas with patchy vegetation cover and short but intense thunderstorms. In these environments, surface runoff is usually generated by a failure of rainfall to infiltrate into the ground quickly enough (this runoff is termed infiltration excess overland flow). This is more likely to occur on bare soil, with low infiltration capacity. As runoff flows downslope, it may run-on to ground with higher infiltration capacity (such as beneath vegetation) and then infiltrate.
Run-on is an important process in the hydrological and ecohydrological behaviour of semi-arid ecosystems. Tiger bush is an example of a vegetation community that develops a patterned structure in response to, in part, the generation of runoff and run-on.
See also
Stormwater
Document 4:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
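The defining closure property of a knowledge space (the family of feasible knowledge states is closed under union and contains the empty state and the full domain) can be checked directly. A minimal Python sketch; the function name and the toy domain of skills are invented for illustration:

```python
from itertools import combinations

def is_knowledge_space(states):
    """Check the defining property of a knowledge space: the family of
    feasible knowledge states must contain the empty state and the full
    domain, and be closed under union."""
    fam = {frozenset(s) for s in states}
    domain = frozenset().union(*fam) if fam else frozenset()
    if frozenset() not in fam or domain not in fam:
        return False
    # Union-closure: combining any two feasible states is again feasible.
    return all(a | b in fam for a, b in combinations(fam, 2))

# Toy domain {a, b, c} where skill "b" has "a" as a prerequisite,
# so no feasible state contains "b" without "a".
states = [set(), {"a"}, {"c"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
print(is_knowledge_space(states))  # True
```

Dropping, say, the full state {"a", "b", "c"} would break union-closure ({"a", "b"} ∪ {"c"} would no longer be feasible), so the check would return False.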
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for rain consisting of water with a pH below 5?
A. carbon rain
B. produce rain
C. acid rain
D. Hot Rain
Answer:
|
|
ai2_arc-754
|
multiple_choice
|
What particle always has a mass of one atomic mass unit (amu) and no charge?
|
[
"a neutron",
"a proton",
"an electron",
"an atom"
] |
A
|
Relevant Documents:
Document 0:::
In particle physics, the electron mass (symbol: m_e) is the mass of a stationary electron, also known as the invariant mass of the electron. It is one of the fundamental constants of physics. It has a value of about 9.109×10⁻³¹ kg, or about 5.486×10⁻⁴ Da, which has an energy-equivalent of about 8.187×10⁻¹⁴ J, or about 0.511 MeV.
Terminology
The term "rest mass" is sometimes used because in special relativity the mass of an object can be said to increase in a frame of reference that is moving relative to that object (or if the object is moving in a given frame of reference). Most practical measurements are carried out on moving electrons. If the electron is moving at a relativistic velocity, any measurement must use the correct expression for mass. Such correction becomes substantial for electrons accelerated by voltages of over .
For example, the relativistic expression for the total energy, E, of an electron moving at speed v is

E = γ m_e c²

where
c is the speed of light;
γ is the Lorentz factor, γ = 1 / √(1 − v²/c²);
m_e is the "rest mass", or more simply just the "mass" of the electron.
This quantity is frame invariant and velocity independent. However, some texts group the Lorentz factor with the mass factor to define a new quantity called the relativistic mass, m_rel = γ m_e.
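The relativistic energy expression can be evaluated numerically. A small sketch; the constants are the standard CODATA/SI values, and the function names are ours:

```python
import math

M_E = 9.1093837015e-31   # electron rest mass, kg (CODATA)
C = 299792458.0          # speed of light, m/s (exact by SI definition)

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2 / c^2); diverges as v approaches c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def total_energy(v):
    """Relativistic total energy E = gamma * m_e * c^2, in joules."""
    return lorentz_factor(v) * M_E * C ** 2

print(lorentz_factor(0.6 * C))   # ≈ 1.25
print(total_energy(0.0))         # rest energy m_e c^2 ≈ 8.187e-14 J (≈ 511 keV)
```

At v = 0 the Lorentz factor is 1 and the total energy reduces to the rest energy m_e c², matching the value quoted above.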
Determination
Since the electron mass determines a number of observed effects in atomic physics, there are potentially many ways to determine its mass from an experiment, if the values of other physical constants are already considered known.
Historically, the mass of the electron was determined directly from combining two measurements. The mass-to-charge ratio of the electron was first estimated by Arthur Schuster in 1890 by measuring the deflection of "cathode rays" due to a known magnetic field in a cathode ray tube. Seven years later J. J. Thomson showed that cathode rays consist of streams of particles, to be called electrons, and made more precise measurements of their mass-to-charge ratio again using a cathode ray tube.
The second measurement was of the charge of the electron. T
Document 1:::
The elementary charge, usually denoted by e, is a fundamental physical constant, defined as the electric charge carried by a single proton or, equivalently, the magnitude of the negative electric charge carried by a single electron, which has charge −1 e.
In the SI system of units, the value of the elementary charge is exactly defined as e = 1.602176634×10⁻¹⁹ coulombs, or 160.2176634 zeptocoulombs (zC). Since the 2019 redefinition of SI base units, the seven SI base units are defined by seven fundamental physical constants, of which the elementary charge is one.
In the centimetre–gram–second system of units (CGS), the corresponding quantity is about 4.803×10⁻¹⁰ statcoulombs.
Robert A. Millikan and Harvey Fletcher's oil drop experiment first directly measured the magnitude of the elementary charge in 1909, differing from the modern accepted value by just 0.6%. Under assumptions of the then-disputed atomic theory, the elementary charge had also been indirectly inferred to ~3% accuracy from blackbody spectra by Max Planck in 1901 and (through the Faraday constant) at order-of-magnitude accuracy by Johann Loschmidt's measurement of the Avogadro number in 1865.
As a unit
In some natural unit systems, such as the system of atomic units, e functions as the unit of electric charge. The use of elementary charge as a unit was promoted by George Johnstone Stoney in 1874 for the first system of natural units, called Stoney units. Later, he proposed the name electron for this unit. At the time, the particle we now call the electron was not yet discovered and the difference between the particle electron and the unit of charge electron was still blurred. Later, the name electron was assigned to the particle and the unit of charge e lost its name. However, the unit of energy electronvolt (eV) is a remnant of the fact that the elementary charge was once called electron.
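Since the electronvolt is defined through the elementary charge (the energy gained by one elementary charge crossing a potential difference of one volt), the eV↔joule conversion is a single multiplication. A minimal sketch using the exact SI value; the helper names are ours:

```python
E_CHARGE = 1.602176634e-19   # elementary charge in coulombs, exact by SI definition

def ev_to_joule(energy_ev):
    """1 eV = the energy gained by one elementary charge across 1 volt."""
    return energy_ev * E_CHARGE

def joule_to_ev(energy_j):
    """Inverse conversion, joules to electronvolts."""
    return energy_j / E_CHARGE

print(ev_to_joule(1.0))   # 1.602176634e-19
```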
In other natural unit systems, the unit of charge is defined as √(ε₀ħc), with the result that e = √(4πα) ≈ 0.3028 in those units, where α is the fine-structure constant, c is the speed of light, ħ is
Document 2:::
The electric dipole moment is a measure of the separation of positive and negative electrical charges within a system, that is, a measure of the system's overall polarity. The SI unit for electric dipole moment is the coulomb-meter (C⋅m). The debye (D) is another unit of measurement used in atomic physics and chemistry.
Theoretically, an electric dipole is defined by the first-order term of the multipole expansion; it consists of two equal and opposite charges that are infinitesimally close together, although real dipoles have separated charge.
Elementary definition
Often in physics the dimensions of a massive object can be ignored and it can be treated as a pointlike object, i.e. a point particle. Point particles with electric charge are referred to as point charges. Two point charges, one with charge +q and the other with charge −q separated by a distance d, constitute an electric dipole (a simple case of an electric multipole). For this case, the electric dipole moment has a magnitude p = qd and is directed from the negative charge to the positive one. Some authors may split d in half and use s = d/2, since this quantity is the distance between either charge and the center of the dipole, leading to a factor of two in the definition.
A stronger mathematical definition is to use vector algebra, since a quantity with magnitude and direction, like the dipole moment of two point charges, can be expressed in vector form p = q d, where d is the displacement vector pointing from the negative charge to the positive charge. The electric dipole moment vector p also points from the negative charge to the positive charge. With this definition the dipole direction tends to align itself with an external electric field (and note that the electric flux lines produced by the charges of the dipole itself, which point from the positive charge to the negative charge, then tend to oppose the flux lines of the external field). Note that this sign convention is used in physics, while the opposite sign convention for th
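The vector definition above, p = Σᵢ qᵢ rᵢ, can be sketched for a small collection of point charges. The charge and separation values below are hypothetical, chosen only to illustrate that |p| = qd for an ideal two-charge dipole:

```python
import numpy as np

def dipole_moment(charges, positions):
    """Electric dipole moment p = sum_i q_i * r_i for point charges.
    For a charge-neutral system the result is independent of the origin."""
    return sum(q * np.asarray(r, dtype=float) for q, r in zip(charges, positions))

# +q and -q separated by d along x; expected |p| = q * d,
# pointing from the negative toward the positive charge.
q, d = 1.602e-19, 1.0e-10   # hypothetical: one elementary charge, 1 angstrom
p = dipole_moment([q, -q], [(d / 2, 0.0, 0.0), (-d / 2, 0.0, 0.0)])
print(np.linalg.norm(p))    # ≈ 1.602e-29 C·m (about 4.8 debye)
```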
Document 3:::
The subatomic scale is the domain of physical size that encompasses objects smaller than an atom. It is the scale at which the atomic constituents, such as the nucleus containing protons and neutrons, and the electrons in their orbitals, become apparent.
The subatomic scale includes the many thousands of times smaller subnuclear scale, which is the scale of physical size at which constituents of the protons and neutrons - particularly quarks - become apparent.
See also
Astronomical scale the opposite end of the spectrum
Subatomic particles
Document 4:::
Stable massive particles (SMPs) are hypothetical particles that are long-lived and have appreciable mass. The precise definition varies depending on the different experimental or observational searches. SMPs may be defined as being at least as massive as electrons, and not decaying during its passage through a detector. They can be neutral or charged or carry a fractional charge, and interact with matter through gravitational force, strong force, weak force, electromagnetic force or any unknown force.
If new SMPs are ever discovered, several questions related to the origin and constituents of dark matter, and to the unification of the four fundamental forces, may be answered.
Collider experiments
Heavy, exotic particles interacting with matter and which can be directly detected through collider experiments are termed as stable massive particles or SMPs. More specifically a SMP is defined to be a particle that can pass through a detector without decaying and can undergo electromagnetic or strong interaction with matter. Searches for SMPs have been carried out across a spectrum of collision experiments such as lepton–hadron, hadron–hadron, and electron–positron. Although none of these experiments have detected an SMP, they have put substantial constraints on the nature of SMPs.
ATLAS Experiment
During the proton–proton collisions with center of mass energy equal to 13 TeV at the ATLAS experiment, a search for charged SMPs was carried out. In this case SMPs were defined as particles with mass significantly more than that of standard model particles, sufficient lifetime to reach the ATLAS hadronic calorimeter and with measurable electric charge while it passes through the tracking chambers.
MoEDAL experiment
The MoEDAL experiment search for, among others, highly ionizing SMPs and pseudo-SMPs.
Non-collider experiments
In the case of the non-collider experiments, SMPs are defined as sufficiently long-lived particles which exist either as relics of the big bang sin
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What particle always has a mass of one atomic mass unit (amu) and no charge?
A. a neutron
B. a proton
C. an electron
D. an atom
Answer:
|
|
sciq-8636
|
multiple_choice
|
Cycloalkanes are hydrocarbons whose molecules are closed rings rather than straight or branched chains. a cyclic hydrocarbon is a hydrocarbon with a ring of these?
|
[
"hydrogen atoms",
"barium atoms",
"oxygen atoms",
"carbon atoms"
] |
D
|
Relevant Documents:
Document 0:::
A cyclic compound (or ring compound) is a term for a compound in the field of chemistry in which one or more series of atoms in the compound is connected to form a ring. Rings may vary in size from three to many atoms, and include examples where all the atoms are carbon (i.e., are carbocycles), none of the atoms are carbon (inorganic cyclic compounds), or where both carbon and non-carbon atoms are present (heterocyclic compounds with rings containing both carbon and non-carbon). Depending on the ring size, the bond order of the individual links between ring atoms, and their arrangements within the rings, carbocyclic and heterocyclic compounds may be aromatic or non-aromatic; in the latter case, they may vary from being fully saturated to having varying numbers of multiple bonds between the ring atoms. Because of the tremendous diversity allowed, in combination, by the valences of common atoms and their ability to form rings, the number of possible cyclic structures, even of small size (e.g., < 17 total atoms) numbers in the many billions.
Adding to their complexity and number, closing of atoms into rings may lock particular atoms with distinct substitution (by functional groups) such that stereochemistry and chirality of the compound results, including some manifestations that are unique to rings (e.g., configurational isomers). As well, depending on ring size, the three-dimensional shapes of particular cyclic structures – typically rings of five atoms and larger – can vary and interconvert such that conformational isomerism is displayed. Indeed, the development of this important chemical concept arose historically in reference to cyclic compounds. Finally, cyclic compounds, because of the unique shapes, reactivities, properties, and bioactivities that they engender, are the majority of all molecules involved in the biochemistry, structure, and function of living organisms, and in man-made molecules such as drugs, pesticides, etc.
Structure and classification
A cy
Document 1:::
Cyclopropyl cyanide is an organic compound consisting of a nitrile group as a substituent on a cyclopropane ring. It is the smallest cyclic compound containing a nitrile.
Structure
The structure of cyclopropyl cyanide has been determined by a variety of experiments, including microwave spectroscopy, rotational spectroscopy and photodissociation. In 1958, cyclopropyl cyanide was first studied for its rotational spectra, by Friend and Dailey. An additional experiment involving cyclopropyl cyanide was the determination of the molecular dipole moment through spectroscopy experiments, by Carvalho in 1967.
Production
Cyclopropyl cyanide is prepared by the reaction of 4-chlorobutyronitrile with a strong base, such as sodium amide in liquid ammonia.
Reactions
Cyclopropyl cyanide, when heated to 660–760 K under a pressure of 2–89 torr, isomerizes to cis- and trans-crotonitrile and allyl cyanide, with some methacrylonitrile also present. This is a homogeneous isomerization reaction with a first-order rate. The result is due to a biradical mechanism, which involves the formation of carbon radicals as the three-carbon ring opens. The radicals then react to yield carbon–carbon double bonds.
Document 2:::
In organic chemistry, a Platonic hydrocarbon is a hydrocarbon (molecule) whose structure matches one of the five Platonic solids, with carbon atoms replacing its vertices, carbon–carbon bonds replacing its edges, and hydrogen atoms as needed.
Not all Platonic solids have molecular hydrocarbon counterparts; those that do are the tetrahedron (tetrahedrane), the cube (cubane), and the dodecahedron (dodecahedrane).
Tetrahedrane
Tetrahedrane (C4H4) is a hypothetical compound. It has not yet been synthesized without substituents, but it is predicted to be kinetically stable in spite of its angle strain. Some stable derivatives, including tetra(tert-butyl)tetrahedrane (a hydrocarbon) and tetra(trimethylsilyl)tetrahedrane, have been produced.
Cubane
Cubane (C8H8) has been synthesized. Although it has high angle strain, cubane is kinetically stable, due to a lack of readily available decomposition paths.
Octahedrane
Angle strain would make an octahedron highly unstable due to inverted tetrahedral geometry at each vertex. There would also be no hydrogen atoms because four edges meet at each corner; thus, the hypothetical octahedrane molecule would be an allotrope of elemental carbon, C6, and not a hydrocarbon. The existence of octahedrane cannot be ruled out completely, although calculations have shown that it is unlikely.
Dodecahedrane
Dodecahedrane (C20H20) was first synthesized in 1982, and has minimal angle strain; the tetrahedral angle is 109.5° and the dodecahedral angle is 108°, only a slight discrepancy.
Icosahedrane
The tetravalency (4-connectedness) of carbon excludes an icosahedron because 5 edges meet at each vertex. True pentavalent carbon is unlikely; methanium, nominally CH5+, usually exists as CH3+·H2. The hypothetical icosahedral cluster lacks hydrogen, so it is not a hydrocarbon; it is also an ion.
Both icosahedral and octahedral structures have been observed in boron compounds such as the dodecaborate ion and some of the carbon-containing carboranes.
Other polyhedr
Document 3:::
A bicyclic molecule () is a molecule that features two joined rings. Bicyclic structures occur widely, for example in many biologically important molecules like α-thujene and camphor. A bicyclic compound can be carbocyclic (all of the ring atoms are carbons), or heterocyclic (the rings' atoms consist of at least two elements), like DABCO. Moreover, the two rings can both be aliphatic (e.g. decalin and norbornane), or can be aromatic (e.g. naphthalene), or a combination of aliphatic and aromatic (e.g. tetralin).
Three modes of ring junction are possible for a bicyclic compound:
In spiro compounds, the two rings share only one single atom, the spiro atom, which is usually a quaternary carbon. An example of a spirocyclic compound is the photochromic switch spiropyran.
In fused/condensed bicyclic compounds, two rings share two adjacent atoms. In other words, the rings share one covalent bond, i.e. the bridgehead atoms are directly connected (e.g. α-thujene and decalin).
In bridged bicyclic compounds, the two rings share three or more atoms, separating the two bridgehead atoms by a bridge containing at least one atom. For example, norbornane, also known as bicyclo[2.2.1]heptane, can be viewed as a pair of cyclopentane rings each sharing three of their five carbon atoms. Camphor is a more elaborate example.
Nomenclature
Bicyclic molecules are described by IUPAC nomenclature. The root of the compound name depends on the total number of atoms in all rings together, possibly followed by a suffix denoting the functional group with the highest priority. Numbering of the carbon chain always begins at one bridgehead atom (where the rings meet) and follows the carbon chain along the longest path, to the next bridgehead atom. Then numbering is continued along the second longest path and so on. Fused and bridged bicyclic compounds get the prefix bicyclo, whereas spirocyclic compounds get the prefix spiro. In between the prefix and the suffix, a pair of brackets with numerals
Document 4:::
In chemistry, an open-chain compound (also spelled as open chain compound) or acyclic compound (Greek prefix "α", without and "κύκλος", cycle) is a compound with a linear structure, rather than a cyclic one.
An open-chain compound having no side groups is called a straight-chain compound (also spelled as straight chain compound). Many of the simple molecules of organic chemistry, such as the alkanes and alkenes, have both linear and ring isomers, that is, both acyclic and cyclic. For those with 4 or more carbons, the linear forms can have straight-chain or branched-chain isomers. The lowercase prefix n- denotes the straight-chain isomer; for example, n-butane is straight-chain butane, whereas i-butane is isobutane. Cycloalkanes are isomers of alkenes, not of alkanes, because the ring's closure involves a C-C bond. Having no rings (aromatic or otherwise), all open-chain compounds are aliphatic.
Typically in biochemistry, some isomers are more prevalent than others. For example, in living organisms, the open-chain isomer of glucose usually exists only transiently, in small amounts; D-glucose is the usual isomer; and L-glucose is rare.
Straight-chain molecules are often not literally straight, in the sense that their bond angles are often not 180°, but the name reflects that they are schematically straight. For example, the straight-chain alkanes are wavy or "puckered", as the models below show.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Cycloalkanes are hydrocarbons whose molecules are closed rings rather than straight or branched chains. a cyclic hydrocarbon is a hydrocarbon with a ring of these?
A. hydrogen atoms
B. barium atoms
C. oxygen atoms
D. carbon atoms
Answer:
|
|
ai2_arc-341
|
multiple_choice
|
In the 1500s, Nicolaus Copernicus proposed a new theory on the heliocentric structure of the solar system. Which of these statements best describes this new theory?
|
[
"Earth is at the center of the solar system.",
"There are eight planets in the solar system.",
"The Sun is at the center of the solar system.",
"Moons have circular orbits in the solar system."
] |
C
|
Relevant Documents:
Document 0:::
The Copernican Revolution is a 1957 book by the philosopher Thomas Kuhn, in which the author provides an analysis of the Copernican Revolution, documenting the pre-Ptolemaic understanding through the Ptolemaic system and its variants until the eventual acceptance of the Keplerian system.
Kuhn argues that the Ptolemaic system provided broader appeal than a simple astronomical system but also became intertwined in broader philosophical and theological beliefs. Kuhn argues that this broader appeal made it more difficult for other systems to be proposed.
Summary
At the end of the book, Kuhn summarizes the achievements of Copernicus and Newton, while comparing the incompatibility of Newtonian physics with Aristotelian concepts that preceded the then new physics. Kuhn also noted that discoveries, such as that produced by Newton, were not in agreement with the prevailing worldview during his lifetime.
Document 1:::
A planetary system is a set of gravitationally bound non-stellar objects in or out of orbit around a star or star system. Generally speaking, systems with one or more planets constitute a planetary system, although such systems may also consist of bodies such as dwarf planets, asteroids, natural satellites, meteoroids, comets, planetesimals and circumstellar disks. The Sun together with the planetary system revolving around it, including Earth, forms the Solar System. The term exoplanetary system is sometimes used in reference to other planetary systems.
Debris disks are also known to be common, though other objects are more difficult to observe.
Of particular interest to astrobiology is the habitable zone of planetary systems where planets could have surface liquid water, and thus the capacity to support Earth-like life.
History
Heliocentrism
Historically, heliocentrism (the doctrine that the Sun is at the centre of the universe) was opposed to geocentrism (placing Earth at the centre of the universe).
The notion of a heliocentric Solar System with the Sun at its centre is possibly first suggested in the Vedic literature of ancient India, which often refer to the Sun as the "centre of spheres". Some interpret Aryabhatta's writings in Āryabhaṭīya as implicitly heliocentric.
The idea was first proposed in Western philosophy and Greek astronomy as early as the 3rd century BC by Aristarchus of Samos, but received no support from most other ancient astronomers.
Discovery of the Solar System
De revolutionibus orbium coelestium by Nicolaus Copernicus, published in 1543, presented the first mathematically predictive heliocentric model of a planetary system. 17th-century successors Galileo Galilei, Johannes Kepler, and Sir Isaac Newton developed an understanding of physics which led to the gradual acceptance of the idea that the Earth moves around the Sun and that the planets are governed by the same physical laws that governed Earth.
Speculation on extrasolar pla
Document 2:::
Astronomia nova (English: New Astronomy, full title in original Latin: ) is a book, published in 1609, that contains the results of the astronomer Johannes Kepler's ten-year-long investigation of the motion of Mars.
One of the most significant books in the history of astronomy, the Astronomia nova provided strong arguments for heliocentrism and contributed valuable insight into the movement of the planets. This included the first mention of the planets' elliptical paths and the change of their movement to the movement of free floating bodies as opposed to objects on rotating spheres. It is recognized as one of the most important works of the Scientific Revolution.
Background
Prior to Kepler, Nicolaus Copernicus proposed in 1543 that the Earth and other planets orbit the Sun. The Copernican model of the Solar System was regarded as a device to explain the observed positions of the planets rather than a physical description.
Kepler sought for and proposed physical causes for planetary motion. His work is primarily based on the research of his mentor, Tycho Brahe. The two, though close in their work, had a tumultuous relationship. Regardless, in 1601 on his deathbed, Brahe asked Kepler to make sure that he did not "die in vain," and to continue the development of his model of the Solar System. Kepler would instead write the Astronomia nova, in which he rejects the Tychonic system, as well as the Ptolemaic system and the Copernican system. Some scholars have speculated that Kepler's dislike for Brahe may have had a hand in his rejection of the Tychonic system and formation of a new one.
By 1602, Kepler set to work on determining the orbit pattern of Mars, keeping David Fabricius informed of his progress. He suggested the possibility of an oval orbit to Fabricius by early 1604, though was not believed. Later in the year, Kepler wrote back with his discovery of Mars's elliptical orbit. The manuscript for Astronomia nova was completed by September 1607, and was in pr
Document 3:::
The Copernican Question: Prognostication, Skepticism, and Celestial Order is a 704-page book written by Robert S. Westman and published by University of California Press (Berkeley, Los Angeles, London) in 2011 and in 2020 (paperback). The book is a broad historical overview of Europe's astronomical and astrological culture leading to Copernicus’s De revolutionibus and follows the scholarly debates that took place roughly over three generations after Copernicus.
Summary
In 1543, Nicolaus Copernicus publicly defended his hypothesis that the earth is a planet and the sun a body resting near the center of a finite universe. This view challenged a long-held, widespread consensus about the order of the planets. But why did Copernicus make this bold proposal? And why did it matter? The Copernican Question revisits this pivotal moment in the history of science and puts political and cultural developments at the center rather than the periphery of the story. When Copernicus first hit on his theory around 1510, European society at all social levels was consumed with chronic warfare, the syphilis pandemic and recurrence of the bubonic plague, and, soon thereafter, Martin Luther’s break with the Catholic church. Apocalyptic prophecies about the imminent end of the world were rife; the relatively new technology of print was churning out reams of alarming astrological prognostications even as astrology itself came under serious attack in July 1496 from the Renaissance Florentine polymath Giovanni Pico della Mirandola (1463-1494). Copernicus knew Pico’s work, possibly as early as the year of its publication in Bologna, the city in which he lived with the astrological prognosticator and astronomer, Domenico Maria di Novara (1454-1504). Against Pico’s multi-pronged critique, Copernicus sought to protect the credibility of astrology by reforming the astronomical foundations on which astrology rested. But, his new hypothesis came at the cost of introducing new uncertainties and enge
Document 4:::
Theoretical astronomy is the use of analytical and computational models based on principles from physics and chemistry to describe and explain astronomical objects and astronomical phenomena. Theorists in astronomy endeavor to create theoretical models and from the results predict observational consequences of those models. The observation of a phenomenon predicted by a model allows astronomers to select between several alternate or conflicting models as the one best able to describe the phenomena.
Ptolemy's Almagest, although a brilliant treatise on theoretical astronomy combined with a practical handbook for computation, nevertheless includes compromises to reconcile discordant observations with a geocentric model. Modern theoretical astronomy is usually assumed to have begun with the work of Johannes Kepler (1571–1630), particularly with Kepler's laws. The history of the descriptive and theoretical aspects of the Solar System mostly spans from the late sixteenth century to the end of the nineteenth century.
Theoretical astronomy is built on the work of observational astronomy, astrometry, astrochemistry, and astrophysics. Astronomy was early to adopt computational techniques to model stellar and galactic formation and celestial mechanics. From the point of view of theoretical astronomy, not only must the mathematical expression be reasonably accurate but it should preferably exist in a form which is amenable to further mathematical analysis when used in specific problems. Most of theoretical astronomy uses Newtonian theory of gravitation, considering that the effects of general relativity are weak for most celestial objects. Theoretical astronomy does not attempt to predict the position, size and temperature of every object in the universe, but by and large has concentrated upon analyzing the apparently complex but periodic motions of celestial objects.
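As an illustration of the computational techniques for celestial mechanics mentioned above, here is a minimal sketch of a leapfrog integrator for a test particle in a Newtonian central potential. The function name and the normalization GM = 1 are illustrative assumptions, not from the source.

```python
# Minimal kick-drift-kick (leapfrog) integration of a test particle in a
# Newtonian central potential, with units normalized so that GM = 1 and the
# central mass fixed at the origin. Illustrative sketch only.
import math

def leapfrog_orbit(x, y, vx, vy, dt=1e-3, steps=10_000):
    """Advance the particle under the acceleration a = -r / |r|^3."""
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        vx -= x / r3 * dt / 2  # half kick
        vy -= y / r3 * dt / 2
        x += vx * dt           # full drift
        y += vy * dt
        r3 = (x * x + y * y) ** 1.5
        vx -= x / r3 * dt / 2  # half kick
        vy -= y / r3 * dt / 2
    return x, y, vx, vy

# A circular orbit (r = 1, speed = 1) should keep its radius nearly constant,
# since leapfrog is symplectic and conserves energy well over long runs.
x, y, vx, vy = leapfrog_orbit(1.0, 0.0, 0.0, 1.0)
print(math.hypot(x, y))  # close to 1.0
```

The leapfrog scheme is the usual choice here because, unlike naive Euler integration, it does not let the orbit's energy drift secularly.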
Integrating astronomy and physics
"Contrary to the belief generally held by laboratory physicists, astrono
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In the 1500s, Nicolaus Copernicus proposed a new theory on the heliocentric structure of the solar system. Which of these statements best describes this new theory?
A. Earth is at the center of the solar system.
B. There are eight planets in the solar system.
C. The Sun is at the center of the solar system.
D. Moons have circular orbits in the solar system.
Answer:
|
|
scienceQA-9363
|
multiple_choice
|
What do these two changes have in common?
breaking a ceramic plate
beating an egg
|
[
"Both are caused by cooling.",
"Both are caused by heating.",
"Both are chemical changes.",
"Both are only physical changes."
] |
D
|
Step 1: Think about each change.
Breaking a ceramic plate is a physical change. The plate gets broken into pieces. But each piece is still made of the same type of matter.
Beating an egg is a physical change. Beating an egg mixes together the egg white, egg yolk, and some air. But mixing them together does not form a different type of matter.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are physical changes. No new matter is created.
Both are chemical changes.
Both changes are physical changes. They are not chemical changes.
Both are caused by heating.
Neither change is caused by heating.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
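The intended answer ("decreases") follows from the adiabatic relation for an ideal gas; a short sketch of the reasoning in standard notation:

```latex
% Reversible adiabatic process (dQ = 0) for an ideal gas:
% combining dU = -p\,dV with U = n c_V T and pV = nRT gives
T\,V^{\gamma - 1} = \text{const}, \qquad \gamma = c_p / c_V > 1 ,
% so as V increases during the expansion, T must decrease.
```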
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 2:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include :
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering : Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 3:::
Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria.
Introduction
Traditional exam script marking began in Cambridge 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.)
Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental.
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel
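The iterative scaling step described above can be sketched with a Bradley-Terry style fit over pairwise judgements. The function name and the toy judgement data below are illustrative assumptions; real adaptive comparative judgement additionally chooses which pair of scripts to present next adaptively.

```python
# Turning pairwise "script A beat script B" judgements into a scaled rank
# order with a Bradley-Terry style minorization-maximization fit.
# Illustrative sketch only.
from collections import defaultdict

def bradley_terry(judgements, n_iter=200):
    """judgements: list of (winner, loser) pairs -> {item: strength}."""
    items = {x for pair in judgements for x in pair}
    wins = defaultdict(int)
    for winner, _ in judgements:
        wins[winner] += 1
    strength = {i: 1.0 for i in items}
    for _ in range(n_iter):
        new = {}
        for i in items:
            # Sum 1/(s_i + s_opponent) over every comparison involving i.
            denom = sum(1.0 / (strength[i] + strength[w if i == l else l])
                        for w, l in judgements if i in (w, l))
            new[i] = wins[i] / denom if denom else strength[i]
        total = sum(new.values())  # rescale so strengths stay comparable
        strength = {i: v * len(items) / total for i, v in new.items()}
    return strength

# Three scripts, nine judgements: A always beats B and C, B always beats C.
judgements = [("A", "B")] * 3 + [("A", "C")] * 3 + [("B", "C")] * 3
s = bradley_terry(judgements)
ranking = sorted(s, key=s.get, reverse=True)
print(ranking)  # ['A', 'B', 'C']
```

The result is a scaled distribution of items obtained without marking against criteria, which is the core idea of comparative judgement.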
Document 4:::
Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory.
In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results.
Purpose
Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible.
Equating in item response theory
In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri
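The Dick-and-Jane example above can be made concrete with a mean-sigma linear equating sketch. The data and function name are illustrative assumptions; an operational equating would need a proper design (common items or randomly equivalent groups) rather than two arbitrary score lists.

```python
# Mean-sigma linear equating: place form-B scores onto the form-A scale so
# the transformed scores share form A's mean and standard deviation.
# Illustrative sketch only.
from statistics import mean, pstdev

def linear_equate(scores_a, scores_b):
    """Return a function mapping a raw form-B score to the form-A scale."""
    mu_a, sd_a = mean(scores_a), pstdev(scores_a)
    mu_b, sd_b = mean(scores_b), pstdev(scores_b)
    slope = sd_a / sd_b
    return lambda x: slope * (x - mu_b) + mu_a

form_a = [50, 60, 70, 80, 90]    # harder form: raw scores run lower
form_b = [60, 70, 80, 90, 100]   # easier form: same spread, higher mean
to_a_scale = linear_equate(form_a, form_b)
print(to_a_scale(70.0))  # 60.0 -- Jane's 70 on form B maps to 60 on form A
```

With these toy numbers, Jane's 70% on the easy form B is equated to 60% on the hard form A, i.e. the same standing as Dick's 60%.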
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
breaking a ceramic plate
beating an egg
A. Both are caused by cooling.
B. Both are caused by heating.
C. Both are chemical changes.
D. Both are only physical changes.
Answer:
|
sciq-2577
|
multiple_choice
|
What is it called when two alleles are both expressed in the heterozygous individual?
|
[
"codominance",
"shared dominance",
"low dominance",
"weak dominance"
] |
A
|
Relevant Documents:
Document 0:::
In genetics, dominance is the phenomenon of one variant (allele) of a gene on a chromosome masking or overriding the effect of a different variant of the same gene on the other copy of the chromosome. The first variant is termed dominant and the second is called recessive. This state of having two different variants of the same gene on each chromosome is originally caused by a mutation in one of the genes, either new (de novo) or inherited. The terms autosomal dominant or autosomal recessive are used to describe gene variants on non-sex chromosomes (autosomes) and their associated traits, while those on sex chromosomes (allosomes) are termed X-linked dominant, X-linked recessive or Y-linked; these have an inheritance and presentation pattern that depends on the sex of both the parent and the child (see Sex linkage). Since there is only one copy of the Y chromosome, Y-linked traits cannot be dominant or recessive. Additionally, there are other forms of dominance, such as incomplete dominance, in which a gene variant has a partial effect compared to when it is present on both chromosomes, and co-dominance, in which different variants on each chromosome both show their associated traits.
Dominance is a key concept in Mendelian inheritance and classical genetics. Letters and Punnett squares are used to demonstrate the principles of dominance in teaching, and the use of upper-case letters for dominant alleles and lower-case letters for recessive alleles is a widely followed convention. A classic example of dominance is the inheritance of seed shape in peas. Peas may be round, associated with allele R, or wrinkled, associated with allele r. In this case, three combinations of alleles (genotypes) are possible: RR, Rr, and rr. The RR (homozygous) individuals have round peas, and the rr (homozygous) individuals have wrinkled peas. In Rr (heterozygous) individuals, the R allele masks the presence of the r allele, so these individuals also have round peas. Thus, allele R is d
Document 1:::
Zygosity (the noun zygote is from the Greek for "yoked") is the degree to which both copies of a chromosome or gene have the same genetic sequence. In other words, it is the degree of similarity of the alleles in an organism.
Most eukaryotes have two matching sets of chromosomes; that is, they are diploid. Diploid organisms have the same loci on each of their two sets of homologous chromosomes except that the sequences at these loci may differ between the two chromosomes in a matching pair and that a few chromosomes may be mismatched as part of a chromosomal sex-determination system. If both alleles of a diploid organism are the same, the organism is homozygous at that locus. If they are different, the organism is heterozygous at that locus. If one allele is missing, it is hemizygous, and, if both alleles are missing, it is nullizygous.
The DNA sequence of a gene often varies from one individual to another. These gene variants are called alleles. While some genes have only one allele because there is low variation, others have only one allele because deviation from that allele can be harmful or fatal. But most genes have two or more alleles. The frequency of different alleles varies throughout the population. Some genes may have alleles with equal distributions. Often, the different variations in the genes do not affect the normal functioning of the organism at all. For some genes, one allele may be common, and another allele may be rare. Sometimes, one allele is a disease-causing variation while another allele is healthy.
In diploid organisms, one allele is inherited from the male parent and one from the female parent. Zygosity is a description of whether those two alleles have identical or different DNA sequences. In some cases the term "zygosity" is used in the context of a single chromosome.
Types
The words homozygous, heterozygous, and hemizygous are used to describe the genotype of a diploid organism at a single locus on the DNA. Homozygou
Document 2:::
Alleles
Document 3:::
Alleles have identity by type (IBT) when they have the same phenotypic effect or, if applied to a variation in the composition of DNA such as a single nucleotide polymorphism, when they have the same DNA sequence.
Alleles that are identical by type fall into two groups; those that are identical by descent (IBD) because they arose from the same allele in an earlier generation; and those that are non-identical by descent (NIBD) because they arose from separate mutations. NIBD can also be identical by state (IBS) though, if they share the same mutational expression but not through a recent common ancestor. Parent-offspring pairs share 50% of their genes IBD, and monozygotic twins share 100% IBD.
See also
Population genetics
External links
https://web.archive.org/web/20060309055031/http://darwin.eeb.uconn.edu/eeb348/lecture-notes/identity.pdf
http://zwets.com/pedkin/thompson.pdf
Classical genetics
Document 4:::
Major gene is a gene with pronounced phenotype expression, in contrast to a modifier gene. Major gene characterizes common expression of oligogenic series, i.e. a small number of genes that determine the same trait.
Major genes control the discontinuous or qualitative characters in contrast of minor genes or polygenes with individually small effects. Major genes segregate and may be easily subject to mendelian analysis. The gene categorization into major and minor determinants is more or less arbitrary. Both of the two types are in all probability only end points in a more or less continuous series of gene action and gene interactions.
The term major gene was introduced into the science of inheritance by Keneth Mather (1941).
See also
Gene interaction
Minor gene
Gene
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is it called when two alleles are both expressed in the heterozygous individual?
A. codominance
B. shared dominance
C. low dominance
D. weak dominance
Answer:
|
|
sciq-997
|
multiple_choice
|
What does magma that cools underground form?
|
[
"cracks",
"intrusions",
"plates",
"anomalies"
] |
B
|
Relevant Documents:
Document 0:::
The rock cycle is a basic concept in geology that describes transitions through geologic time among the three main rock types: sedimentary, metamorphic, and igneous. Each rock type is altered when it is forced out of its equilibrium conditions. For example, an igneous rock such as basalt may break down and dissolve when exposed to the atmosphere, or melt as it is subducted under a continent. Due to the driving forces of the rock cycle, plate tectonics and the water cycle, rocks do not remain in equilibrium and change as they encounter new environments. The rock cycle explains how the three rock types are related to each other, and how processes change from one type to another over time. This cyclical aspect makes rock change a geologic cycle and, on planets containing life, a biogeochemical cycle.
Transition to igneous rock
When rocks are pushed deep under the Earth's surface, they may melt into magma. If the conditions no longer exist for the magma to stay in its liquid state, it cools and solidifies into an igneous rock. A rock that cools within the Earth is called intrusive or plutonic and cools very slowly, producing a coarse-grained texture such as the rock granite. As a result of volcanic activity, magma (which is called lava when it reaches Earth's surface) may cool very rapidly at the Earth's surface, exposed to the atmosphere; rocks formed this way are called extrusive or volcanic rocks. These rocks are fine-grained and sometimes cool so rapidly that no crystals can form, resulting in a natural glass such as obsidian; however, the most common fine-grained rock is basalt. Any of the three main types of rocks (igneous, sedimentary, and metamorphic rocks) can melt into magma and cool into igneous rocks.
Secondary changes
Epigenetic change (secondary processes occurring at low temperatures and low pressures) may be arranged under a number of headings, each of which is typical of a group of rocks or rock-forming minerals, though usually more than one of these alt
Document 1:::
The thermal history of Earth involves the study of the cooling history of Earth's interior. It is a sub-field of geophysics. (Thermal histories are also computed for the internal cooling of other planetary and stellar bodies.) The study of the thermal evolution of Earth's interior is uncertain and controversial in all aspects, from the interpretation of petrologic observations used to infer the temperature of the interior, to the fluid dynamics responsible for heat loss, to material properties that determine the efficiency of heat transport.
Overview
Observations that can be used to infer the temperature of Earth's interior range from the oldest rocks on Earth to modern seismic images of the inner core size. Ancient volcanic rocks can be associated with a depth and temperature of melting through their geochemical composition. Using this technique and some geological inferences about the conditions under which the rock is preserved, the temperature of the mantle can be inferred. The mantle itself is fully convective, so that the temperature in the mantle is basically constant with depth outside the top and bottom thermal boundary layers. This is not quite true because the temperature in any convective body under pressure must increase along an adiabat, but the adiabatic temperature gradient is usually much smaller than the temperature jumps at the boundaries. Therefore, the mantle is usually associated with a single or potential temperature that refers to the mid-mantle temperature extrapolated along the adiabat to the surface. The potential temperature of the mantle is estimated to be about 1350 C today. There is an analogous potential temperature of the core but since there are no samples from the core its present-day temperature relies on extrapolating the temperature along an adiabat from the inner core boundary, where the iron solidus is somewhat constrained.
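The adiabatic temperature gradient referred to above has a standard textbook form; as a sketch (the symbols are the usual ones, not defined in the source):

```latex
% Adiabatic temperature gradient along the mantle adiabat
% (\alpha: thermal expansivity, g: gravity, c_p: specific heat capacity):
\left( \frac{dT}{dz} \right)_{\!S} = \frac{\alpha g T}{c_p}
% For typical mantle values this is only a few tenths of a kelvin per
% kilometre, small compared with the temperature jumps across the
% thermal boundary layers.
```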
Thermodynamics
The simplest mathematical formulation of the thermal history of Earth's interior i
Document 2:::
Tectonophysics, a branch of geophysics, is the study of the physical processes that underlie tectonic deformation. This includes measurement or calculation of the stress- and strain fields on Earth’s surface and the rheologies of the crust, mantle, lithosphere and asthenosphere.
Overview
Tectonophysics is concerned with movements in the Earth's crust and deformations over scales from meters to thousands of kilometers. These govern processes on local and regional scales and at structural boundaries, such as the destruction of continental crust (e.g. gravitational instability) and oceanic crust (e.g. subduction), convection in the Earth's mantle (availability of melts), the course of continental drift, and second-order effects of plate tectonics such as thermal contraction of the lithosphere. This involves the measurement of a hierarchy of strains in rocks and plates as well as deformation rates; the study of laboratory analogues of natural systems; and the construction of models for the history of deformation.
History
Tectonophysics was adopted as the name of a new section of AGU on April 19, 1940, at AGU's 21st Annual Meeting. According to the AGU website (https://tectonophysics.agu.org/agu-100/section-history/), using the words of Norman Bowen, the main goal of the tectonophysics section was to “designate this new borderline field between geophysics, physics and geology … for the solution of problems of tectonics.” Consequently, the claim below that the term was defined in 1954 by Gzovskii is clearly incorrect. Since 1940 members of AGU had been presenting papers at AGU meetings, the contents of which defined the meaning of the field.
Tectonophysics was defined as a field in 1954 when Mikhail Vladimirovich Gzovskii published three papers in the journal Izvestiya Akad. Nauk SSSR, Sireya Geofizicheskaya: "On the tasks and content of tectonophysics", "Tectonic stress fields", and "Modeling of tectonic stress fields". He defined the main goals of tectonophysica
Document 3:::
The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata. That is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events.
Correlating the rock record
At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods—for the particular geographic region or regions. The geologic record is in no one place entirely complete, for where geologic forces in one age provide a low-lying region that accumulates deposits much like a layer cake, in the next they may have uplifted the region, so that the same area is instead being weathered and torn down by chemistry, wind, temperature, and water. This is to say that in a given location, the geologic record can be and quite often is interrupted as the ancient local environment is converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go deep, thoroughly support the law of superposition.
However using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale
Document 4:::
Earth's crustal evolution involves the formation, destruction and renewal of the rocky outer shell at that planet's surface.
The variation in composition within the Earth's crust is much greater than that of other terrestrial planets. Mars, Venus, Mercury and other planetary bodies have relatively quasi-uniform crusts unlike that of the Earth which contains both oceanic and continental plates. This unique property reflects the complex series of crustal processes that have taken place throughout the planet's history, including the ongoing process of plate tectonics.
The proposed mechanisms regarding Earth's crustal evolution take a theory-orientated approach. Fragmentary geologic evidence and observations provide the basis for hypothetical solutions to problems relating to the early Earth system. Therefore, a combination of these theories creates both a framework of current understanding and also a platform for future study.
Early crust
Mechanisms of early crust formation
The early Earth was entirely molten. This was due to high temperatures created and maintained by the following processes:
Compression of the early atmosphere
Rapid axial rotation
Regular impacts with neighbouring planetesimals.
The mantle remained hotter than modern day temperatures throughout the Archean. Over time the Earth began to cool as planetary accretion slowed and heat stored within the magma ocean was lost to space through radiation.
A theory for the initiation of magma solidification states that once cool enough, the cooler base of the magma ocean would begin to crystallise first. This is because pressures of 25 GPa at these depths cause the solidus to lower. The formation of a thin 'chill-crust' at the extreme surface would provide thermal insulation to the shallow subsurface, keeping it warm enough to maintain the mechanism of crystallisation from the deep magma ocean.
The composition of the crystals produced during the crystallisation of the magma ocean varied with depth. Ex
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What does magma that cools underground form?
A. cracks
B. intrusions
C. plates
D. anomalies
Answer:
|
|
sciq-7206
|
multiple_choice
|
The smallest cyclic ether is called what?
|
[
"quark",
"peroxidase",
"aldehyde",
"epoxide"
] |
D
|
Relevant Documents:
Document 0:::
Classification
Oxidoreductases are classified as EC 1 in the EC number classification of enzymes. Oxidoreductases can be further classified into 21 subclasses:
EC 1.1 includes oxidoreductases that act on the CH-OH group of donors (alcohol oxidoreductases such as methanol dehydrogenase)
EC 1.2 includes oxidoreductases that act on the aldehyde or oxo group of donors
EC 1.3 includes oxidoreductases that act on the CH-CH group of donors (CH-CH oxidore
Document 1:::
In chemical nomenclature, the IUPAC nomenclature of organic chemistry is a method of naming organic chemical compounds as recommended by the International Union of Pure and Applied Chemistry (IUPAC). It is published in the Nomenclature of Organic Chemistry (informally called the Blue Book). Ideally, every possible organic compound should have a name from which an unambiguous structural formula can be created. There is also an IUPAC nomenclature of inorganic chemistry.
To avoid long and tedious names in normal communication, the official IUPAC naming recommendations are not always followed in practice, except when it is necessary to give an unambiguous and absolute definition to a compound. IUPAC names can sometimes be simpler than older names, as with ethanol, instead of ethyl alcohol. For relatively simple molecules they can be more easily understood than non-systematic names, which must be learnt or looked up. However, the common or trivial name is often substantially shorter and clearer, and so preferred. These non-systematic names are often derived from an original source of the compound. Also, very long names may be less clear than structural formulas.
Basic principles
In chemistry, a number of prefixes, suffixes and infixes are used to describe the type and position of the functional groups in the compound.
The steps for naming an organic compound are:
Identification of the parent hydride parent hydrocarbon chain. This chain must obey the following rules, in order of precedence:
It should have the maximum number of substituents of the suffix functional group. By suffix, it is meant that the parent functional group should have a suffix, unlike halogen substituents. If more than one functional group is present, the one with highest group precedence should be used.
It should have the maximum number of multiple bonds.
It should have the maximum length.
It should have the maximum number of substituents or branches cited as prefixes
It should have the ma
Document 2:::
Types
As indicated in the following Biochemistry section, there are 4 types of chemically distinct eoxins that are made serially from the 15-lipoxygenase metabolite of arachidonic
Document 3:::
The metabolome refers to the complete set of small-molecule chemicals found within a biological sample. The biological sample can be a cell, a cellular organelle, an organ, a tissue, a tissue extract, a biofluid or an entire organism. The small molecule chemicals found in a given metabolome may include both endogenous metabolites that are naturally produced by an organism (such as amino acids, organic acids, nucleic acids, fatty acids, amines, sugars, vitamins, co-factors, pigments, antibiotics, etc.) as well as exogenous chemicals (such as drugs, environmental contaminants, food additives, toxins and other xenobiotics) that are not naturally produced by an organism.
In other words, there is both an endogenous metabolome and an exogenous metabolome. The endogenous metabolome can be further subdivided to include a "primary" and a "secondary" metabolome (particularly when referring to plant or microbial metabolomes). A primary metabolite is directly involved in the normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has important ecological function. Secondary metabolites may include pigments, antibiotics or waste products derived from partially metabolized xenobiotics. The study of the metabolome is called metabolomics.
Origins
The word metabolome appears to be a blending of the words "metabolite" and "chromosome". It was constructed to imply that metabolites are indirectly encoded by genes or act on genes and gene products. The term "metabolome" was first used in 1998 and was likely coined to match with existing biological terms referring to the complete set of genes (the genome), the complete set of proteins (the proteome) and the complete set of transcripts (the transcriptome). The first book on metabolomics was published in 2003. The first journal dedicated to metabolomics (titled simply "Metabolomics") was launched in 2005 and is currently edited by Prof. Roy Goodacre. Some of the m
Document 4:::
Perylene or perilene is a polycyclic aromatic hydrocarbon with the chemical formula C20H12, occurring as a brown solid. It or its derivatives may be carcinogenic, and it is considered to be a hazardous pollutant. In cell membrane cytochemistry, perylene is used as a fluorescent lipid probe. It is the parent compound of a class of rylene dyes.
Reactions
Like other polycyclic aromatic compounds, perylene is reduced by alkali metals to give a deeply colored radical anion and a dianion. The diglyme solvates of these salts have been characterized by X-ray crystallography.
Emission
Perylene displays blue fluorescence. It is used as a blue-emitting dopant material in OLEDs, either pure or substituted. Perylene can be also used as an organic photoconductor. It has an absorption maximum at 434 nm, and as with all polycyclic aromatic compounds, low water solubility (1.2 x 10−5 mmol/L). Perylene has a molar absorptivity of 38,500 M−1cm−1 at 435.7 nm.
Structure
The perylene molecule consists of two naphthalene molecules connected by a carbon-carbon bond at the 1 and 8 positions on both molecules. All of the carbon atoms in perylene are sp2 hybridized. The structure of perylene has been extensively studied by X-ray crystallography.
Biology
Naturally occurring perylene quinones have been identified in lichens Laurera sanguinaria Malme and Graphis haematites Fée.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The smallest cyclic ether is called what?
A. quark
B. peroxidase
C. aldehyde
D. epoxide
Answer:
|
|
sciq-3205
|
multiple_choice
|
The nucleus is comprised primarily of?
|
[
"matter",
"faith",
"energy",
"empty space"
] |
D
|
Relevant Documents:
Document 0:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, a double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries chromosome(s) with a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 1:::
The nucleoplasm, also known as karyoplasm, is the type of protoplasm that makes up the cell nucleus, the most prominent organelle of the eukaryotic cell. It is enclosed by the nuclear envelope, also known as the nuclear membrane. The nucleoplasm resembles the cytoplasm of a eukaryotic cell in that it is a gel-like substance found within a membrane, although the nucleoplasm only fills out the space in the nucleus and has its own unique functions. The nucleoplasm suspends structures within the nucleus that are not membrane-bound and is responsible for maintaining the shape of the nucleus. The structures suspended in the nucleoplasm include chromosomes, various proteins, nuclear bodies, the nucleolus, nucleoporins, nucleotides, and nuclear speckles.
The soluble, liquid portion of the nucleoplasm is called the karyolymph, nucleosol, or nuclear hyaloplasm.
History
The existence of the nucleus, including the nucleoplasm, was first documented as early as 1682 by the Dutch microscopist Leeuwenhoek and was later described and drawn by Franz Bauer. However, the cell nucleus was not named and described in detail until Robert Brown's presentation to the Linnean Society in 1831.
The nucleoplasm, while described by Bauer and Brown, was not specifically isolated as a separate entity until its naming in 1882 by Polish-German scientist Eduard Strasburger, one of the most famous botanists of the 19th century, and the first person to discover mitosis in plants.
Role
Many important cell functions take place in the nucleus, more specifically in the nucleoplasm. The main function of the nucleoplasm is to provide the proper environment for essential processes that take place in the nucleus, serving as the suspension substance for all organelles inside the nucleus, and storing the structures that are used in these processes. 34% of proteins encoded in the human genome are ones that localize to the nucleoplasm. These proteins take part in RNA transcription and gene regulation in the n
Document 2:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about the subject is then a subset of this set; the set of
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks within the organism, such as replication, DNA repair, protein synthesis, and motility; they are capable of both specialization and movement.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The nucleus is comprised primarily of?
A. matter
B. faith
C. energy
D. empty space
Answer:
|
|
sciq-8637
|
multiple_choice
|
Speciation, convergent evolution, and coevolution are types of what process?
|
[
"systemic evolution",
"macroevolution",
"devolution",
"microevolution"
] |
B
|
Relevant Documents:
Document 0:::
Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology.
The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis.
Subfields
Evolution is the central unifying concept in biology. Biology can be divided into various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution
Document 1:::
Adaptive type – in evolutionary biology – is any population or taxon that has the potential for partial or complete occupation of free or underutilized habitats, or of a position in the general economy of nature. In an evolutionary sense, the emergence of a new adaptive type is usually the result of adaptive radiation within certain groups of organisms, giving rise to categories that can effectively exploit temporary or newly available environmental conditions.
Such evolutionary units, with their distinctive morphological, anatomical, physiological and other characteristics – that is, their genetic adjustments – are predisposed to occupy certain habitats or positions in the general economy of nature.
Put simply, an adaptive type is a group of organisms whose general biological properties represent a key that opens the entrance to a given adaptive zone within a particular ecological complex.
Adaptive types are spatially and temporally specific. Since the limits of their general biological properties are essentially genetically defined, the emergence of new adaptive types entails a corresponding change in population genetic structure, and reflects the perpetual tension between the need to be optimally adapted to current living conditions and the need to maintain genetic variation for survival under possibly new circumstances.
For example, the specific place in the economy of nature now occupied by humans existed millions of years before the human type appeared. However, once the evolution of primates (order Primates) reached a level capable of occupying that position, it was opened, and then spread through the living world with unprecedented acceleration. Culture, in the broadest sense, is the key adaptation by which the adaptive type Homo sapiens occupies its adaptive zone through work, also in the broadest sense of the term.
Document 2:::
Megaevolution describes the most dramatic events in evolution. It is no longer suggested that the evolutionary processes involved are necessarily special, although in some cases they might be. Whereas macroevolution can apply to relatively modest changes that produced diversification of species and genera and are readily compared to microevolution, "megaevolution" is used for great changes. Megaevolution has been extensively debated because it has been seen as a possible objection to Charles Darwin's theory of gradual evolution by natural selection.
A list was prepared by John Maynard Smith and Eörs Szathmáry which they called The Major Transitions in Evolution. In the 1999 edition of the list they included:
Replicating molecules: change to populations of molecules in protocells
Independent replicators leading to chromosomes
RNA as gene and enzyme change to DNA genes and protein enzymes
Bacterial cells (prokaryotes) leading to cells (eukaryotes) with nuclei and organelles
Asexual clones leading to sexual populations
Single-celled organisms leading to fungi, plants and animals
Solitary individuals leading to colonies with non-reproducing castes (termites, ants & bees)
Primate societies leading to human societies with language
Some of these topics had been discussed before.
Numbers one to six on the list are events which are of huge importance, but about which we know relatively little. All occurred before (and mostly very much before) the fossil record started, or at least before the Phanerozoic eon.
Numbers seven and eight on the list are of a different kind from the first six, and have generally not been considered by the other authors. Number four is of a type not covered by traditional evolutionary theory: the origin of eukaryotic cells is probably due to symbiosis between prokaryotes. This is a kind of evolution which must be a rare event.
The Cambrian radiation example
The Cambrian explosion or Cambrian radiation was the relatively rapid appeara
Document 3:::
The history of life on Earth seems to show a clear trend; for example, it seems intuitive that there is a trend towards increasing complexity in living organisms. More recently evolved organisms, such as mammals, appear to be much more complex than organisms, such as bacteria, which have existed for a much longer period of time. However, there are theoretical and empirical problems with this claim. From a theoretical perspective, it appears that there is no reason to expect evolution to result in any largest-scale trends, although small-scale trends, limited in time and space, are expected (Gould, 1997). From an empirical perspective, it is difficult to measure complexity and, when it has been measured, the evidence does not support a largest-scale trend (McShea, 1996).
History
Many of the founding figures of evolution supported the idea of evolutionary progress, which has since fallen from favour, though the work of Francisco J. Ayala and Michael Ruse suggests it is still influential.
Hypothetical largest-scale trends
McShea (1998) discusses eight features of organisms that might indicate largest-scale trends in evolution: entropy, energy intensiveness, evolutionary versatility, developmental depth, structural depth, adaptedness, size, complexity. He calls these "live hypotheses", meaning that trends in these features are currently being considered by evolutionary biologists. McShea observes that the most popular hypothesis, among scientists, is that there is a largest-scale trend towards increasing complexity.
Evolutionary theorists agree that there are local trends in evolution, such as increasing brain size in hominids, but these directional changes do not persist indefinitely, and trends in opposite directions also occur (Gould, 1997). Evolution causes organisms to adapt to their local environment; when the environment changes, the direction of the trend may change. The question of whether there is evolutionary progress is better formulated as the question of whether
Document 4:::
Tinbergen's four questions, named after 20th century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour. These are also commonly referred to as levels of analysis. It suggests that an integrative understanding of behaviour must include ultimate (evolutionary) explanations, in particular:
behavioural adaptive functions
phylogenetic history; and the proximate explanations
underlying physiological mechanisms
ontogenetic/developmental history.
Four categories of questions and explanations
When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny).
This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem.
Evolutionary (ultimate) explanations
First question: Function (adaptation)
Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive.
The literature conceptualizes the relationship between function and evolution in two ways. On the one hand, function
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Speciation, convergent evolution, and coevolution are types of what process?
A. systemic evolution
B. macroevolution
C. devolution
D. microevolution
Answer:
|
|
ai2_arc-192
|
multiple_choice
|
A goat gets energy from the grass it eats. Where does the grass get its energy?
|
[
"soil",
"sunlight",
"water",
"air"
] |
B
|
Relevant Documents:
Document 0:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
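For reference, the correct choice ("decreases") follows from a standard result not spelled out in the excerpt: for a reversible adiabatic expansion of an ideal gas, in which the gas does work on its surroundings,

```latex
% Reversible adiabatic (isentropic) process of an ideal gas:
TV^{\gamma - 1} = \text{const}, \qquad \gamma = c_p / c_v > 1.
% As the volume V increases, T must fall to keep the product constant,
% so the temperature decreases during the expansion.
```

(A free Joule expansion, by contrast, would leave the temperature unchanged; the conventional reading of "adiabatic expansion" is the quasi-static case.)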
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to develop an interest in these subjects, leading secondary school pupils to choose science A levels and, ultimately, science careers. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of STEM industries and include TV personalities such as Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 4:::
A scholar is a person who is a researcher or has expertise in an academic discipline. A scholar can also be an academic, who works as a professor, teacher, or researcher at a university. An academic usually holds an advanced degree or a terminal degree, such as a master's degree or a doctorate (PhD). Independent scholars and public intellectuals work outside of the academy yet may publish in academic journals and participate in scholarly public discussion.
Definitions
In contemporary English usage, the term scholar sometimes is equivalent to the term academic, and describes a university-educated individual who has achieved intellectual mastery of an academic discipline, as instructor and as researcher. Moreover, before the establishment of universities, the term scholar identified and described an intellectual person whose primary occupation was professional research. In 1847, minister Emanuel Vogel Gerhart spoke of the role of the scholar in society:
Gerhart argued that a scholar can not be focused on a single discipline, contending that knowledge of multiple disciplines is necessary to put each into context and to inform the development of each:
A 2011 examination outlined the following attributes commonly accorded to scholars as "described by many writers, with some slight variations in the definition":
Scholars may rely on the scholarly method or scholarship, a body of principles and practices used by scholars to make their claims about the world as valid and trustworthy as possible, and to make them known to the scholarly public. It is the methods that systemically advance the teaching, research, and practice of a given scholarly or academic field of study through rigorous inquiry. Scholarship is creative, can be documented, can be replicated or elaborated, and can be and is peer-reviewed through various methods.
Role in society
Scholars have generally been upheld as creditable figures of high social standing, who are engaged in work important to society.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A goat gets energy from the grass it eats. Where does the grass get its energy?
A. soil
B. sunlight
C. water
D. air
Answer:
|
|
sciq-1352
|
multiple_choice
|
What serious sti can damage the heart, brain and other organs or even cause death, if untreated?
|
[
"chlamydia",
"cirrhosis",
"herpes",
"syphilis"
] |
D
|
Relevant Documents:
Document 0:::
Sexually transmitted infections (STIs), also referred to as sexually transmitted diseases (STDs), are infections that are commonly spread by sexual activity, especially vaginal intercourse, anal sex and oral sex. The most prevalent STIs may be carried by a significant fraction of the human population.
Document 1:::
The visual analogue scale (VAS) is a psychometric response scale that can be used in questionnaires. It is a measurement instrument for subjective characteristics or attitudes that cannot be directly measured. When responding to a VAS item, respondents specify their level of agreement to a statement by indicating a position along a continuous line between two end points.
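As a minimal illustration (a hypothetical helper, not part of any standard VAS software), scoring a paper VAS usually amounts to measuring the mark's distance from the left anchor and normalizing by the line length:

```python
def vas_score(mark_mm: float, line_length_mm: float = 100.0) -> float:
    """Convert a mark position on a visual analogue line to a 0-100 score.

    mark_mm: measured distance from the left anchor to the respondent's mark.
    line_length_mm: total length of the printed line (commonly 10 cm = 100 mm).
    """
    if not 0.0 <= mark_mm <= line_length_mm:
        raise ValueError("mark must lie between the two anchors")
    return 100.0 * mark_mm / line_length_mm

# A mark 63 mm along a standard 10 cm line scores 63.0
print(vas_score(63.0))
```

The function name, signature, and 0-100 convention are illustrative assumptions; actual studies may report on other ranges (e.g. 0-10).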
Comparison to other scales
This continuous (or "analogue") aspect of the scale differentiates it from discrete scales such as the Likert scale. There is evidence showing that visual analogue scales have superior metrical characteristics than discrete scales, thus a wider range of statistical methods can be applied to the measurements.
The VAS can be compared to other linear scales such as the Likert scale or Borg scale. The sensitivity and reproducibility of the results are broadly very similar, although the VAS may outperform the other scales in some cases. These advantages extend to measurement instruments made up from combinations of visual analogue scales, such as semantic differentials.
Uses
Recent advances in methodologies for Internet-based research include the development and evaluation of visual analogue scales for use in Internet-based questionnaires. One electronic version of the VAS that employs a 10 cm scale and various customizations is available on the Apple Store for use in research and workplace settings.
VAS is the most common pain scale for quantification of endometriosis-related pain and skin graft donor site-related pain. A review came to the conclusion that VAS and numerical rating scale (NRS) were the best adapted pain scales for pain measurement in endometriosis. For research purposes, and for more detailed pain measurement in clinical practice, the review suggested use of VAS or NRS for each type of typical pain related to endometriosis (dysmenorrhea, deep dyspareunia and non-menstrual chronic pelvic pain), combined with the clinical global impression (CGI) and a qualit
Document 2:::
Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education.
Structure
A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior.
Prior to the widespread introduction of SBAs into medical education, the typical form of examination was true-false multiple choice questions. But during the 2000s, educators found that SBAs would be superior.
Document 3:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certificate (CSEC), which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on the structure and functioning (anatomy, physiology, biochemistry) of the human body and their relevance to human health, with Caribbean-specific experience. The syllabus is organized under five main sections: living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, and the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 4:::
Medical Science Educator is a peer-reviewed journal that focuses on teaching the sciences that are fundamental to modern medicine and health. Coverage includes basic science education, clinical teaching and the incorporation of modern educational technologies. MSE offers all who teach in healthcare the most current information to succeed in their task by publishing scholarly activities, opinions, and resources in medical science education. MSE provides the readership a better understanding of teaching and learning techniques in order to advance medical science education. It is the official publication of the International Association of Medical Science Educators (IAMSE).
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What serious sti can damage the heart, brain and other organs or even cause death, if untreated?
A. chlamydia
B. cirrhosis
C. herpes
D. syphilis
Answer:
|
|
sciq-429
|
multiple_choice
|
The skull is a part of a vertebrate endoskeleton that encloses and protects what organ?
|
[
"brain",
"nervous system",
"heart",
"lung"
] |
A
|
Relevant Documents:
Document 0:::
The skull is a bone protective cavity for the brain. The skull is composed of four types of bone i.e., cranial bones, facial bones, ear ossicles and hyoid bone. However two parts are more prominent: the cranium (plural: craniums or crania) and the mandible. In humans, these two parts are the neurocranium (braincase) and the viscerocranium (facial skeleton) that includes the mandible as its largest bone. The skull forms the anterior-most portion of the skeleton and is a product of cephalisation—housing the brain, and several sensory structures such as the eyes, ears, nose, and mouth. In humans these sensory structures are part of the facial skeleton.
Functions of the skull include protection of the brain, fixing the distance between the eyes to allow stereoscopic vision, and fixing the position of the ears to enable sound localisation of the direction and distance of sounds. In some animals, such as horned ungulates (mammals with hooves), the skull also has a defensive function by providing the mount (on the frontal bone) for the horns.
The English word skull is probably derived from Old Norse , while the Latin word comes from the Greek root (). The human skull fully develops two years after birth. The junctions of the skull bones are joined by structures called sutures.
The skull is made up of a number of fused flat bones, and contains many foramina, fossae, processes, and several cavities or sinuses. In zoology there are openings in the skull called fenestrae.
Structure
Humans
The human skull is the bone structure that forms the head in the human skeleton. It supports the structures of the face and forms a cavity for the brain. Like the skulls of other vertebrates, it protects the brain from injury.
The skull consists of three parts, of different embryological origin—the neurocranium, the sutures, and the facial skeleton (also called the membraneous viscerocranium). The neurocranium (or braincase) forms the protective cranial cavity that surrounds and houses the
Document 1:::
In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from cells of the same type that act together in a function. Tissues of different types combine to form an organ, which has a specific function. The intestinal wall, for example, is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system.
An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs.
The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body.
Animals
Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam
Document 2:::
The human body is the structure of a human being. It is composed of many different types of cells that together create tissues and subsequently organs and then organ systems. They ensure homeostasis and the viability of the human body.
It comprises a head, hair, neck, torso (which includes the thorax and abdomen), arms and hands, legs and feet.
The study of the human body includes anatomy, physiology, histology and embryology. The body varies anatomically in known ways. Physiology focuses on the systems and organs of the human body and their functions. Many systems and mechanisms interact in order to maintain homeostasis, with safe levels of substances such as sugar and oxygen in the blood.
The body is studied by health professionals, physiologists, anatomists, and artists to assist them in their work.
Composition
The human body is composed of elements including hydrogen, oxygen, carbon, calcium and phosphorus. These elements reside in trillions of cells and non-cellular components of the body.
The adult male body is about 60% water for a total water content of some . This is made up of about of extracellular fluid including about of blood plasma and about of interstitial fluid, and about of fluid inside cells. The content, acidity and composition of the water inside and outside cells is carefully maintained. The main electrolytes in body water outside cells are sodium and chloride, whereas within cells it is potassium and other phosphates.
Cells
The body contains trillions of cells, the fundamental unit of life. At maturity, there are roughly 30–37 trillion cells in the body, an estimate arrived at by totaling the cell numbers of all the organs of the body and cell types. The body is also host to about the same number of non-human cells as well as multicellular organisms which reside in the gastrointestinal tract and on the skin. Not all parts of the body are made from cells. Cells sit in an extracellular matrix that consists of proteins such as collagen,
Document 3:::
There is much to be discovered about the evolution of the brain and the principles that govern it. While much has been discovered, not everything currently known is well understood. The evolution of the brain has appeared to exhibit diverging adaptations within taxonomic classes such as Mammalia and more vastly diverse adaptations across other taxonomic classes.
Brain to body size scales allometrically. This means as body size changes, so do other physiological, anatomical, and biochemical constructs connecting the brain to the body. Small bodied mammals have relatively large brains compared to their bodies whereas large mammals (such as whales) have a smaller brain to body ratios. If brain weight is plotted against body weight for primates, the regression line of the sample points can indicate the brain power of a primate species. Lemurs for example fall below this line which means that for a primate of equivalent size, we would expect a larger brain size. Humans lie well above the line indicating that humans are more encephalized than lemurs. In fact, humans are more encephalized compared to all other primates. This means that human brains have exhibited a larger evolutionary increase in its complexity relative to its size. Some of these evolutionary changes have been found to be linked to multiple genetic factors, such as proteins and other organelles.
Early history of brain development
One approach to understanding overall brain evolution is to use a paleoarchaeological timeline to trace the necessity for ever increasing complexity in structures that allow for chemical and electrical signaling. Because brains and other soft tissues do not fossilize as readily as mineralized tissues, scientists often look to other structures as evidence in the fossil record to get an understanding of brain evolution. This, however, leads to a dilemma as the emergence of organisms with more complex nervous systems with protective bone or other protective tissues that can then
Document 4:::
The following diagram is provided as an overview of and topical guide to the human nervous system:
Human nervous system – the part of the human body that coordinates a person's voluntary and involuntary actions and transmits signals between different parts of the body. The human nervous system consists of two main parts: the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS contains the brain and spinal cord. The PNS consists mainly of nerves, which are long fibers that connect the CNS to every other part of the body. The PNS includes motor neurons, mediating voluntary movement; the autonomic nervous system, comprising the sympathetic nervous system and the parasympathetic nervous system and regulating involuntary functions; and the enteric nervous system, a semi-independent part of the nervous system whose function is to control the gastrointestinal system.
Evolution of the human nervous system
Evolution of nervous systems
Evolution of human intelligence
Evolution of the human brain
Paleoneurology
Some branches of science that study the human nervous system
Neuroscience
Neurology
Paleoneurology
Central nervous system
The central nervous system (CNS) is the largest part of the nervous system and includes the brain and spinal cord.
Spinal cord
Brain
Brain – center of the nervous system.
Outline of the human brain
List of regions of the human brain
Principal regions of the vertebrate brain:
Peripheral nervous system
Peripheral nervous system (PNS) – nervous system structures that do not lie within the CNS.
Sensory system
A sensory system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory receptors, neural pathways, and parts of the brain involved in sensory perception.
List of sensory systems
Sensory neuron
Perception
Visual system
Auditory system
Somatosensory system
Vestibular system
Olfactory system
Taste
Pain
Components of the nervous system
Neuron
I
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The skull is a part of a vertebrate endoskeleton that encloses and protects what organ?
A. brain
B. nervous system
C. heart
D. lung
Answer:
|
|
sciq-2666
|
multiple_choice
|
What substance do the leaves of plants take in from the environment?
|
[
"hydrogen",
"acid rain",
"carbon dioxide",
"oxygen"
] |
C
|
Relevant Documents:
Document 0:::
Plant ecology is a subdiscipline of ecology that studies the distribution and abundance of plants, the effects of environmental factors upon the abundance of plants, and the interactions among plants and between plants and other organisms. Examples of these are the distribution of temperate deciduous forests in North America, the effects of drought or flooding upon plant survival, and competition among desert plants for water, or effects of herds of grazing animals upon the composition of grasslands.
A global overview of the Earth's major vegetation types is provided by O.W. Archibold. He recognizes 11 major vegetation types: tropical forests, tropical savannas, arid regions (deserts), Mediterranean ecosystems, temperate forest ecosystems, temperate grasslands, coniferous forests, tundra (both polar and high mountain), terrestrial wetlands, freshwater ecosystems and coastal/marine systems. This breadth of topics shows the complexity of plant ecology, since it includes plants from floating single-celled algae up to large canopy forming trees.
One feature that defines plants is photosynthesis. Photosynthesis is a series of chemical reactions that create glucose and oxygen, which are vital for plant life. One of the most important aspects of plant ecology is the role plants have played in creating the oxygenated atmosphere of Earth, an event that occurred some 2 billion years ago. It can be dated by the deposition of banded iron formations, distinctive sedimentary rocks with large amounts of iron oxide. At the same time, plants began removing carbon dioxide from the atmosphere, thereby initiating the process of controlling Earth's climate. A long-term trend of the Earth has been toward increasing oxygen and decreasing carbon dioxide, and many other events in the Earth's history, like the first movement of life onto land, are likely tied to this sequence of events.
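The "chemical reactions to create glucose and oxygen" mentioned above are conventionally summarized by the standard net photosynthesis equation (a textbook result, not quoted from this passage):

```latex
% Net photosynthesis: carbon dioxide and water, driven by light
% captured by chlorophyll, yield glucose and molecular oxygen.
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
  \;\xrightarrow{\text{light, chlorophyll}}\;
  \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```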
One of the early classic books on plant ecology was written by J.E. Weaver and F.E. Clements. It
Document 1:::
Plants are the eukaryotes that form the kingdom Plantae; they are predominantly photosynthetic. This means that they obtain their energy from sunlight, using chloroplasts derived from endosymbiosis with cyanobacteria to produce sugars from carbon dioxide and water, using the green pigment chlorophyll. Exceptions are parasitic plants that have lost the genes for chlorophyll and photosynthesis, and obtain their energy from other plants or fungi.
Historically, as in Aristotle's biology, the plant kingdom encompassed all living things that were not animals, and included algae and fungi. Definitions have narrowed since then; current definitions exclude the fungi and some of the algae. By the definition used in this article, plants form the clade Viridiplantae (green plants), which consists of the green algae and the embryophytes or land plants (hornworts, liverworts, mosses, lycophytes, ferns, conifers and other gymnosperms, and flowering plants). A definition based on genomes includes the Viridiplantae, along with the red algae and the glaucophytes, in the clade Archaeplastida.
There are about 380,000 known species of plants, of which the majority, some 260,000, produce seeds. They range in size from single cells to the tallest trees. Green plants provide a substantial proportion of the world's molecular oxygen; the sugars they create supply the energy for most of Earth's ecosystems; other organisms, including animals, either consume plants directly or rely on organisms which do so.
Grain, fruit, and vegetables are basic human foods and have been domesticated for millennia. People use plants for many purposes, such as building materials, ornaments, writing materials, and, in great variety, for medicines. The scientific study of plants is known as botany, a branch of biology.
Definition
Taxonomic history
All living things were traditionally placed into one of two groups, plants and animals. This classification dates from Aristotle (384–322 BC), who distinguished d
Document 2:::
What a Plant Knows is a popular science book by Daniel Chamovitz, originally published in 2012, discussing the sensory system of plants. A revised edition was published in 2017.
Release details / Editions / Publication
Hardcover edition, 2012
Paperback version, 2013
Revised edition, 2017
What a Plant Knows has been translated and published in a number of languages.
Document 3:::
Tolerance is the ability of plants to mitigate the negative fitness effects caused by herbivory. It is one of the general plant defense strategies against herbivores, the other being resistance, which is the ability of plants to prevent damage (Strauss and Agrawal 1999). Plant defense strategies play important roles in the survival of plants as they are fed upon by many different types of herbivores, especially insects, which may impose negative fitness effects (Strauss and Zangerl 2002). Damage can occur in almost any part of the plants, including the roots, stems, leaves, flowers and seeds (Strauss and Zangerl 2002). In response to herbivory, plants have evolved a wide variety of defense mechanisms and although relatively less studied than resistance strategies, tolerance traits play a major role in plant defense (Strauss and Zangerl 2002, Rosenthal and Kotanen 1995).
Traits that confer tolerance are controlled genetically and therefore are heritable traits under selection (Strauss and Agrawal 1999). Many factors intrinsic to the plants, such as growth rate, storage capacity, photosynthetic rates and nutrient allocation and uptake, can affect the extent to which plants can tolerate damage (Rosenthal and Kotanen 1994). Extrinsic factors such as soil nutrition, carbon dioxide levels, light levels, water availability and competition also have an effect on tolerance (Rosenthal and Kotanen 1994).
History of the study of plant tolerance
Studies of tolerance to herbivory have historically been the focus of agricultural scientists (Painter 1958; Bardner and Fletcher 1974). Tolerance was actually initially classified as a form of resistance (Painter 1958). Agricultural studies on tolerance, however, are mainly concerned with the compensatory effect on the plants' yield and not its fitness, since it is of economic interest to reduce crop losses due to herbivory by pests (Trumble 1993; Bardner and Fletcher 1974). One surprising discovery made about plant tolerance was th
Document 4:::
Phytotechnology (; ) implements solutions to scientific and engineering problems in the form of plants. It is distinct from ecotechnology and biotechnology as these fields encompass the use and study of ecosystems and living beings, respectively. Current study of this field has mostly been directed into contaminate removal (phytoremediation), storage (phytosequestration) and accumulation (see hyperaccumulators). Plant-based technologies have become alternatives to traditional cleanup procedures because of their low capital costs, high success rates, low maintenance requirements, end-use value, and aesthetic nature.
Overview
Phytotechnology is the application of plants to engineering and science problems. Phytotechnology uses ecosystem services to provide for a specifically engineered solution to a problem. Ecosystem services, broadly defined fall into four broad categories: provisioning (i.e. production of food and water), regulating (i.e. the control of climate and disease) supporting (i.e. nutrient cycles and crop pollination), and cultural (i.e. spiritual and recreational benefits). Many times only one of these ecosystem services is maximized in the design of the space. For instance a constructed wetland may attempt to maximize the cooling properties of the system to treat water from a wastewater treatment facility before introduction to a river. The designed benefit is a reduction of water temperature for the river system while the constructed wetland itself provides habitat and food for wildlife as well as walking trails for recreation. Most phytotechnology has been focused on the abilities of plants to remove pollutants from the environment. Other technologies such as green roofs, green walls and bioswales are generally considered phytotechnology. Taking a broad view: even parks and landscaping could be viewed as phytotechnology.
However, there is very little consensus over a definition of phytotechnology even within the field. The Phytotechnology Technical
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What substance do the leaves of plants take in from the environment?
A. hydrogen
B. acid rain
C. carbon dioxide
D. oxygen
Answer:
|
|
sciq-3150
|
multiple_choice
|
When the hydrogen is nearly used up, the star can fuse which element into heavier elements?
|
[
"carbon",
"helium",
"xenon",
"oxygen"
] |
B
|
Relevant Documents:
Document 0:::
Nucleosynthesis is the process that creates new atomic nuclei from pre-existing nucleons (protons and neutrons) and nuclei. According to current theories, the first nuclei were formed a few minutes after the Big Bang, through nuclear reactions in a process called Big Bang nucleosynthesis. After about 20 minutes, the universe had expanded and cooled to a point at which these high-energy collisions among nucleons ended, so only the fastest and simplest reactions occurred, leaving our universe containing hydrogen and helium. The rest is traces of other elements such as lithium and the hydrogen isotope deuterium. Nucleosynthesis in stars and their explosions later produced the variety of elements and isotopes that we have today, in a process called cosmic chemical evolution. The amounts of total mass in elements heavier than hydrogen and helium (called 'metals' by astrophysicists) remains small (few percent), so that the universe still has approximately the same composition.
Stars fuse light elements to heavier ones in their cores, giving off energy in the process known as stellar nucleosynthesis. Nuclear fusion reactions create many of the lighter elements, up to and including iron and nickel in the most massive stars. Products of stellar nucleosynthesis remain trapped in stellar cores and remnants except if ejected through stellar winds and explosions. The neutron capture reactions of the r-process and s-process create heavier elements, from iron upwards.
Supernova nucleosynthesis within exploding stars is largely responsible for the elements between oxygen and rubidium: from the ejection of elements produced during stellar nucleosynthesis; through explosive nucleosynthesis during the supernova explosion; and from the r-process (absorption of multiple neutrons) during the explosion.
Neutron star mergers are a recently discovered major source of elements produced in the r-process. When two neutron stars collide, a significant amount of neutron-rich matter may be ej
Document 1:::
The amount of lithium generated in the Big Bang can be calculated. Hydrogen-1 is the most abundant nuclide, comprising roughly 92% of the ato
Document 2:::
In astrophysics, silicon burning is a very brief sequence of nuclear fusion reactions that occur in massive stars with a minimum of about 8–11 solar masses. Silicon burning is the final stage of fusion for massive stars that have run out of the fuels that power them for their long lives in the main sequence on the Hertzsprung–Russell diagram. It follows the previous stages of hydrogen, helium, carbon, neon and oxygen burning processes.
Silicon burning begins when gravitational contraction raises the star's core temperature to 2.7–3.5 billion kelvins (GK). The exact temperature depends on mass. When a star has completed the silicon-burning phase, no further fusion is possible. The star catastrophically collapses and may explode in what is known as a Type II supernova.
Nuclear fusion sequence and silicon photodisintegration
After a star completes the oxygen-burning process, its core is composed primarily of silicon and sulfur. If it has sufficiently high mass, it further contracts until its core reaches temperatures in the range of 2.7–3.5 GK (230–300 keV). At these temperatures, silicon and other elements can photodisintegrate, emitting a proton or an alpha particle. Silicon burning proceeds by photodisintegration rearrangement, which creates new elements by the alpha process, adding one of these freed alpha particles (the equivalent of a helium nucleus) per capture step in the following sequence (photoejection of alphas not shown):
28Si + 4He → 32S
32S + 4He → 36Ar
36Ar + 4He → 40Ca
40Ca + 4He → 44Ti
44Ti + 4He → 48Cr
48Cr + 4He → 52Fe
52Fe + 4He → 56Ni
Although the chain could theoretically continue, steps after nickel-56 are much less exothermic and the temperature is so high that photodisintegration prevents further progress.
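The alpha-capture ladder above lends itself to a short sketch. This is an illustrative reconstruction (the function and names are mine, not from the source), assuming each capture adds one alpha particle (Z + 2, A + 4) from silicon-28 up to nickel-56:

```python
# Illustrative sketch: each alpha capture in the silicon-burning ladder adds
# 2 protons and 4 nucleons, running from 28Si to 56Ni.
def alpha_ladder(start=(14, 28), steps=7):
    """Return the (Z, A) pairs along the alpha-capture chain."""
    z, a = start
    chain = [(z, a)]
    for _ in range(steps):
        z, a = z + 2, a + 4
        chain.append((z, a))
    return chain

SYMBOLS = {14: "Si", 16: "S", 18: "Ar", 20: "Ca", 22: "Ti", 24: "Cr", 26: "Fe", 28: "Ni"}
print(" -> ".join(f"{SYMBOLS[z]}-{a}" for z, a in alpha_ladder()))
# Si-28 -> S-32 -> Ar-36 -> Ca-40 -> Ti-44 -> Cr-48 -> Fe-52 -> Ni-56
```

The chain stops at (Z, A) = (28, 56), consistent with photodisintegration halting progress past nickel-56.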
Document 3:::
The B2FH paper was a landmark scientific paper on the origin of the chemical elements. The paper's title is Synthesis of the Elements in Stars, but it became known as B2FH from the initials of its authors: Margaret Burbidge, Geoffrey Burbidge, William A. Fowler, and Fred Hoyle. It was written from 1955 to 1956 at the University of Cambridge and Caltech, then published in Reviews of Modern Physics in 1957.
The B2FH paper reviewed stellar nucleosynthesis theory and supported it with astronomical and laboratory data. It identified nucleosynthesis processes that are responsible for producing the elements heavier than iron and explained their relative abundances. The paper became highly influential in both astronomy and nuclear physics.
Nucleosynthesis prior to 1957
Prior to the publication of the B2FH paper, George Gamow advocated a theory of the Universe in which almost all chemical elements, or equivalently atomic nuclei, were synthesized during the Big Bang. Gamow's theory (which differs from present-day Big Bang nucleosynthesis theory) would imply that the abundance of the chemical elements would remain mostly static over time. Hans Bethe and Charles L. Critchfield had shown that the conversion of hydrogen into helium by nuclear fusion could provide the energy required to power stars, by deriving the proton-proton chain (pp-chain) in 1938. Carl von Weizsäcker and Hans Bethe had independently derived the CNO cycle in 1938 and 1939, respectively. Thus, it was known by Gamow and others that the abundances of hydrogen and helium were not perfectly static. According to their view, fusion in stars would produce small amounts of helium, adding only slightly to its abundance from the Big Bang. This stellar nuclear power did not require substantial stellar nucleosynthesis. The elements from carbon upward remained a mystery.
Fred Hoyle offered a hypothesis for the origin of heavy elements. Beginning with a paper in 1946, and expanded upon in 1954, Hoyle proposed that all a
Document 4:::
The oxygen-burning process is a set of nuclear fusion reactions that take place in massive stars that have used up the lighter elements in their cores. Oxygen-burning is preceded by the neon-burning process and succeeded by the silicon-burning process. As the neon-burning process ends, the core of the star contracts and heats until it reaches the ignition temperature for oxygen burning. Oxygen burning reactions are similar to those of carbon burning; however, they must occur at higher temperatures and densities due to the larger Coulomb barrier of oxygen.
Reactions
Oxygen ignites in the temperature range of (1.5–2.6)×10⁹ K and in the density range of (2.6–6.7)×10¹² kg·m⁻³. The principal reactions are given below, where the branching ratios assume that the deuteron channel is open (at high temperatures):
16O + 16O → 28Si + 4He + 9.593 MeV (34%)
          → 31P + 1H + 7.676 MeV (56%)
          → 31S + n + 1.459 MeV (5%)
          → 30Si + 2 1H + 0.381 MeV
          → 30P + 2H − 2.409 MeV (5%)
Alternatively:
          → 32S + γ + 16.539 MeV
          → 24Mg + 2 4He − 0.393 MeV
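As a quick sanity check on the numbers above, one can compute the branch-weighted mean energy release per 16O + 16O reaction. A rough sketch only: the channel labels are my reading of the table, and the unlabelled two-proton channel and the alternative channels are omitted:

```python
# Branch-weighted mean Q-value for 16O + 16O burning, using the four channels
# quoted with branching ratios above (Q in MeV; an approximation only).
channels = {
    "28Si + 4He": (9.593, 0.34),
    "31P + p":    (7.676, 0.56),
    "31S + n":    (1.459, 0.05),
    "30P + d":    (-2.409, 0.05),  # endothermic deuteron channel
}
mean_q = sum(q * branch for q, branch in channels.values())
print(f"mean Q = {mean_q:.2f} MeV per reaction")  # about 7.51 MeV
```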
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When the hydrogen is nearly used up, the star can fuse which element into heavier elements?
A. carbon
B. helium
C. xenon
D. oxygen
Answer:
|
|
sciq-1162
|
multiple_choice
|
What piece of technology can you use to see infrared light?
|
[
"night goggles",
"light meters",
"telescope",
"microscope"
] |
A
|
Relevant Documents:
Document 0:::
The Inverness Campus is an area in Inverness, Scotland. 5.5 hectares of the site have been designated as an enterprise area for life sciences by the Scottish Government. This designation is intended to encourage research and development in the field of life sciences, by providing incentives to locate at the site.
The enterprise area is part of a larger site, over 200 acres, which will house Inverness College, Scotland's Rural College (SRUC), the University of the Highlands and Islands, a health science centre and sports and other community facilities. The purpose built research hub will provide space for up to 30 staff and researchers, allowing better collaboration.
The Highland Science Academy will be located on the site, a collaboration formed by Highland Council, employers and public bodies. The academy will be aimed towards assisting young people to gain the necessary skills to work in the energy, engineering and life sciences sectors.
History
The site was identified in 2006. Work started to develop the infrastructure on the site in early 2012. A virtual tour was made available in October 2013 to help mark Doors Open Day.
Construction had reached the halfway stage in May 2014, keeping it on track to open its doors to its first students in August 2015.
In May 2014, work was due to commence on a building designed to provide office space and laboratories as part of the campus's "life science" sector. Morrison Construction have been appointed to undertake the building work.
Scotland's Rural College (SRUC) will be able to relocate their Inverness-based activities to the Campus. SRUC's research centre for Comparative Epidemiology and Medicine, and Agricultural Business Consultancy services could co-locate with UHI where their activities have complementary themes.
By the start of 2017, there were more than 600 people working at the site.
In June 2021, a new bridge opened connecting Inverness Campus to Inverness Shopping Park. It crosses the Aberdeen
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 3:::
Multispectral imaging captures image data within specific wavelength ranges across the electromagnetic spectrum. The wavelengths may be separated by filters or detected with the use of instruments that are sensitive to particular wavelengths, including light from frequencies beyond the visible light range, i.e. infrared and ultra-violet. It can allow extraction of additional information the human eye fails to capture with its visible receptors for red, green and blue. It was originally developed for military target identification and reconnaissance. Early space-based imaging platforms incorporated multispectral imaging technology to map details of the Earth related to coastal boundaries, vegetation, and landforms. Multispectral imaging has also found use in document and painting analysis.
Multispectral imaging measures light in a small number (typically 3 to 15) of spectral bands. Hyperspectral imaging is a special case of spectral imaging where often hundreds of contiguous spectral bands are available.
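A multispectral pixel, then, is just a short vector of per-band readings, and a normalized difference of two bands is one simple way to extract contrast the eye's RGB receptors miss. An illustrative sketch (the band ordering and values are hypothetical, not from the source):

```python
# Normalized difference of two spectral bands for a single multispectral pixel.
def normalized_difference(pixel, band_a, band_b):
    a, b = pixel[band_a], pixel[band_b]
    return (a - b) / (a + b) if (a + b) else 0.0

# One pixel sampled in 5 bands (blue, green, red, near-IR, mid-IR), reflectance 0..1
pixel = [0.10, 0.15, 0.12, 0.55, 0.30]
print(round(normalized_difference(pixel, 3, 2), 3))  # (0.55-0.12)/(0.55+0.12) = 0.642
```

The result lies in [-1, 1] for non-negative readings, which makes band ratios easy to threshold or map to a color scale.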
Applications
Military target tracking
Multispectral imaging measures light emission and is often used in detecting or tracking military targets. In 2003, researchers at the United States Army Research Laboratory and the Federal Laboratory Collaborative Technology Alliance reported a dual band multispectral imaging focal plane array (FPA). This FPA allowed researchers to look at two infrared (IR) planes at the same time. Because mid-wave infrared (MWIR) and long wave infrared (LWIR) technologies measure radiation inherent to the object and require no external light source, they also are referred to as thermal imaging methods.
The brightness of the image produced by a thermal imager depends on the objects emissivity and temperature. Every material has an infrared signature that aids in the identification of the object. These signatures are less pronounced in hyperspectral systems (which image in many more bands than multispectral systems) and when exposed to wi
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
A. increases
B. decreases
C. stays the same
D. Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What piece of technology can you use to see infrared light?
A. night goggles
B. light meters
C. telescope
D. microscope
Answer:
|
|
ai2_arc-109
|
multiple_choice
|
Farmers plant fruit trees in an area that was once a grassy meadow. Which will most likely happen to the rabbits living in the meadow?
|
[
"They will learn to eat fruit.",
"They will learn to climb trees.",
"The number of their young will increase.",
"The size of their population will decrease."
] |
D
|
Relevant Documents:
Document 0:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591.
On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
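The scoring rule above is simple enough to state as a formula. A minimal sketch (the function name and example numbers are mine, not from the source):

```python
# Raw score per the rule above: +1 per correct, -1/4 per incorrect, 0 per blank.
def raw_score(correct, incorrect, blank, total=80):
    assert correct + incorrect + blank == total, "counts must cover all questions"
    return correct - incorrect / 4

print(raw_score(correct=60, incorrect=12, blank=8))  # 60 - 3 = 57.0
```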
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
A. increases
B. decreases
C. stays the same
D. Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 4:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Farmers plant fruit trees in an area that was once a grassy meadow. Which will most likely happen to the rabbits living in the meadow?
A. They will learn to eat fruit.
B. They will learn to climb trees.
C. The number of their young will increase.
D. The size of their population will decrease.
Answer:
|
|
sciq-2922
|
multiple_choice
|
Which blood vessels is oxygen transferred through?
|
[
"arteries",
"capillaries",
"veins",
"cilia"
] |
B
|
Relevant Documents:
Document 0:::
Veins () are blood vessels in the circulatory system of humans and most other animals that carry blood toward the heart. Most veins carry deoxygenated blood from the tissues back to the heart; exceptions are those of the pulmonary and fetal circulations which carry oxygenated blood to the heart. In the systemic circulation arteries carry oxygenated blood away from the heart, and veins return deoxygenated blood to the heart, in the deep veins.
There are three sizes of veins, large, medium, and small. Smaller veins are called venules, and the smallest the post-capillary venules are microscopic that make up the veins of the microcirculation. Veins are often closer to the skin than arteries.
Veins have less smooth muscle and connective tissue and wider internal diameters than arteries. Because of their thinner walls and wider lumens they are able to expand and hold more blood. This greater capacity gives them the term of capacitance vessels. At any time, nearly 70% of the total volume of blood in the human body is in the veins. In medium and large sized veins the flow of blood is maintained by one-way (unidirectional) venous valves to prevent backflow. In the lower limbs this is also aided by muscle pumps, also known as venous pumps that exert pressure on intramuscular veins when they contract and drive blood back to the heart.
Structure
There are three sizes of vein, large, medium, and small. Smaller veins are called venules. The smallest veins are the post-capillary venules. Veins have a similar three-layered structure to arteries. The layers known as tunicae have a concentric arrangement that forms the wall of the vessel. The outer layer, is a thick layer of connective tissue called the tunica externa or adventitia; this layer is absent in the post-capillary venules. The middle layer, consists of bands of smooth muscle and is known as the tunica media. The inner layer, is a thin lining of endothelium known as the tunica intima. The tunica media in the veins is mu
Document 1:::
Great vessels are the large vessels that bring blood to and from the heart. These are:
Superior vena cava
Inferior vena cava
Pulmonary arteries
Pulmonary veins
Aorta
Transposition of the great vessels is a group of congenital heart defects involving an abnormal spatial arrangement of any of the great vessels.
Document 2:::
The pulmonary veins are the veins that transfer oxygenated blood from the lungs to the heart. The largest pulmonary veins are the four main pulmonary veins, two from each lung that drain into the left atrium of the heart. The pulmonary veins are part of the pulmonary circulation.
Structure
There are four main pulmonary veins, two from each lung – an inferior and a superior main vein, emerging from each hilum. The main pulmonary veins receive blood from three or four feeding veins in each lung, and drain into the left atrium. The peripheral feeding veins do not follow the bronchial tree. They run between the pulmonary segments from which they drain the blood.
At the root of the lung, the right superior pulmonary vein lies in front of and a little below the pulmonary artery; the inferior is situated at the lowest part of the lung hilum. Behind the pulmonary artery is the bronchus. The right main pulmonary veins (contains oxygenated blood) pass behind the right atrium and superior vena cava; the left in front of the descending thoracic aorta.
Variation
Occasionally the three lobar veins on the right side remain separate, and not infrequently the two left lobar veins end by a common opening into the left atrium. Therefore, the number of pulmonary veins opening into the left atrium can vary between three and five in the healthy population.
The two left lobar veins may be united as a single pulmonary vein in about 25% of people; the two right veins may be united in about 3%.
Function
The pulmonary veins play an essential role in respiration, by receiving blood that has been oxygenated in the alveoli and returning it to the left atrium.
Clinical significance
As part of the pulmonary circulation they carry oxygenated blood back to the heart, as opposed to the veins of the systemic circulation which carry deoxygenated blood.
On chest X-ray, the diameters of the pulmonary veins increase from upper to lower lobes, from 3 mm at the first intercostal space, to 6 mm jus
Document 3:::
Pulmocutaneous circulation is part of the amphibian circulatory system. It is responsible for directing blood to the skin and lungs. Blood flows from the ventricle into an artery called the conus arteriosus and from there into either the left or right truncus arteriosus. They in turn each split the ventricle's output into the pulmocutaneous circuit and the systemic circuit.
See also
Double circulatory system
Document 4:::
The blood circulatory system is a system of organs that includes the heart, blood vessels, and blood which is circulated throughout the entire body of a human or other vertebrate. It includes the cardiovascular system, or vascular system, that consists of the heart and blood vessels (from Greek kardia meaning heart, and from Latin vascula meaning vessels). The circulatory system has two divisions, a systemic circulation or circuit, and a pulmonary circulation or circuit. Some sources use the terms cardiovascular system and vascular system interchangeably with the circulatory system.
The network of blood vessels comprises the great vessels of the heart, including large elastic arteries and large veins; other arteries, smaller arterioles, capillaries that join with venules (small veins), and other veins. The circulatory system is closed in vertebrates, which means that the blood never leaves the network of blood vessels. Some invertebrates such as arthropods have an open circulatory system. Diploblasts such as sponges and comb jellies lack a circulatory system.
Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets; it is circulated around the body carrying oxygen and nutrients to the tissues and collecting and disposing of waste materials. Circulated nutrients include proteins and minerals and other components include hemoglobin, hormones, and gases such as oxygen and carbon dioxide. These substances provide nourishment, help the immune system to fight diseases, and help maintain homeostasis by stabilizing temperature and natural pH.
In vertebrates, the lymphatic system is complementary to the circulatory system. The lymphatic system carries excess plasma (filtered from the circulatory system capillaries as interstitial fluid between cells) away from the body tissues via accessory routes that return excess fluid back to blood circulation as lymph. The lymphatic system is a subsystem that is essential for the functioning of the bloo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which blood vessels is oxygen transferred through?
A. arteries
B. capillaries
C. veins
D. cilia
Answer:
|
|
sciq-6145
|
multiple_choice
|
Lysosomes have what type of enzymes that break down old molecules into parts that can be recycled?
|
[
"bacterial",
"probiotics",
"digestive",
"corrosive"
] |
C
|
Relevant Documents:
Document 0:::
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States.
Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is suspended with the message:
"Please check back with us in 2017".
External links
MicrobeLibrary
Microbiology
Document 1:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind – neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, it has provided information on certain diseases which has overall aided in the understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
Document 2:::
Biochemical engineering, also known as bioprocess engineering, is a field of study with roots stemming from chemical engineering and biological engineering. It mainly deals with the design, construction, and advancement of unit processes that involve biological organisms (such as fermentation) or organic molecules (often enzymes) and has various applications in areas of interest such as biofuels, food, pharmaceuticals, biotechnology, and water treatment processes. The role of a biochemical engineer is to take findings developed by biologists and chemists in a laboratory and translate that to a large-scale manufacturing process.
History
For hundreds of years, humans have made use of the chemical reactions of biological organisms in order to create goods. In the mid-1800s, Louis Pasteur was one of the first people to look into the role of these organisms when he researched fermentation. His work also contributed to the use of pasteurization, which is still used to this day. By the early 1900s, the use of microorganisms had expanded, and was used to make industrial products. Up to this point, biochemical engineering hadn't developed as a field yet. It wasn't until 1928 when Alexander Fleming discovered penicillin that the field of biochemical engineering was established. After this discovery, samples were gathered from around the world in order to continue research into the characteristics of microbes from places such as soils, gardens, forests, rivers, and streams. Today, biochemical engineers can be found working in a variety of industries, from food to pharmaceuticals. This is due to the increasing need for efficiency and production which requires knowledge of how biological systems and chemical reactions interact with each other and how they can be used to meet these needs.
Education
Biochemical engineering is not a major offered by most universities and is instead an area of interest under the chemical engineering major in most cases. The following universiti
Document 3:::
Every organism requires energy to be active. However, to obtain energy from its outside environment, cells must not only retrieve molecules from their surroundings but also break them down. This process is known as intracellular digestion. In its broadest sense, intracellular digestion is the breakdown of substances within the cytoplasm of a cell. In detail, a phagocyte's role is to obtain food particles and digest them in a vacuole. For example, following phagocytosis, the ingested particle (or phagosome) fuses with a lysosome containing hydrolytic enzymes to form a phagolysosome; the pathogens or food particles within the phagosome are then digested by the lysosome's enzymes.
Intracellular digestion can also refer to the process in which animals that lack a digestive tract bring food items into the cell for the purposes of digestion for nutritional needs. This kind of intracellular digestion occurs in many unicellular protozoans, in Pycnogonida, in some molluscs, Cnidaria and Porifera. There is another type of digestion, called extracellular digestion. In amphioxus, digestion is both extracellular and intracellular.
Function
Intracellular digestion is divided into heterophagic digestion and autophagic digestion. These two types take place in the lysosome and they both have very specific functions. Heterophagic intracellular digestion has an important job which is to break down all molecules that are brought into a cell by endocytosis. The degraded molecules need to be delivered to the cytoplasm; however, this will not be possible if the molecules are not hydrolyzed in the lysosome. Autophagic intracellular digestion is processed in the cell, which means it digests the internal molecules.
Autophagy
Generally, autophagy includes three small branches, which are macroautophagy, microautophagy, and chaperone-mediated autophagy.
Occurrence
Most organisms that use intracellular digestion belong to Kingdom Protista, such as amoeba and paramecium.
Amoeba
Amoeba u
Document 4:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, double stranded macromolecule that carries the hereditary information of the cell and found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Lysosomes have what type of enzymes that break down old molecules into parts that can be recycled?
A. bacterial
B. probiotics
C. digestive
D. corrosive
Answer:
|
|
sciq-6766
|
multiple_choice
|
What is the term for the rate at which velocity changes?
|
[
"stability",
"compression",
"acceleration",
"transmission"
] |
C
|
Relevant Documents:
Document 0:::
Velocity is the speed in combination with the direction of motion of an object. Velocity is a fundamental concept in kinematics, the branch of classical mechanics that describes the motion of bodies.
Velocity is a physical vector quantity: both magnitude and direction are needed to define it. The scalar absolute value (magnitude) of velocity is called speed, being a coherent derived unit whose quantity is measured in the SI (metric system) as metres per second (m/s or m⋅s⁻¹). For example, "5 metres per second" is a scalar, whereas "5 metres per second east" is a vector. If there is a change in speed, direction or both, then the object is said to be undergoing an acceleration.
Constant velocity vs acceleration
To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion in a straight path thus, a constant velocity means motion in a straight line at a constant speed.
For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed, but does not have a constant velocity because its direction changes. Hence, the car is considered to be undergoing an acceleration.
Difference between speed and velocity
While the terms speed and velocity are often colloquially used interchangeably to connote how fast an object is moving, in scientific terms they are different. Speed, the scalar magnitude of a velocity vector, denotes only how fast an object is moving, while velocity indicates both an object's speed and direction.
Equation of motion
Average velocity
Velocity is defined as the rate of change of position with respect to time, which may also be referred to as the instantaneous velocity to emphasize the distinction from the average velocity. In some applications the average velocity of an object might be needed, that is to say, the constant velocity that would provide the same resultant displacement as a variable velocity in the same time interval, , over some
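The distinction the passage draws between average velocity and average speed can be sketched with a small example (the helper functions are hypothetical, not from the source):

```python
# Sketch of the distinction above: average velocity uses net
# displacement, while average speed uses total path length.
def average_velocity(x_start, x_end, t_start, t_end):
    """Net displacement divided by the elapsed time (m/s)."""
    return (x_end - x_start) / (t_end - t_start)

def average_speed(path_lengths, t_start, t_end):
    """Total distance travelled divided by the elapsed time (m/s)."""
    return sum(path_lengths) / (t_end - t_start)

# A runner jogs 30 m east and then 30 m back west in 20 s:
v_avg = average_velocity(0.0, 0.0, 0.0, 20.0)    # net displacement is 0 m
s_avg = average_speed([30.0, 30.0], 0.0, 20.0)   # total path length is 60 m
print(v_avg, s_avg)  # 0.0 3.0
```

Because the runner returns to the start, the average velocity is zero even though the average speed is 3 m/s, which is exactly why the two quantities must not be conflated.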
Document 1:::
Fluid kinematics is a term from fluid mechanics, usually referring to a mere mathematical description or specification of a flow field, divorced from any account of the forces and conditions that might actually create such a flow. The term fluids includes liquids or gases, but also may refer to materials that behave with fluid-like properties, including crowds of people or large numbers of grains if those are describable approximately under the continuum hypothesis as used in continuum mechanics.
Unsteady and convective effects
The material derivative contains two types of terms: those involving the time derivative and those involving spatial derivatives. The time derivative portion is denoted as the local derivative, and represents the effects of unsteady flow. The local derivative occurs during unsteady flow, and becomes zero for steady flow.
The portion of the material derivative represented by the spatial derivatives is called the convective derivative. It accounts for the variation in fluid property, be it velocity or temperature for example, due to the motion of a fluid particle in space where its values are different.
Acceleration field
The acceleration of a particle is the time rate of change of its velocity. Using an Eulerian description for velocity, the velocity field V = V(x,y,z,t) and employing the material derivative, we obtain the acceleration field.
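The local and convective contributions described above combine in the material derivative; in standard notation, consistent with the Eulerian velocity field V = V(x,y,z,t) named in the passage, the acceleration field reads:

```latex
\mathbf{a}(x,y,z,t)
  \;=\; \frac{D\mathbf{V}}{Dt}
  \;=\; \underbrace{\frac{\partial \mathbf{V}}{\partial t}}_{\text{local (unsteady)}}
  \;+\; \underbrace{(\mathbf{V}\cdot\nabla)\,\mathbf{V}}_{\text{convective}}
```

For steady flow the local term vanishes, yet the convective term can still produce a nonzero acceleration, for example where streamlines converge through a nozzle.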
Document 2:::
Linear motion, also called rectilinear motion, is one-dimensional motion along a straight line, and can therefore be described mathematically using only one spatial dimension. The linear motion can be of two types: uniform linear motion, with constant velocity (zero acceleration); and non-uniform linear motion, with variable velocity (non-zero acceleration). The motion of a particle (a point-like object) along a line can be described by its position , which varies with (time). An example of linear motion is an athlete running a 100-meter dash along a straight track.
Linear motion is the most basic of all motion. According to Newton's first law of motion, objects that do not experience any net force will continue to move in a straight line with a constant velocity until they are subjected to a net force. Under everyday circumstances, external forces such as gravity and friction can cause an object to change the direction of its motion, so that its motion cannot be described as linear.
One may compare linear motion to general motion. In general motion, a particle's position and velocity are described by vectors, which have a magnitude and direction. In linear motion, the directions of all the vectors describing the system are equal and constant which means the objects move along the same axis and do not change direction. The analysis of such systems may therefore be simplified by neglecting the direction components of the vectors involved and dealing only with the magnitude.
Background
Displacement
The motion in which all the particles of a body move through the same distance in the same time is called translatory motion. There are two types of translatory motions: rectilinear motion; curvilinear motion. Since linear motion is a motion in a single dimension, the distance traveled by an object in particular direction is the same as displacement. The SI unit of displacement is the metre. If is the initial position of an object and is the final position, then mat
Document 3:::
This glossary of mechanical engineering terms pertains specifically to mechanical engineering and its sub-disciplines. For a broad overview of engineering, see glossary of engineering.
A
Abrasion – is the process of scuffing, scratching, wearing down, marring, or rubbing away. It can be intentionally imposed in a controlled process using an abrasive. Abrasion can be an undesirable effect of exposure to normal use or exposure to the elements.
Absolute zero – is the lowest possible temperature of a system, defined as zero kelvin or −273.15 °C. No experiment has yet measured a temperature of absolute zero.
Accelerated life testing – is the process of testing a product by subjecting it to conditions (stress, strain, temperatures, voltage, vibration rate, pressure etc.) in excess of its normal service parameters in an effort to uncover faults and potential modes of failure in a short amount of time. By analyzing the product's response to such tests, engineers can make predictions about the service life and maintenance intervals of a product.
Acceleration – In physics, acceleration is the rate of change of velocity of an object with respect to time. An object's acceleration is the net result of any and all forces acting on the object, as described by Newton's Second Law. The SI unit for acceleration is metre per second squared Accelerations are vector quantities (they have magnitude and direction) and add according to the parallelogram law. As a vector, the calculated net force is equal to the product of the object's mass (a scalar quantity) and its acceleration.
Accelerometer – is a device that measures proper acceleration. Proper acceleration, being
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for the rate at which velocity changes?
A. stability
B. compression
C. acceleration
D. transmission
Answer:
|
|
sciq-2152
|
multiple_choice
|
What is the conversion of metals from their ores to more useful forms called?
|
[
"thermodynamics",
"nanotechnology",
"crystallography",
"metallurgy"
] |
D
|
Relevant Documents:
Document 0:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just as in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 1:::
Material is a substance or mixture of substances that constitutes an object. Materials can be pure or impure, living or non-living matter. Materials can be classified on the basis of their physical and chemical properties, or on their geological origin or biological function. Materials science is the study of materials, their properties and their applications.
Raw materials can be processed in different ways to influence their properties, by purification, shaping or the introduction of other materials. New materials can be produced from raw materials by synthesis.
In industry, materials are inputs to manufacturing processes to produce products or more complex materials.
Historical elements
Materials chart the history of humanity. The system of the three prehistoric ages (Stone Age, Bronze Age, Iron Age) were succeeded by historical ages: steel age in the 19th century, polymer age in the middle of the following century (plastic age) and silicon age in the second half of the 20th century.
Classification by use
Materials can be broadly categorized in terms of their use, for example:
Building materials are used for construction
Building insulation materials are used to retain heat within buildings
Refractory materials are used for high-temperature applications
Nuclear materials are used for nuclear power and weapons
Aerospace materials are used in aircraft and other aerospace applications
Biomaterials are used for applications interacting with living systems
Material selection is a process to determine which material should be used for a given application.
Classification by structure
The relevant structure of materials has a different length scale depending on the material. The structure and composition of a material can be determined by microscopy or spectroscopy.
Microstructure
In engineering, materials can be categorised according to their microscopic structure:
Plastics: a wide range of synthetic or semi-synthetic materials that use polymers as a main ingred
Document 2:::
Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary educ
Document 3:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the conversion of metals from their ores to more useful forms called?
A. thermodynamics
B. nanotechnology
C. crystallography
D. metallurgy
Answer:
|
|
sciq-132
|
multiple_choice
|
What term is used to describe a collection of molecules surrounded by a phospholipid bilayer that is capable of reproducing itself?
|
[
"atom",
"cell",
"organism",
"proteins"
] |
B
|
Relevant Documents:
Document 0:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, double stranded macromolecule that carries the hereditary information of the cell and found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 1:::
A bilayer is a double layer of closely packed atoms or molecules.
The properties of bilayers are often studied in condensed matter physics, particularly in the context of semiconductor devices, where two distinct materials are united to form junctions, such as p–n junctions, Schottky junctions, etc. Layered materials, such as graphene, boron nitride, or transition metal dichalcogenides, have unique electronic properties as a bilayer system and are an active area of current research.
In biology a common example is the lipid bilayer, which describes the structure of multiple organic structures, such as the membrane of a cell.
See also
Monolayer
Non-carbon nanotube
Semiconductor
Thin film
Document 2:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than those of the later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA, as well as some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
Document 3:::
Like the nucleus, whether to include the vacuole in the protoplasm concept is controversial.
Terminology
Besides "protoplasm", many other related terms and distinctions were used for the cell contents over time. These were as follows:
Urschleim (Oken, 1802, 1809),
Protoplasma (Purkinje, 1840, von Mohl, 1846),
Primordialschlauch (primordial utricle, von Mohl, 1846),
sarcode (Dujardin, 1835, 1841),
Cytoplasma (Kölliker, 1863),
Hautschicht/Körnerschicht (ectoplasm/endoplasm, Pringsheim, 1854; Hofmeister, 1867),
Grundsubstanz (ground substance, Cienkowski, 1863),
metaplasm/protoplasm (Hanstein, 1868),
deutoplasm/protoplasm (van Beneden, 1870),
bioplasm (Beale, 1872),
paraplasm/protoplasm (Kupffer, 1875),
inter-filar substance theory (Velten, 1876)
Hyaloplasma (Pfeffer, 1877),
Protoplast (Hanstein, 1880),
Enchylema/Hyaloplasma (Hanstein, 1880),
Kleinkörperchen or Mikrosomen (small bodies or microsomes, Hanstein, 1882),
paramitome (Flemming, 1882),
Idioplasma (Nageli, 1884),
Zwischensu
Document 4:::
A protocell (or protobiont) is a self-organized, endogenously ordered, spherical collection of lipids proposed as a stepping stone toward the origin of life. A central question in evolution is how simple protocells first arose and how they could differ in reproductive output, thus enabling the accumulation of novel biological emergences over time, i.e. biological evolution. Although a functional protocell has not yet been achieved in a laboratory setting, the goal to understand the process appears well within reach.
Overview
Compartmentalization was important in the origins of life. Membranes form enclosed compartments that are separate from the external environment, thus providing the cell with functionally specialized aqueous spaces. As the lipid bilayer of membranes is impermeable to most hydrophilic molecules (dissolved by water), cells have membrane transport-systems that achieve the import of nutritive molecules as well as the export of waste. It is very challenging to construct protocells from molecular assemblies. An important step in this challenge is the achievement of vesicle dynamics that are relevant to cellular functions, such as membrane trafficking and self-reproduction, using amphiphilic molecules. On the primitive Earth, numerous chemical reactions of organic compounds produced the ingredients of life. Of these substances, amphiphilic molecules might be the first player in the evolution from molecular assembly to cellular life. A step from vesicle toward protocell might be to develop self-reproducing vesicles coupled with the metabolic system.
Another approach to the notion of a protocell concerns the term "chemoton" (short for 'chemical automaton') which refers to an abstract model for the fundamental unit of life introduced by Hungarian theoretical biologist Tibor Gánti. It is the oldest known computational abstract of a protocell. Gánti conceived the basic idea in 1952 and formulated the concept in 1971 in his book The Principles of Life (orig
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What term is used to describe a collection of molecules surrounded by a phospholipid bilayer that is capable of reproducing itself?
A. atom
B. cell
C. organism
D. proteins
Answer:
|
|
sciq-2639
|
multiple_choice
|
What do you call any device that makes work easier by changing a force?
|
[
"technology",
"machine",
"battery",
"invention"
] |
B
|
Relevant Documents:
Document 0:::
Machine element or hardware refers to an elementary component of a machine. These elements consist of three basic types:
structural components such as frame members, bearings, axles, splines, fasteners, seals, and lubricants,
mechanisms that control movement in various ways such as gear trains, belt or chain drives, linkages, cam and follower systems, including brakes and clutches, and
control components such as buttons, switches, indicators, sensors, actuators and computer controllers.
While generally not considered to be a machine element, the shape, texture and color of covers are an important part of a machine that provide a styling and operational interface between the mechanical components of a machine and its users.
Machine elements are basic mechanical parts and features used as the building blocks of most machines. Most are standardized to common sizes, but custom sizes are also common for specialized applications.
Machine elements may be features of a part (such as screw threads or integral plain bearings) or they may be discrete parts in and of themselves such as wheels, axles, pulleys, rolling-element bearings, or gears. All of the simple machines may be described as machine elements, and many machine elements incorporate concepts of one or more simple machines. For example, a leadscrew incorporates a screw thread, which is an inclined plane wrapped around a cylinder.
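The leadscrew example above (a screw thread as an inclined plane wrapped around a cylinder) lends itself to a short sketch of the ideal mechanical advantage of a screw; the handle radius and lead below are hypothetical illustrative values, not taken from the article:

```python
import math

def screw_mechanical_advantage(handle_radius: float, lead: float) -> float:
    """Ideal screw: per turn, the effort moves through the handle's
    circumference (2 * pi * r) while the load advances by the lead."""
    return 2 * math.pi * handle_radius / lead

# Hypothetical leadscrew: 0.10 m handle radius, 5 mm (0.005 m) lead.
ma = screw_mechanical_advantage(handle_radius=0.10, lead=0.005)
print(round(ma, 2))  # 125.66
```

Real screws fall well short of this ideal figure because thread friction dissipates much of the input work.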
Many mechanical design, invention, and engineering tasks involve a knowledge of various machine elements and an intelligent and creative combining of these elements into a component or assembly that fills a need (serves an application).
Structural elements
Beams,
Struts,
Bearings,
Fasteners
Keys,
Splines,
Cotter pin,
Seals
Machine guardings
Mechanical elements
Engine,
Electric motor,
Actuator,
Shafts,
Couplings
Belt,
Chain,
Cable drives,
Gear train,
Clutch,
Brake,
Flywheel,
Cam,
follower systems,
Linkage,
Simple machine
Types
Shafts
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
A machine is a physical system using power to apply forces and control movement to perform an action. The term is commonly applied to artificial devices, such as those employing engines or motors, but also to natural biological macromolecules, such as molecular machines. Machines can be driven by animals and people, by natural forces such as wind and water, and by chemical, thermal, or electrical power, and include a system of mechanisms that shape the actuator input to achieve a specific application of output forces and movement. They can also include computers and sensors that monitor performance and plan movement, often called mechanical systems.
Renaissance natural philosophers identified six simple machines which were the elementary devices that put a load into motion, and calculated the ratio of output force to input force, known today as mechanical advantage.
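The mechanical advantage described above is simply the ratio of output force to input force; the sketch below illustrates it for an ideal lever (the forces and arm lengths are made-up illustrative values, not taken from the article):

```python
def mechanical_advantage(output_force: float, input_force: float) -> float:
    """Mechanical advantage: ratio of output force to input force."""
    return output_force / input_force

def lever_output_force(input_force: float, effort_arm: float, load_arm: float) -> float:
    """Ideal lever: balancing moments about the fulcrum gives
    F_out * load_arm = F_in * effort_arm."""
    return input_force * effort_arm / load_arm

# Hypothetical lever: 10 N of effort applied 2.0 m from the fulcrum,
# lifting a load placed 0.5 m from the fulcrum on the other side.
f_out = lever_output_force(10.0, effort_arm=2.0, load_arm=0.5)
ma = mechanical_advantage(f_out, 10.0)
print(f_out, ma)  # 40.0 4.0
```

The same force ratio can be read directly from the arm lengths (2.0 / 0.5 = 4), which is why simple machines are characterized by geometry alone in the ideal, frictionless case.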
Modern machines are complex systems that consist of structural elements, mechanisms and control components and include interfaces for convenient use. Examples include: a wide range of vehicles, such as trains, automobiles, boats and airplanes; appliances in the home and office, including computers, building air handling and water handling systems; as well as farm machinery, machine tools and factory automation systems and robots.
Etymology
The English word machine comes through Middle French from Latin machina, which in turn derives from the Greek μηχανή (mēchanē) (Doric μαχανά, Ionic μηχανή, 'contrivance, machine, engine', a derivation from μῆχος (mēchos) 'means, expedient, remedy'). The word mechanical (Greek: μηχανικός) comes from the same Greek roots. A wider meaning of 'fabric, structure' is found in classical Latin, but not in Greek usage. This meaning is found in late medieval French, and is adopted from the French into English in the mid-16th century.
In the 17th century, the word machine could also mean a scheme or plot, a meaning now expressed by the derived machination. The modern meaning develops out of specialized application of the term to st
Document 3:::
A machine tool is a machine for handling or machining metal or other rigid materials, usually by cutting, boring, grinding, shearing, or other forms of deformation. Machine tools employ some sort of tool that does the cutting or shaping. All machine tools have some means of constraining the workpiece and provide a guided movement of the parts of the machine. Thus, the relative movement between the workpiece and the cutting tool (which is called the toolpath) is controlled or constrained by the machine to at least some extent, rather than being entirely "offhand" or "freehand". It is a power-driven metal-cutting machine that manages the needed relative motion between the cutting tool and the workpiece, changing the size and shape of the workpiece material.
The precise definition of the term machine tool varies among users, as discussed below. While all machine tools are "machines that help people to make things", not all factory machines are machine tools.
Today machine tools are typically powered other than by the human muscle (e.g., electrically, hydraulically, or via line shaft), used to make manufactured parts (components) in various ways that include cutting or certain other kinds of deformation.
With their inherent precision, machine tools enabled the economical production of interchangeable parts.
Nomenclature and key concepts, interrelated
Many historians of technology consider that true machine tools were born when the toolpath first became guided by the machine itself in some way, at least to some extent, so that direct, freehand human guidance of the toolpath (with hands, feet, or mouth) was no longer the only guidance used in the cutting or forming process. In this view of the definition, the term, arising at a time when all tools up till then had been hand tools, simply provided a label for "tools that were machines instead of hand tools". Early lathes, those prior to the late medieval period, and modern woodworking lathes and potter's wheels m
Document 4:::
In electrical engineering, electric machine is a general term for machines using electromagnetic forces, such as electric motors, electric generators, and others. They are electromechanical energy converters: an electric motor converts electricity to mechanical power while an electric generator converts mechanical power to electricity. The moving parts in a machine can be rotating (rotating machines) or linear (linear machines). Besides motors and generators, a third category often included is transformers, which although they do not have any moving parts are also energy converters, changing the voltage level of an alternating current.
Electric machines, in the form of synchronous and induction generators, produce about 95% of all electric power on Earth (as of early 2020s), and in the form of electric motors consume approximately 60% of all electric power produced. Electric machines were developed beginning in the mid 19th century and since that time have been a ubiquitous component of the infrastructure. Developing more efficient electric machine technology is crucial to any global conservation, green energy, or alternative energy strategy.
Generator
An electric generator is a device that converts mechanical energy to electrical energy. A generator forces electrons to flow through an external electrical circuit. It is somewhat analogous to a water pump, which creates a flow of water but does not create the water inside. The source of mechanical energy, the prime mover, may be a reciprocating or turbine steam engine, water falling through a turbine or waterwheel, an internal combustion engine, a wind turbine, a hand crank, compressed air or any other source of mechanical energy.
The two main parts of an electrical machine can be described in either mechanical or electrical terms. In mechanical terms, the rotor is the rotating part, and the stator is the stationary part of an electrical machine. In electrical terms, the armature is the power-producing compo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do you call any device that makes work easier by changing a force?
A. technology
B. machine
C. battery
D. invention
Answer:
|
|
sciq-8231
|
multiple_choice
|
Tar sands are rocky materials mixed with what?
|
[
"coal",
"very thick oil",
"magma",
"shale"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The Physics of Blown Sand and Desert Dunes is a scientific book written by Ralph A. Bagnold. The book laid the foundations of the scientific investigation of the transport of sand by wind. It also discusses the formation and movement of sand dunes in the Libyan Desert. During his expeditions into the Libyan Desert, Bagnold had been fascinated by the shapes of the sand dunes, and after returning to England he built a wind tunnel and conducted the experiments which are the basis of the book.
Bagnold finished writing the book in 1939, and it was first published on 26 June 1941. A reprinted version, with minor revisions by Bagnold, was published by Chapman and Hall in 1953, and reprinted again in 1971. The book was reissued by Dover Publications in 2005.
The book explores the movement of sand in desert environments, with a particular emphasis on how wind affects the formation and movement of dunes and ripples. Bagnold's interest in this subject was spurred by his extensive desert expeditions, during which he observed various sand storms. One pivotal observation was that the movement of sand, unlike that of dust, predominantly occurs near the ground, within a height of one metre, and was less influenced by large-scale eddy currents in the air.
The book emphasises the feasibility of replicating these natural phenomena under controlled conditions in a laboratory. By using a wind tunnel, Bagnold sought to gain a deeper understanding of the physics governing the interaction between airstreams and sand grains, and vice versa. His aim was to ensure that findings from controlled experiments mirrored real-world conditions, with verifications of these laboratory results conducted through field observations in the Libyan Desert in the late 1930s.
Bagnold delineates his research into two distinct stages. The first, which constitutes the primary focus of the book, investigates the dynamics of sand movement across mostly flat terrains. This includes understanding how sand is l
Document 2:::
Desert varnish or rock varnish is an orange-yellow to black coating found on exposed rock surfaces in arid environments. Desert varnish is approximately one micrometer thick and exhibits nanometer-scale layering. Rock rust and desert patina are other terms which are also used for the condition, but less often.
Formation
Desert varnish forms only on physically stable rock surfaces that are no longer subject to frequent precipitation, fracturing or wind abrasion. The varnish is primarily composed of particles of clay along with oxides of iron and manganese. There is also a host of trace elements and almost always some organic matter. The color of the varnish varies from shades of brown to black.
It has been suggested that desert varnish should be investigated as a potential candidate for a "shadow biosphere". However, a 2008 microscopy study posited that desert varnish has already been reproduced with chemistry not involving life in the lab, and that the main component is actually silica and not clay as previously thought. The study notes that desert varnish is an excellent fossilizer for microbes and indicator of water. Desert varnish appears to have been observed by rovers on Mars, and if examined may contain fossilized life from Mars's wet period.
Composition
Originally scientists thought that the varnish was made from substances drawn out of the rocks it coats. Microscopic and microchemical observations, however, show that a major part of varnish is clay, which could only arrive by wind. Clay, then, acts as a substrate to catch additional substances that chemically react together when the rock reaches high temperatures in the desert sun. Wetting by dew is also important in the process.
An important characteristic of black desert varnish is that it has an unusually high concentration of manganese. Manganese is relatively rare in the Earth's crust, making up only 0.12% of its weight. In black desert varnish, however, manganese is 50 to 60 times more abundan
Document 3:::
The Géotechnique lecture is a biennial lecture on the topic of soil mechanics, organised by the British Geotechnical Association and named after its major scientific journal Géotechnique.
This should not be confused with the annual BGA Rankine Lecture.
List of Géotechnique Lecturers
See also
Named lectures
Rankine Lecture
Terzaghi Lecture
External links
ICE Géotechnique journal
British Geotechnical Association
Document 4:::
In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects.
Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting.
Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete.
Study
Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Tar sands are rocky materials mixed with what?
A. coal
B. very thick oil
C. magma
D. shale
Answer:
|