id (stringlengths 6–15) | question_type (stringclasses, 1 value) | question (stringlengths 15–683) | choices (listlengths 4–4) | answer (stringclasses, 5 values) | explanation (stringclasses, 481 values) | prompt (stringlengths 1.75k–10.9k) |
---|---|---|---|---|---|---|
sciq-10409
|
multiple_choice
|
What is it called when your joints start to wear out and they become stiff and painful?
|
[
"endometriosis",
"tendonitis",
"arthritis",
"adenitis"
] |
C
|
Relevant Documents:
Document 0:::
Arthritis of the knee is typically a particularly debilitating form of arthritis. The knee may become affected by almost any form of arthritis.
The word arthritis refers to inflammation of the joints. Types of arthritis include those related to wear and tear of cartilage, such as osteoarthritis, to those associated with inflammation resulting from an overactive immune system (such as rheumatoid arthritis).
Causes
It is not always certain why arthritis of the knee develops. The knee may become affected by almost any form of arthritis, including those related to mechanical damage of the structures of the knee (osteoarthritis and post-traumatic arthritis), various autoimmune forms of arthritis (including rheumatoid arthritis, juvenile arthritis, SLE-related arthritis, psoriatic arthritis, and ankylosing spondylitis), arthritis due to infectious causes (including Lyme disease-related arthritis), gouty arthritis, or reactive arthritis.
Osteoarthritis of the knee
The knee is one of the joints most commonly affected by osteoarthritis. Cartilage in the knee may begin to break down after sustained stress, leaving the bones of the knee rubbing against each other and resulting in osteoarthritis. Nearly a third of US citizens are affected by osteoarthritis of the knee by age 70.
Obesity is a known and very significant risk factor for the development of osteoarthritis. Risk increases proportionally to body weight. Obesity contributes to OA development not only by increasing the mechanical stress exerted upon the knees when standing, but also by increasing the production of compounds that may cause joint inflammation.
Parity is associated with an increased risk of knee OA and likelihood of knee replacement. The risk increases in proportion to the number of children the woman has birthed. This may be due to weight gain after pregnancy, or increased body weight and consequent joint stress during pregnancy.
Flat feet are a significant risk factor for the development
Document 1:::
Epicondylitis is the inflammation of an epicondyle or of adjacent tissues. Epicondyles are on the medial and lateral aspects of the elbow, consisting of the two bony prominences at the distal end of the humerus. These bony projections serve as the attachment point for the forearm musculature. Inflammation of the tendons and muscles at these attachment points can lead to medial and/or lateral epicondylitis. This can occur through a range of factors that overuse the muscles that attach to the epicondyles, such as sports or job-related duties that increase the workload of the forearm musculature and place stress on the elbow. Lateral epicondylitis is also known as “tennis elbow” due to its association with tennis players, while medial epicondylitis is often referred to as “golfer's elbow.”
Risk factors
In a cross-sectional population-based study of the working population, psychological distress and bending and straightening of the elbow joint for more than 1 hour per day were found to be risk factors associated with epicondylitis.
Another study revealed the following potential risk factors among the working population:
Force and repetitive motions (handling tools > 1 kg, handling loads >20 kg at least 10 times/day, repetitive movements > 2 h/day) were found to be associated with the occurrence of lateral epicondylitis.
Low job control and low social support were also found to be associated with lateral epicondylitis.
Exposures of force (handling loads >5 kg, handling loads >20 kg at least 10 times/day, high hand grip forces >1 h/day), repetitiveness (repetitive movements for >2 h/day) and vibration (working with vibrating tools > 2 h/day) were associated with medial epicondylitis.
In addition to repetitive activities, obesity and smoking have been implicated as independent risk factors.
Symptoms
Tender to palpation at the medial or lateral epicondyle
Pain or difficulty with wrist flexion or extension
Diminished grip strength
Pain or burning se
Document 2:::
Arthritis is a term often used to mean any disorder that affects joints. Symptoms generally include joint pain and stiffness. Other symptoms may include redness, warmth, swelling, and decreased range of motion of the affected joints. In some types of arthritis, other organs are also affected. Onset can be gradual or sudden.
There are over 100 types of arthritis. The most common forms are osteoarthritis (degenerative joint disease) and rheumatoid arthritis. Osteoarthritis usually occurs with age and affects the fingers, knees, and hips. Rheumatoid arthritis is an autoimmune disorder that often affects the hands and feet. Other types include gout, lupus, fibromyalgia, and septic arthritis. They are all types of rheumatic disease.
Treatment may include resting the joint and alternating between applying ice and heat. Weight loss and exercise may also be useful. Recommended medications may depend on the form of arthritis. These may include pain medications such as ibuprofen and paracetamol (acetaminophen). In some circumstances, a joint replacement may be useful.
Osteoarthritis affects more than 3.8% of people, while rheumatoid arthritis affects about 0.24% of people. Gout affects about 1–2% of the Western population at some point in their lives. In Australia about 15% of people are affected by arthritis, while in the United States more than 20% have a type of arthritis. Overall the disease becomes more common with age. Arthritis is a common reason that people miss work and can result in a decreased quality of life. The term is derived from arthr- (meaning 'joint') and -itis (meaning 'inflammation').
Classification
There are several diseases where joint pain is primary, and is considered the main feature. Generally when a person has "arthritis" it means that they have one of these diseases, which include:
Hemarthrosis
Osteoarthritis
Rheumatoid arthritis
Gout and pseudo-gout
Septic arthritis
Ankylosing spondylitis
Juvenile idiopathic arthritis
Still's disease
Document 3:::
Musculoskeletal injury refers to damage of muscular or skeletal systems, which is usually due to a strenuous activity and includes damage to skeletal muscles, bones, tendons, joints, ligaments, and other affected soft tissues. In one study, roughly 25% of approximately 6300 adults received a musculoskeletal injury of some sort within 12 months—of which 83% were activity-related. Musculoskeletal injury spans into a large variety of medical specialties including orthopedic surgery (with diseases such as arthritis requiring surgery), sports medicine, emergency medicine (acute presentations of joint and muscular pain) and rheumatology (in rheumatological diseases that affect joints such as rheumatoid arthritis).
Musculoskeletal injuries can affect any part of the human body, including bones, joints, cartilage, ligaments, tendons, muscles, and other soft tissues. Symptoms include mild to severe aches, low back pain, numbness, tingling, atrophy and weakness. These injuries are a result of repetitive motions and actions over a period of time. Tendons connect muscle to bone, whereas ligaments connect bone to bone. Tendons and ligaments play an active role in maintaining joint stability and controlling the limits of joint movements; once injured, tendons and ligaments detrimentally impact motor functions. Continuous exercise or movement of a musculoskeletal injury can result in chronic inflammation with progression to permanent damage or disability.
In many cases, during the healing period after a musculoskeletal injury, a period in which the healing area will be completely immobile, cast-induced muscle atrophy can occur. Routine sessions of physiotherapy after the cast is removed can help return strength to limp muscles or tendons. Alternately, there exist different methods of electrical stimulation of the immobile muscles which can be induced by a device placed underneath a cast, helping prevent atrophy. Preventative measures include correcting or modifying one's postures a
Document 4:::
Tennis elbow, also known as lateral epicondylitis or enthesopathy of the extensor carpi radialis origin, is an enthesopathy (attachment point disease) of the origin of the extensor carpi radialis brevis on the lateral epicondyle. The outer part of the elbow becomes painful and tender. The pain may also extend into the back of the forearm. Onset of symptoms is generally gradual although they can seem sudden and be misinterpreted as an injury. Golfer's elbow is a similar condition that affects the inside of the elbow.
Enthesopathies are idiopathic, meaning science has not yet determined the cause. Enthesopathies are most common in middle age (ages 35 to 60).
It is often stated that the condition is caused by excessive use of the muscles of the back of the forearm, but this is not supported by experimental evidence and is a common misinterpretation or unhelpful thought about symptoms. It may be associated with work or sports, classically racquet sports (including paddle sports), but most people with the condition are not exposed to these activities. The diagnosis is based on the symptoms and examination. Medical imaging is not particularly useful. Signs consistent with the diagnosis include pain when a subject tries to bend back the wrist against resistance.
The natural history of untreated enthesopathy is resolution over a period of 1–2 years. Palliative (symptoms alleviating) treatment may include pain medications such as NSAIDS or acetaminophen (paracetamol), a wrist brace, or a strap over the upper forearm. The role of corticosteroid injections is debated. Recent evidence suggests corticosteroid injections may delay symptom resolution.
Signs and symptoms
Pain on the outer part of the elbow (lateral epicondyle)
Point tenderness over the lateral epicondyle—a prominent part of the bone on the outside of the elbow
Pain with resisted wrist extension or passive wrist flexion
Symptoms associated with tennis elbow include, but are not limited to, pain from the out
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is it called when your joints start to wear out and they become stiff and painful?
A. endometriosis
B. tendonitis
C. arthritis
D. adenitis
Answer:
|
|
sciq-5763
|
multiple_choice
|
How are the number of moles of carbon dioxide gas calculated?
|
[
"phytochemistry",
"relativistic",
"stoichiometry",
"casuistry"
] |
C
|
Relevant Documents:
Document 0:::
In chemistry, the mole map is a graphical representation of an algorithm that compares molar mass, number of particles per mole, and factors from balanced equations or other formulae.
Stoichiometry
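As an illustration of the stoichiometric calculation this question asks about, the sketch below converts a measured mass of fuel into moles of carbon dioxide using a balanced equation and molar masses. The choice of methane combustion and all numeric values are assumptions made for the example, not taken from the excerpt.

```python
# Sketch: moles of CO2 produced, computed via stoichiometry.
# Assumed reaction (illustrative only): CH4 + 2 O2 -> CO2 + 2 H2O

M_CH4 = 16.04                  # g/mol, molar mass of methane
MOLE_RATIO_CO2_PER_CH4 = 1.0   # from the balanced equation above

def moles_co2_from_methane_mass(mass_ch4_g: float) -> float:
    """Convert a mass of CH4 burned into moles of CO2 produced."""
    moles_ch4 = mass_ch4_g / M_CH4             # mass -> moles of reactant
    return moles_ch4 * MOLE_RATIO_CO2_PER_CH4  # moles of reactant -> moles of product

if __name__ == "__main__":
    # Burning 8.02 g of CH4 (~0.5 mol) should yield ~0.5 mol of CO2.
    print(f"{moles_co2_from_methane_mass(8.02):.3f} mol CO2")
```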
Document 1:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 2:::
The CRC Handbook of Chemistry and Physics is a comprehensive one-volume reference resource for science research. First published in 1914, it is currently in its 103rd edition, published in 2022. It is sometimes nicknamed the "Rubber Bible" or the "Rubber Book", as CRC originally stood for "Chemical Rubber Company".
As late as the 1962–1963 edition (3604 pages) the Handbook contained myriad information for every branch of science and engineering. Sections in that edition include: Mathematics, Properties and Physical Constants, Chemical Tables, Properties of Matter, Heat, Hygrometric and Barometric Tables, Sound, Quantities and Units, and Miscellaneous. Earlier editions included sections such as "Antidotes of Poisons", "Rules for Naming Organic Compounds", "Surface Tension of Fused Salts", "Percent Composition of Anti-Freeze Solutions", "Spark-gap Voltages", "Greek Alphabet", "Musical Scales", "Pigments and Dyes", "Comparison of Tons and Pounds", "Twist Drill and Steel Wire Gauges" and "Properties of the Earth's Atmosphere at Elevations up to 160 Kilometers". Later editions focus almost exclusively on chemistry and physics topics and eliminated much of the more "common" information.
Contents by edition
22nd–44th Editions
Section A: Mathematical Tables
Section B: Properties and Physical Constants
Section C: General Chemical Tables/Specific Gravity and Properties of Matter
Section D: Heat and Hygrometry/Sound/Electricity and Magnetism/Light
Section E: Quantities and Units/Miscellaneous
Index
45th–70th Editions
Section A: Mathematical Tables
Section B: Elements and Inorganic Compounds
Section C: Organic Compounds
Section D: General Chemical
Section E: General Physical Constants
Section F: Miscellaneous
Index
71st–102nd Editions
Section 1: Basic Constants, Units, and Conversion Factors
Section 2: Symbols, Terminology, and Nomenclature
Section 3: Physical Constants of Organic Compounds
Section 4: Properties of the Elements and Inorganic Com
Document 3:::
The gas composition of any gas mixture can be characterised by listing the pure substances it contains, and stating for each substance its proportion of the gas mixture's molecule count.
Gas composition of air
To give a familiar example, air has a composition of:
Nitrogen (N2): 78.084%
Oxygen (O2): 20.9476%
Argon (Ar): 0.934%
Carbon dioxide (CO2): 0.0314%
Standard Dry Air is the agreed-upon gas composition for air from which all water vapour has been removed. There are various standards bodies which publish documents that define a dry air gas composition. Each standard provides a list of constituent concentrations, a gas density at standard conditions and a molar mass.
It is extremely unlikely that the actual composition of any specific sample of air will completely agree with any definition for standard dry air. While the various definitions for standard dry air all attempt to provide realistic information about the constituents of air, the definitions are important in and of themselves because they establish a standard which can be cited in legal contracts and publications documenting measurement calculation methodologies or equations of state.
The standards below are two examples of commonly used and cited publications that provide a composition for standard dry air:
ISO TR 29922-2017 provides a definition for standard dry air which specifies an air molar mass of 28.96546 ± 0.00017 kg·kmol⁻¹.
GPA 2145:2009 is published by the Gas Processors Association. It provides a molar mass for air of 28.9625 g/mol, and provides a composition for standard dry air as a footnote.
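As a rough cross-check of the molar masses quoted above, the sketch below estimates the molar mass of dry air as the mole-fraction-weighted average of its constituents. The composition percentages are the ones listed earlier in this excerpt; the component molar masses are standard values assumed for the example.

```python
# Sketch: molar mass of dry air as a mole-fraction-weighted average.
# Composition (mole %) from the excerpt above; component molar masses are standard values.
composition = {
    "N2":  (78.084, 28.014),
    "O2":  (20.9476, 31.998),
    "Ar":  (0.934, 39.948),
    "CO2": (0.0314, 44.009),
}

def molar_mass_of_mixture(comp: dict) -> float:
    """Return the average molar mass (g/mol) of a gas mixture."""
    total_percent = sum(pct for pct, _ in comp.values())
    return sum(pct * m for pct, m in comp.values()) / total_percent

print(f"{molar_mass_of_mixture(composition):.4f} g/mol")  # ~28.96 g/mol, close to the quoted values
```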
Document 4:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How are the number of moles of carbon dioxide gas calculated?
A. phytochemistry
B. relativistic
C. stoichiometry
D. casuistry
Answer:
|
|
sciq-1882
|
multiple_choice
|
What is the science that describes the ancestral and descendant connections between organisms?
|
[
"experimentally",
"phylogeny",
"organic science",
"polygamy"
] |
B
|
Relevant Documents:
Document 0:::
Merriam-Webster defines chemotaxonomy as the method of biological classification based on similarities and dissimilarity in the structure of certain compounds among the organisms being classified. Advocates argue that, as proteins are more closely controlled by genes and less subjected to natural selection than the anatomical features, they are more reliable indicators of genetic relationships. The compounds studied most are proteins, amino acids, nucleic acids, peptides etc.
Physiology is the study of the working of organs in a living being. Since the working of the organs involves chemicals of the body, these compounds are called biochemical evidence. The study of morphological change has shown that there are changes in the structure of animals which result in evolution. When changes take place in the structure of a living organism, they will naturally be accompanied by changes in the physiological or biochemical processes.
John Griffith Vaughan was one of the pioneers of chemotaxonomy.
Biochemical products
The body of any animal in the animal kingdom is made up of a number of chemicals. Of these, only a few biochemical products have been taken into consideration to derive evidence for evolution.
Protoplasm: Every living cell, from a bacterium to an elephant, from grasses to the blue whale, has protoplasm. Though the complexity and constituents of the protoplasm increases from lower to higher living organism, the basic compound is always the protoplasm. Evolutionary significance: From this evidence, it is clear that all living things have a common origin point or a common ancestor, which in turn had protoplasm. Its complexity increased due to changes in the mode of life and habitat.
Nucleic acids: DNA and RNA are the two types of nucleic acids present in all living organisms. They are present in the chromosomes. The structure of these acids has been found to be similar in all animals. DNA always has two chains forming a double helix, and each chain is made up of nuc
Document 1:::
The branches of science known informally as omics are various disciplines in biology whose names end in the suffix -omics, such as genomics, proteomics, metabolomics, metagenomics, phenomics and transcriptomics. Omics aims at the collective characterization and quantification of pools of biological molecules that translate into the structure, function, and dynamics of an organism or organisms.
The related suffix -ome is used to address the objects of study of such fields, such as the genome, proteome or metabolome respectively. The suffix -ome as used in molecular biology refers to a totality of some sort; it is an example of a "neo-suffix" formed by abstraction from various Greek terms in , a sequence that does not form an identifiable suffix in Greek.
Functional genomics aims at identifying the functions of as many genes as possible of a given organism. It combines
different -omics techniques such as transcriptomics and proteomics with saturated mutant collections.
Origin
The Oxford English Dictionary (OED) distinguishes three different fields of application for the -ome suffix:
in medicine, forming nouns with the sense "swelling, tumour"
in botany or zoology, forming nouns in the sense "a part of an animal or plant with a specified structure"
in cellular and molecular biology, forming nouns with the sense "all constituents considered collectively"
The -ome suffix originated as a variant of -oma, and became productive in the last quarter of the 19th century. It originally appeared in terms like sclerome or rhizome. All of these terms derive from Greek words in , a sequence that is not a single suffix, but analyzable as , the belonging to the word stem (usually a verb) and the being a genuine Greek suffix forming abstract nouns.
The OED suggests that its third definition originated as a back-formation from mitome. Early attestations include biome (1916) and genome (first coined as German Genom in 1920).
The association with chromosome in molecular bio
Document 2:::
Phylogeny in psychoanalysis is the study of the whole family or species of an organism in order to better understand the pre-history of it. It might have an unconscious influence on a patient, according to Sigmund Freud. After the possibilities of ontogeny, which is the development of the whole organism viewed from the light of occurrences during the course of its life, have been exhausted, phylogeny might shed more light on the pre-history of an organism.
The term phylogeny derives from the Greek terms phyle (φυλή) and phylon (φῦλον), denoting “tribe” and “race”, and the term genetikos (γενετικός), denoting “relative to birth”, from genesis (γένεσις) “origin” and “birth”. Phylogenetics is the study of evolutionary relatedness among groups of organisms (e.g. species, populations). In biology this is discovered through molecular sequencing data and morphological data matrices (phylogenetics), while in psychoanalysis it is discovered by analysis of the memories of a patient and their relatives.
Document 3:::
Comparative biology uses natural variation and disparity to understand the patterns of life at all levels—from genes to communities—and the critical role of organisms in ecosystems. Comparative biology is a cross-lineage approach to understanding the phylogenetic history of individuals or higher taxa and the mechanisms and patterns that drives it. Comparative biology encompasses Evolutionary Biology, Systematics, Neontology, Paleontology, Ethology, Anthropology, and Biogeography as well as historical approaches to Developmental biology, Genomics, Physiology, Ecology and many other areas of the biological sciences. The comparative approach also has numerous applications in human health, genetics, biomedicine, and conservation biology. The biological relationships (phylogenies, pedigree) are important for comparative analyses and usually represented by a phylogenetic tree or cladogram to differentiate those features with single origins (Homology) from those with multiple origins (Homoplasy).
See also
Cladistics
Comparative Anatomy
Evolution
Evolutionary Biology
Systematics
Bioinformatics
Neontology
Paleontology
Phylogenetics
Genomics
Evolutionary biology
Comparisons
Document 4:::
A biologist is a scientist who conducts research in biology. Biologists are interested in studying life on Earth, whether it is an individual cell, a multicellular organism, or a community of interacting populations. They usually specialize in a particular branch (e.g., molecular biology, zoology, and evolutionary biology) of biology and have a specific research focus (e.g., studying malaria or cancer).
Biologists who are involved in basic research have the aim of advancing knowledge about the natural world. They conduct their research using the scientific method, which is an empirical method for testing hypotheses. Their discoveries may have applications for some specific purpose such as in biotechnology, which has the goal of developing medically useful products for humans.
In modern times, most biologists have one or more academic degrees such as a bachelor's degree plus an advanced degree like a master's degree or a doctorate. Like other scientists, biologists can be found working in different sectors of the economy such as in academia, nonprofits, private industry, or government.
History
Francesco Redi, the founder of biology, is recognized to be one of the greatest biologists of all time. Robert Hooke, an English natural philosopher, coined the term cell, suggesting plant structure's resemblance to honeycomb cells.
Charles Darwin and Alfred Wallace independently formulated the theory of evolution by natural selection, which was described in detail in Darwin's book On the Origin of Species, which was published in 1859. In it, Darwin proposed that the features of all living things, including humans, were shaped by natural processes of descent with accumulated modification leading to divergence over long periods of time. The theory of evolution in its current form affects almost all areas of biology. Separately, Gregor Mendel formulated the principles of inheritance in 1866, which became the basis of modern genetics.
In 1953, James D. Watson and Francis
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the science that describes the ancestral and descendant connections between organisms?
A. experimentally
B. phylogeny
C. organic science
D. polygamy
Answer:
|
|
sciq-2856
|
multiple_choice
|
Proper kidney function is essential for homeostasis of what level, which in turn helps ensure the functioning of enzymes?
|
[
"ions",
"ph",
"calcium",
"oxygen"
] |
B
|
Relevant Documents:
Document 0:::
Assessment of kidney function occurs in different ways, using the presence of symptoms and signs, as well as measurements using urine tests, blood tests, and medical imaging.
Functions of a healthy kidney include maintaining a person's fluid balance; maintaining an acid-base balance; regulating electrolytes such as sodium and potassium; clearing toxins; regulating blood pressure; regulating hormones, such as erythropoietin; and activating vitamin D.
Description
The functions of the kidney include maintenance of acid-base balance; regulation of fluid balance; regulation of sodium, potassium, and other electrolytes; clearance of toxins; absorption of glucose, amino acids, and other small molecules; regulation of blood pressure; production of various hormones, such as erythropoietin; and activation of vitamin D.
The GFR is regarded as the best overall measure of the kidney's ability to carry out these numerous functions. An estimate of the GFR is used clinically to determine the degree of kidney impairment and to track the progression of the disease. The GFR, however, does not reveal the source of the kidney disease. This is accomplished by urinalysis, measurement of urine protein excretion, kidney imaging, and, if necessary, kidney biopsy.
Much of renal physiology is studied at the level of the nephron the smallest functional unit of the kidney. Each nephron begins with a filtration component that filters the blood entering the kidney. This filtrate then flows along the length of the nephron, which is a tubular structure lined by a single layer of specialized cells and surrounded by capillaries. The major functions of these lining cells are the reabsorption of water and small molecules from the filtrate into the blood, and the secretion of wastes from the blood into the urine.
Proper function of the kidney requires that it receives and adequately filters blood. This is performed at the microscopic level by many hundreds of thousa
Document 1:::
This is a table of permselectivity for different substances in the glomerulus of the kidney in renal filtration.
Document 2:::
Cystatin C or cystatin 3 (formerly gamma trace, post-gamma-globulin, or neuroendocrine basic polypeptide), a protein encoded by the CST3 gene, is mainly used as a biomarker of kidney function. Recently, it has been studied for its role in predicting new-onset or deteriorating cardiovascular disease. It also seems to play a role in brain disorders involving amyloid (a specific type of protein deposition), such as Alzheimer's disease.
In humans, all cells with a nucleus (cell core containing the DNA) produce cystatin C as a chain of 120 amino acids. It is found in virtually all tissues and body fluids. It is a potent inhibitor of lysosomal proteinases (enzymes from a special subunit of the cell that break down proteins) and probably one of the most important extracellular inhibitors of cysteine proteases (it prevents the breakdown of proteins outside the cell by a specific type of protein degrading enzymes). Cystatin C belongs to the type 2 cystatin gene family.
Role in medicine
Kidney function
Glomerular filtration rate (GFR), a marker of kidney health, is most accurately measured by injecting compounds such as inulin, radioisotopes such as 51chromium-EDTA, 125I-iothalamate, 99mTc-DTPA or radiocontrast agents such as iohexol, but these techniques are complicated, costly, time-consuming and have potential side-effects.
Creatinine is the most widely used biomarker of kidney function. It is inaccurate at detecting mild renal impairment, and levels can vary with muscle mass but not with protein intake. Urea levels might change with protein intake.
Formulas such as the Cockcroft and Gault formula and the MDRD formula (see Renal function) try to adjust for these variables.
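For illustration, the sketch below implements the commonly quoted form of the Cockcroft–Gault estimate mentioned above, which estimates creatinine clearance from age, weight, serum creatinine, and sex. The example patient values are invented; this is a sketch for illustration, not clinical guidance.

```python
# Sketch: Cockcroft-Gault estimate of creatinine clearance (mL/min).
# CrCl = ((140 - age) * weight_kg) / (72 * serum_creatinine_mg_dl), multiplied by 0.85 if female.

def cockcroft_gault(age_years: float, weight_kg: float,
                    serum_creatinine_mg_dl: float, female: bool) -> float:
    """Estimate creatinine clearance in mL/min (illustrative only)."""
    crcl = ((140.0 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Hypothetical example: a 60-year-old, 70 kg male with serum creatinine of 1.0 mg/dL
print(f"{cockcroft_gault(60, 70, 1.0, female=False):.1f} mL/min")  # ~77.8 mL/min
```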
Cystatin C has a low molecular weight (approximately 13.3 kilodaltons), and it is removed from the bloodstream by glomerular filtration in the kidneys. If kidney function and glomerular filtration rate decline, the blood levels of cystatin C rise. Cross-sectional studies (based on a single point in t
Document 3:::
The rock dove, Columba livia, has a number of special adaptations for regulating water uptake and loss.
Challenges
C. livia pigeons drink directly from a water source or indirectly from the food they ingest. They drink water through a process called the double-suction mechanism. The daily diet of the pigeon presents many physiological challenges that it must overcome through osmoregulation. Protein intake, for example, produces an excess of toxic amine groups when it is broken down for energy. To regulate this excess and excrete these unwanted toxins, C. livia must remove the amine groups as uric acid. Nitrogen excretion through uric acid can be considered an advantage because it does not require a lot of water, but producing it takes more energy because of its complex molecular composition.
Pigeons adjust their drinking rates and food intake in parallel, and when adequate water is unavailable for excretion, food intake is limited to maintain water balance. As this species inhabits arid environments, research attributes this to their strong flying capabilities to reach the available water sources, not because of exceptional potential for water conservation. C. livia kidneys, like mammalian kidneys, are capable of producing urine hyperosmotic to the plasma using the processes of filtration, reabsorption, and secretion. The medullary cones function as countercurrent units that achieve the production of hyperosmotic urine. Hyperosmotic urine can be understood in light of the law of diffusion and osmolarity.
Organ of osmoregulation
Unlike a number of other bird species which have the salt gland as the primary osmoregulatory organ, C. livia does not use its salt gland. It uses the function of the kidneys to maintain homeostatic balance of ions such as sodium and potassium while preserving water quantity in the body. Filtration of the blood, reabsorption of ions and water, and secretion of uric acid are all components of the kidney's process. Columba livia has two kidneys th
Document 4:::
Chloride is an anion in the human body needed for metabolism (the process of turning food into energy). It also helps keep the body's acid-base balance. The amount of serum chloride is carefully controlled by the kidneys.
Chloride ions have important physiological roles. For instance, in the central nervous system, the inhibitory action of glycine and some of the action of GABA relies on the entry of Cl− into specific neurons. Also, the chloride-bicarbonate exchanger biological transport protein relies on the chloride ion to increase the blood's capacity of carbon dioxide, in the form of the bicarbonate ion; this is the mechanism underpinning the chloride shift occurring as the blood passes through oxygen-consuming capillary beds.
The normal blood reference range of chloride for adults in most labs is 96 to 106 milliequivalents (mEq) per liter. The normal range may vary slightly from lab to lab. Normal ranges are usually shown next to results in the lab report. A diagnostic test may use a chloridometer to determine the serum chloride level.
The North American Dietary Reference Intake recommends a daily intake of between 2300 and 3600 mg/day for 25-year-old males.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Proper kidney function is essential for homeostasis of what level, which in turn helps ensure the functioning of enzymes?
A. ions
B. ph
C. calcium
D. oxygen
Answer:
|
|
sciq-6681
|
multiple_choice
|
At a convergent plate boundary, when one plate is oceanic, there are large what?
|
[
"earthquakes",
"lakes",
"plateaus",
"volcanoes"
] |
D
|
Relevant Documents:
Document 0:::
Maui Nui is a modern geologists' name given to a prehistoric Hawaiian island and the corresponding modern biogeographic region. Maui Nui is composed of four modern islands: Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe. Administratively, the four modern islands comprise Maui County (and a tiny part of Molokaʻi called Kalawao County). Long after the breakup of Maui Nui, the four modern islands retained plant and animal life similar to each other. Thus, Maui Nui is not only a prehistoric island but also a modern biogeographic region.
Geology
Maui Nui formed and broke up during the Pleistocene Epoch, which lasted from about 2.58 million to 11,700 years ago.
Maui Nui is built from seven shield volcanoes. The three oldest are Penguin Bank, West Molokaʻi, and East Molokaʻi, which probably range from slightly over to slightly less than 2 million years old. The four younger volcanoes are Lāna‘i, West Maui, Kaho‘olawe, and Haleakalā, which probably formed between 1.5 and 2 million years ago.
At its prime 1.2 million years ago, Maui Nui was , 50% larger than today's Hawaiʻi Island. The island of Maui Nui included the four modern islands (Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe) and a landmass west of Molokaʻi called Penguin Bank, which is now completely submerged.
Maui Nui broke up as rising sea levels flooded the connections between the volcanoes. The breakup was complex because global sea levels rose and fell intermittently during the Quaternary glaciation. About 600,000 years ago, the connection between Molokaʻi and the island of Lāna‘i/Maui/Kahoʻolawe became intermittent. About 400,000 years ago, the connection between Lāna‘i and Maui/Kahoʻolawe also became intermittent. The connection between Maui and Kahoʻolawe was permanently broken between 200,000 and 150,000 years ago. Maui, Lāna‘i, and Molokaʻi were connected intermittently thereafter, most recently about 18,000 years ago during the Last Glacial Maximum.
Today, the sea floor between these four islands is relatively shallow
Document 1:::
The core–mantle boundary (CMB) of Earth lies between the planet's silicate mantle and its liquid iron–nickel outer core, at a depth of below Earth's surface. The boundary is observed via the discontinuity in seismic wave velocities at that depth due to the differences between the acoustic impedances of the solid mantle and the molten outer core. P-wave velocities are much slower in the outer core than in the deep mantle while S-waves do not exist at all in the liquid portion of the core. Recent evidence suggests a distinct boundary layer directly above the CMB possibly made of a novel phase of the basic perovskite mineralogy of the deep mantle named post-perovskite. Seismic tomography studies have shown significant irregularities within the boundary zone and appear to be dominated by the African and Pacific Large Low-Shear-Velocity Provinces (LLSVP).
The uppermost section of the outer core is thought to be about 500–1,800 K hotter than the overlying mantle, creating a thermal boundary layer. The boundary is thought to harbor topography, much like Earth's surface, that is supported by solid-state convection within the overlying mantle. Variations in the thermal properties of the core-mantle boundary may affect how the outer core's iron-rich fluids flow, which are ultimately responsible for Earth's magnetic field.
The D″ region
The approx. 200 km thick layer of the lower mantle directly above the boundary is referred to as the D″ region ("D double-prime" or "D prime prime") and is sometimes included in discussions regarding the core–mantle boundary zone. The D″ name originates from geophysicist Keith Bullen's designations for the Earth's layers. His system was to label each layer alphabetically, A through G, with the crust as 'A' and the inner core as 'G'. In his 1942 publication of his model, the entire lower mantle was the D layer. In 1949, Bullen found his 'D' layer to actually be two different layers. The upper part of the D layer, about 1800 km thick, was r
Document 2:::
In structural geology, a suture is a joining together along a major fault zone, of separate terranes, tectonic units that have different plate tectonic, metamorphic and paleogeographic histories. The suture is often represented on the surface by an orogen or mountain range.
Overview
In plate tectonics, sutures are the remains of subduction zones, and the terranes that are joined together are interpreted as fragments of different palaeocontinents or tectonic plates.
Outcrops of sutures can vary in width from a few hundred meters to a couple of kilometers. They can be networks of mylonitic shear zones or brittle fault zones, but are usually both. Sutures are usually associated with igneous intrusions and tectonic lenses with varying kinds of lithologies from plutonic rocks to ophiolitic fragments.
An example from Great Britain is the Iapetus Suture which, though now concealed beneath younger rocks, has been determined by geophysical means to run along a line roughly parallel with the Anglo-Scottish border and represents the joint between the former continent of Laurentia to the north and the former micro-continent of Avalonia to the south. Avalonia is in fact a plain which dips steeply northwestwards through the crust, underthrusting Laurentia.
Paleontological use
When used in paleontology, suture can also refer to fossil exoskeletons, as in the suture line, a division on a trilobite between the free cheek and the fixed cheek; this suture line allowed the trilobite to perform ecdysis (the shedding of its skin).
Document 3:::
Slab pull is a geophysical mechanism whereby the cooling and subsequent densifying of a subducting tectonic plate produces a downward force along the rest of the plate. In 1975 Forsyth and Uyeda used the inverse theory method to show that, of the many forces likely to be driving plate motion, slab pull was the strongest. Plate motion is partly driven by the weight of cold, dense plates sinking into the mantle at oceanic trenches. This force and slab suction account for almost all of the force driving plate tectonics. The ridge push at rifts contributes only 5 to 10%.
Carlson et al. (1983) in Lallemand et al. (2005) defined the slab pull force as:
Where:
K is (gravitational acceleration = 9.81 m/s2) according to McNutt (1984);
Δρ = 80 kg/m3 is the mean density difference between the slab and the surrounding asthenosphere;
L is the slab length calculated only for the part above 670 km (the upper/lower mantle boundary);
A is the slab age in Ma at the trench.
The slab pull force manifests itself between two extreme forms:
The aseismic back-arc extension as in the Izu–Bonin–Mariana Arc.
The Aleutian- and Chile-style tectonics, with strong earthquakes and back-arc thrusting.
Between these two examples there is the evolution of the Farallon Plate: from the huge slab width with the Nevada, the Sevier and Laramide orogenies; the Mid-Tertiary ignimbrite flare-up and later left as Juan de Fuca and Cocos plates, the Basin and Range Province under extension, with slab break off, smaller slab width, more edges and mantle return flow.
Some early models of plate tectonics envisioned the plates riding on top of convection cells like conveyor belts. However, most scientists working today believe that the asthenosphere does not directly cause motion by the friction of such basal forces. The North American Plate is nowhere being subducted, yet it is in motion. Likewise the African, Eurasian and Antarctic Plates. Ridge push is thought responsible for the motion of these plates
Document 4:::
Plume tectonics is a geoscientific theory that finds its roots in the mantle doming concept, which was especially popular during the 1930s and initially did not accept major plate movements and continental drifting. It has survived from the 1970s until today in various forms and presentations. It has slowly evolved into a concept that recognises and accepts large-scale plate motions such as envisaged by plate tectonics, but places them in a framework where large mantle plumes are the major driving force of the system. The initial followers of the concept during the first half of the 20th century were scientists like Beloussov and van Bemmelen; recently the concept has gained interest, especially in Japan, through newly compiled work on palaeomagnetism, and it is still advocated by the group of scientists elaborating upon Earth expansion. It is nowadays generally not accepted as the main theory to explain the driving forces of tectonic plate movements, although numerous modulations on the concept have been proposed.
The theory focuses on the movements of mantle plumes under tectonic plates, viewing them as the major driving force of movements of (parts of) the Earth's crust. In its more modern form, conceived in the 1970s, it tries to reconcile in one single geodynamic model the horizontalistic concept of plate tectonics, and the verticalistic concepts of mantle plumes, by the gravitational movement of plates away from major domes of the Earth's crust. The existence of various supercontinents in Earth history and their break-up has been associated recently with major upwellings of the mantle.
It is classified together with mantle convection as one of the mechanisms used to explain the movements of tectonic plates. It also shows affinity with the concept of hot spots, which is used in modern-day plate tectonics to generate a framework of specific mantle upwelling points that are relatively stable throughout time and are used to calibrate the plate movements usin
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
At a convergent plate boundary, when one plate is oceanic, there are large what?
A. earthquakes
B. lakes
C. plateaus
D. volcanoes
Answer:
|
|
sciq-3524
|
multiple_choice
|
All the genes in all the members of a population make up its what?
|
[
"diversity",
"longevity",
"phenotype",
"gene pool"
] |
D
|
Relevant Documents:
Document 0:::
Genetics (from Ancient Greek genetikos (γενετικός), "genitive", and that from genesis (γένεσις), "origin"), a discipline of biology, is the science of heredity and variation in living organisms.
Articles (arranged alphabetically) related to genetics include:
#
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
X
Y
Z
Document 1:::
The following outline is provided as an overview of and topical guide to genetics:
Genetics – science of genes, heredity, and variation in living organisms. Genetics deals with the molecular structure and function of genes, and gene behavior in context of a cell or organism (e.g. dominance and epigenetics), patterns of inheritance from parent to offspring, and gene distribution, variation and change in populations.
Introduction to genetics
Introduction to genetics
Genetics
Chromosome
DNA
Genetic diversity
Genetic drift
Genetic variation
Genome
Heredity
Mutation
Nucleotide
RNA
Introduction to evolution
Evolution
Modern evolutionary synthesis
Transmutation of species
Natural selection
Extinction
Adaptation
Polymorphism (biology)
Gene flow
Biodiversity
Biogeography
Phylogenetic tree
Taxonomy (biology)
Mendelian inheritance
Molecular evolution
Branches of genetics
Classical genetics
Developmental genetics
Conservation genetics
Ecological genetics
Evolutionary genetics
Genetic engineering
Metagenics
Genetic epidemiology
Archaeogenetics
Archaeogenetics of the Near East
Genetics of intelligence
Genetic testing
Genomics
Human genetics
Human evolutionary genetics
Human mitochondrial genetics
Medical genetics
Microbial genetics
Molecular genetics
Neurogenetics
Population genetics
Plant genetics
Psychiatric genetics
Quantitative genetics
Statistical genetics
Multi-disciplinary fields that include genetics
Evolutionary anthropology
History of genetics
History of genetics
Natural history of genetics
History of molecular evolution
Cladistics
Transitional fossil
Extinction event
Timeline of the evolutionary history of life
History of the science of genetics
History of genetics
Ancient Concepts of Heredity
Experiments on Plant Hybridization
History of evolutionary thought
History of genetic engineering
History of genomics
History of paleontology
History of plant systematics
Neanderthal genome pro
Document 2:::
Genome-wide complex trait analysis (GCTA), also referred to as genome-based restricted maximum likelihood (GREML), is a statistical method for heritability estimation in genetics, which quantifies the total additive contribution of a set of genetic variants to a trait. GCTA is typically applied to common single nucleotide polymorphisms (SNPs) on a genotyping array (or "chip") and thus termed "chip" or "SNP" heritability.
GCTA operates by directly quantifying the chance genetic similarity of unrelated individuals and comparing it to their measured similarity on a trait; if two unrelated individuals are relatively similar genetically and also have similar trait measurements, then the measured genetics are likely to causally influence that trait, and the correlation can to some degree tell how much. This can be illustrated by plotting the squared pairwise trait differences between individuals against their estimated degree of relatedness. GCTA makes a number of modeling assumptions and whether/when these assumptions are satisfied continues to be debated.
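A minimal numpy sketch of the intuition described above, using simulated genotypes and phenotypes: build a genetic relatedness matrix from standardized SNPs, then regress squared pairwise trait differences on pairwise relatedness (a Haseman–Elston-style regression). This illustrates the idea only; it is not the REML model that GCTA actually fits, and all names and parameter values are invented for the example.

```python
# Sketch: relate pairwise genetic similarity to pairwise trait similarity.
# Illustration only; GCTA itself fits a REML model rather than this regression.
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 2000                      # individuals, SNPs (simulated)
freqs = rng.uniform(0.05, 0.5, m)     # allele frequencies
geno = rng.binomial(2, freqs, (n, m)).astype(float)

# Standardize genotypes and build the genetic relatedness matrix (GRM).
Z = (geno - 2 * freqs) / np.sqrt(2 * freqs * (1 - freqs))
grm = Z @ Z.T / m

# Simulate a trait with roughly 50% additive genetic variance, then compare pairs.
beta = rng.normal(0, np.sqrt(0.5 / m), m)
y = Z @ beta + rng.normal(0, np.sqrt(0.5), n)

iu = np.triu_indices(n, k=1)                    # all distinct pairs (upper triangle)
sq_diff = (y[:, None] - y[None, :]) ** 2        # squared pairwise trait differences
slope = np.polyfit(grm[iu], sq_diff[iu], 1)[0]  # negative slope: related pairs differ less
print(f"regression slope of squared trait difference on relatedness: {slope:.2f}")
```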
The GCTA framework has also been extended in a number of ways: quantifying the contribution from multiple SNP categories (i.e. functional partitioning); quantifying the contribution of Gene-Environment interactions; quantifying the contribution of non-additive/non-linear effects of SNPs; and bivariate analyses of multiple phenotypes to quantify their genetic covariance (co-heritability or genetic correlation).
GCTA estimates have implications for the potential for discovery from Genome-wide Association Studies (GWAS) as well as the design and accuracy of polygenic scores. GCTA estimates from common variants are typically substantially lower than other estimates of total or narrow-sense heritability (such as from twin or kinship studies), which has contributed to the debate over the Missing heritability problem.
History
Estimation in biology/animal breeding using standard ANOVA/REML methods of variance components such as heritability,
Document 3:::
A diversity panel is a collection of genetic material or individual samples taken from a diverse population of a certain species. The idea is to illustrate the genetic and phenotypic diversity of the species.
Diversity panels exist for human populations, mouse and other organisms.
Researchers in genetics often use diversity panels to reveal genotypes that are linked to certain traits, for example in QTL mapping with genome-wide association studies.
Such studies analyze the gene–environment interactions underlying simple and complex traits.
Examples
Human Genome Diversity Project
The Hybrid Mouse Diversity Panel
Maize NAM population (Nested association mapping)
Arabidopsis thaliana 1001 Genome project
See also
Genetics
Biodiversity
Evolution
Document 4:::
Genetics is the study of genes and tries to explain what they are and how they work. Genes are how living organisms inherit features or traits from their ancestors; for example, children usually look like their parents because they have inherited their parents' genes. Genetics tries to identify which traits are inherited and to explain how these traits are passed from generation to generation.
Some traits are part of an organism's physical appearance, such as eye color, height or weight. Other sorts of traits are not easily seen and include blood types or resistance to diseases. Some traits are inherited through genes, which is the reason why tall and thin people tend to have tall and thin children. Other traits come from interactions between genes and the environment, so a child who inherited the tendency of being tall will still be short if poorly nourished. The way our genes and environment interact to produce a trait can be complicated. For example, the chances of somebody dying of cancer or heart disease seems to depend on both their genes and their lifestyle.
Genes are made from a long molecule called DNA, which is copied and inherited across generations. DNA is made of simple units that line up in a particular order within it, carrying genetic information. The language used by DNA is called genetic code, which lets organisms read the information in the genes. This information is the instructions for the construction and operation of a living organism.
The information within a particular gene is not always exactly the same between one organism and another, so different copies of a gene do not always give exactly the same instructions. Each unique form of a single gene is called an allele. As an example, one allele for the gene for hair color could instruct the body to produce much pigment, producing black hair, while a different allele of the same gene might give garbled instructions that fail to produce any pigment, giving white hair. Mutations are random
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
All the genes in all the members of a population make up its what?
A. diversity
B. longevity
C. phenotype
D. gene pool
Answer:
|
|
ai2_arc-659
|
multiple_choice
|
Which list gives the correct order of substances from the lowest melting point to the highest?
|
[
"oxygen, water, iron",
"water, iron, oxygen",
"oxygen, iron, water",
"iron, oxygen, water"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
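A small sketch of the physics behind this example question, assuming a reversible (quasi-static) expansion: for an ideal gas, T·V^(γ−1) stays constant, so the temperature falls as the volume grows. The numeric values below are illustrative assumptions.

```python
# Sketch: temperature change in a reversible adiabatic expansion of an ideal gas.
# Uses T * V**(gamma - 1) = constant; gamma = 5/3 for a monatomic ideal gas.
gamma = 5.0 / 3.0
T1, V1 = 300.0, 1.0   # initial temperature (K) and volume (arbitrary units)
V2 = 2.0              # expanded volume

T2 = T1 * (V1 / V2) ** (gamma - 1)
print(f"T2 = {T2:.1f} K  (temperature decreases on expansion)")  # ~189 K
```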
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Material is a substance or mixture of substances that constitutes an object. Materials can be pure or impure, living or non-living matter. Materials can be classified on the basis of their physical and chemical properties, or on their geological origin or biological function. Materials science is the study of materials, their properties and their applications.
Raw materials can be processed in different ways to influence their properties, by purification, shaping or the introduction of other materials. New materials can be produced from raw materials by synthesis.
In industry, materials are inputs to manufacturing processes to produce products or more complex materials.
Historical elements
Materials chart the history of humanity. The system of the three prehistoric ages (Stone Age, Bronze Age, Iron Age) were succeeded by historical ages: steel age in the 19th century, polymer age in the middle of the following century (plastic age) and silicon age in the second half of the 20th century.
Classification by use
Materials can be broadly categorized in terms of their use, for example:
Building materials are used for construction
Building insulation materials are used to retain heat within buildings
Refractory materials are used for high-temperature applications
Nuclear materials are used for nuclear power and weapons
Aerospace materials are used in aircraft and other aerospace applications
Biomaterials are used for applications interacting with living systems
Material selection is a process to determine which material should be used for a given application.
Classification by structure
The relevant structure of materials has a different length scale depending on the material. The structure and composition of a material can be determined by microscopy or spectroscopy.
Microstructure
In engineering, materials can be categorised according to their microscopic structure:
Plastics: a wide range of synthetic or semi-synthetic materials that use polymers as a main ingred
Document 2:::
This is a list of gases at standard conditions, which means substances that boil or sublime at or below and 1 atm pressure and are reasonably stable.
List
This list is sorted by boiling point of gases in ascending order, but can be sorted on different values. "sub" and "triple" refer to the sublimation point and the triple point, which are given in the case of a substance that sublimes at 1 atm; "dec" refers to decomposition. "~" means approximately.
Known as gas
The following list has substances known to be gases, but with an unknown boiling point.
Fluoroamine
Trifluoromethyl trifluoroethyl trioxide CF3OOOCF2CF3 boils between 10 and 20°
Bis-trifluoromethyl carbonate boils between −10 and +10° possibly +12, freezing −60°
Difluorodioxirane boils between −80 and −90°.
Difluoroaminosulfinyl fluoride F2NS(O)F is a gas but decomposes over several hours
Trifluoromethylsulfinyl chloride CF3S(O)Cl
Nitrosyl cyanide, a blue-green gas, boiling point about −20° (CAS 4343-68-4)
Thiazyl chloride (NSCl), a greenish-yellow gas; trimerises.
Document 3:::
This is a list of analysis methods used in materials science. Analysis methods are listed by their acronym, if one exists.
Symbols
μSR – see muon spin spectroscopy
χ – see magnetic susceptibility
A
AAS – Atomic absorption spectroscopy
AED – Auger electron diffraction
AES – Auger electron spectroscopy
AFM – Atomic force microscopy
AFS – Atomic fluorescence spectroscopy
Analytical ultracentrifugation
APFIM – Atom probe field ion microscopy
APS – Appearance potential spectroscopy
ARPES – Angle resolved photoemission spectroscopy
ARUPS – Angle resolved ultraviolet photoemission spectroscopy
ATR – Attenuated total reflectance
B
BET – BET surface area measurement (BET from Brunauer, Emmett, Teller)
BiFC – Bimolecular fluorescence complementation
BKD – Backscatter Kikuchi diffraction, see EBSD
BRET – Bioluminescence resonance energy transfer
BSED – Back scattered electron diffraction, see EBSD
C
CAICISS – Coaxial impact collision ion scattering spectroscopy
CARS – Coherent anti-Stokes Raman spectroscopy
CBED – Convergent beam electron diffraction
CCM – Charge collection microscopy
CDI – Coherent diffraction imaging
CE – Capillary electrophoresis
CET – Cryo-electron tomography
CL – Cathodoluminescence
CLSM – Confocal laser scanning microscopy
COSY – Correlation spectroscopy
Cryo-EM – Cryo-electron microscopy
Cryo-SEM – Cryo-scanning electron microscopy
CV – Cyclic voltammetry
D
DE(T)A – Dielectric thermal analysis
dHvA – De Haas–van Alphen effect
DIC – Differential interference contrast microscopy
Dielectric spectroscopy
DLS – Dynamic light scattering
DLTS – Deep-level transient spectroscopy
DMA – Dynamic mechanical analysis
DPI – Dual polarisation interferometry
DRS – Diffuse reflection spectroscopy
DSC – Differential scanning calorimetry
DTA – Differential thermal analysis
DVS – Dynamic vapour sorption
E
EBIC – Electron beam induced current (see IBIC: ion beam induced charge)
EBS – Elastic (non-Rutherford) backscatterin
Document 4:::
While chemically pure materials have a single melting point, chemical mixtures often partially melt at the solidus temperature (TS or Tsol), and fully melt at the higher liquidus temperature (TL or Tliq). The solidus is always less than or equal to the liquidus, but they need not coincide. If a gap exists between the solidus and liquidus it is called the freezing range, and within that gap, the substance consists of a mixture of solid and liquid phases (like a slurry). Such is the case, for example, with the olivine (forsterite-fayalite) system, which is common in earth's mantle.
Definitions
In chemistry, materials science, and physics, the liquidus temperature specifies the temperature above which a material is completely liquid, and the maximum temperature at which crystals can co-exist with the melt in thermodynamic equilibrium. The solidus is the locus of temperatures (a curve on a phase diagram) below which a given substance is completely solid (crystallized). The solidus temperature specifies the temperature below which a material is completely solid, and the minimum temperature at which a melt can co-exist with crystals in thermodynamic equilibrium.
Liquidus and solidus are mostly used for impure substances (mixtures) such as glasses, metal alloys, ceramics, rocks, and minerals. Lines of liquidus and solidus appear in the phase diagrams of binary solid solutions, as well as in eutectic systems away from the invariant point.
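To make these definitions concrete, here is a minimal Python sketch (an illustration added here, not part of the source article) that classifies a temperature against a given solidus and liquidus; the numbers used for the fictitious alloy are assumptions.

# Hedged sketch: classify the state of a mixture relative to its solidus and liquidus.
# The temperatures below are illustrative and do not describe any real alloy.
def phase_state(temperature_K, solidus_K, liquidus_K):
    if liquidus_K < solidus_K:
        raise ValueError("the liquidus must be greater than or equal to the solidus")
    if temperature_K < solidus_K:
        return "completely solid"
    if temperature_K > liquidus_K:
        return "completely liquid"
    return "solid + liquid (freezing range)"

solidus_K, liquidus_K = 1200.0, 1350.0   # assumed values for a fictitious alloy
for T in (1100.0, 1275.0, 1400.0):
    print(T, phase_state(T, solidus_K, liquidus_K))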
When distinction is irrelevant
For pure elements or compounds, e.g. pure copper, pure water, etc. the liquidus and solidus are at the same temperature, and the term melting point may be used.
There are also some mixtures which melt at a particular temperature, a behaviour known as congruent melting. One example is a eutectic mixture. In a eutectic system, there is a particular mixing ratio where the solidus and liquidus temperatures coincide at a point known as the invariant point. At the invariant point, the mixture undergoes a eutectic reaction wh
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which list gives the correct order of substances from the lowest melting point to the highest?
A. oxygen, water, iron
B. water, iron, oxygen
C. oxygen, iron, water
D. iron, oxygen, water
Answer:
|
|
sciq-2126
|
multiple_choice
|
Intrusive igneous rocks cool from magma slowly in the crust and have large what?
|
[
"atoms",
"crystals",
"pores",
"coal deposits"
] |
B
|
Relevant Documents:
Document 0:::
In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects.
Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting.
Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete.
Study
Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the
Document 1:::
In science and engineering the study of high pressure examines its effects on materials and the design and construction of devices, such as a diamond anvil cell, which can create high pressure. By high pressure is usually meant pressures of thousands (kilobars) or millions (megabars) of times atmospheric pressure (about 1 bar or 100,000 Pa).
History and overview
Percy Williams Bridgman received a Nobel Prize in 1946 for advancing this area of physics by two magnitudes of pressure (400 MPa to 40 GPa). The list of founding fathers of this field also includes Harry George Drickamer, Tracy Hall, and Francis P. Bundy, among others.
It was by applying high pressure as well as high temperature to carbon that man-made diamonds were first produced alongside many other interesting discoveries. Almost any material when subjected to high pressure will compact itself into a denser form, for example, quartz (also called silica or silicon dioxide) will first adopt a denser form known as coesite, then upon application of even higher pressure, form stishovite. These two forms of silica were first discovered by high-pressure experimenters, but then found in nature at the site of a meteor impact.
Chemical bonding is likely to change under high pressure, when the P*V term in the free energy becomes comparable to the energies of typical chemical bonds – i.e. at around 100 GPa. Among the most striking changes are metallization of oxygen at 96 GPa (rendering oxygen a superconductor), and transition of sodium from a nearly-free-electron metal to a transparent insulator at ~200 GPa. At ultimately high compression, however, all materials will metallize.
High-pressure experimentation has led to the discovery of the types of minerals which are believed to exist in the deep mantle of the Earth, such as silicate perovskite, which is thought to make up half of the Earth's bulk, and post-perovskite, which occurs at the core-mantle boundary and explains many anomalies inferred for that regio
Document 2:::
The rock cycle is a basic concept in geology that describes transitions through geologic time among the three main rock types: sedimentary, metamorphic, and igneous. Each rock type is altered when it is forced out of its equilibrium conditions. For example, an igneous rock such as basalt may break down and dissolve when exposed to the atmosphere, or melt as it is subducted under a continent. Due to the driving forces of the rock cycle, plate tectonics and the water cycle, rocks do not remain in equilibrium and change as they encounter new environments. The rock cycle explains how the three rock types are related to each other, and how processes change from one type to another over time. This cyclical aspect makes rock change a geologic cycle and, on planets containing life, a biogeochemical cycle.
Transition to igneous rock
When rocks are pushed deep under the Earth's surface, they may melt into magma. If the conditions no longer exist for the magma to stay in its liquid state, it cools and solidifies into an igneous rock. A rock that cools within the Earth is called intrusive or plutonic and cools very slowly, producing a coarse-grained texture such as the rock granite. As a result of volcanic activity, magma (called lava when it reaches Earth's surface) may cool very rapidly at the Earth's surface, exposed to the atmosphere; rocks formed this way are called extrusive or volcanic rocks. These rocks are fine-grained and sometimes cool so rapidly that no crystals can form, resulting in a natural glass such as obsidian; however, the most common fine-grained volcanic rock is basalt. Any of the three main types of rocks (igneous, sedimentary, and metamorphic) can melt into magma and cool into igneous rocks.
Secondary changes
Epigenetic change (secondary processes occurring at low temperatures and low pressures) may be arranged under a number of headings, each of which is typical of a group of rocks or rock-forming minerals, though usually more than one of these alt
Document 3:::
The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata. That is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events.
Correlating the rock record
At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods, for the particular geographic region or regions. The geologic record is in no one place entirely complete, for where geologic forces in one age provide a low-lying region that accumulates deposits much like a layer cake, in the next age they may uplift the region, so that the same area is instead weathering and being torn down by chemistry, wind, temperature, and water. This is to say that in a given location, the geologic record can be and is quite often interrupted as the ancient local environment was converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which extend to great depths, thoroughly support the law of superposition.
However using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale
Document 4:::
Blood Falls is an outflow of an iron oxide–tainted plume of saltwater, flowing from the tongue of Taylor Glacier onto the ice-covered surface of West Lake Bonney in the Taylor Valley of the McMurdo Dry Valleys in Victoria Land, East Antarctica.
Iron-rich hypersaline water sporadically emerges from small fissures in the ice cascades. The saltwater source is a subglacial pool of unknown size overlain by about of ice several kilometers from its tiny outlet at Blood Falls.
The reddish deposit was found in 1911 by the Australian geologist Thomas Griffith Taylor, who first explored the valley that bears his name. The Antarctica pioneers first attributed the red color to red algae, but later it was proven to be due to iron oxides.
Geochemistry
Poorly soluble hydrous ferric oxides are deposited at the surface of ice after the ferrous ions present in the unfrozen saltwater are oxidized in contact with atmospheric oxygen. The more soluble ferrous ions initially are dissolved in old seawater trapped in an ancient pocket remaining from the Antarctic Ocean when a fjord was isolated by the glacier in its progression during the Miocene period, some 5 million years ago, when the sea level was higher than today.
Unlike most Antarctic glaciers, the Taylor Glacier is not frozen to the bedrock, probably because of the presence of salts concentrated by the crystallization of the ancient seawater imprisoned below it. Salt cryo-concentration occurred in the deep relict seawater when pure ice crystallized and expelled its dissolved salts as it cooled down because of the heat exchange of the captive liquid seawater with the enormous ice mass of the glacier. As a consequence, the trapped seawater was concentrated in brines with a salinity two to three times that of the mean ocean water. A second mechanism sometimes also explaining the formation of hypersaline brines is the water evaporation of surface lakes directly exposed to the very dry polar atmosphere in the McMurdo Dry Valleys. Th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Intrusive igneous rocks cool from magma slowly in the crust and have large what?
A. atoms
B. crystals
C. pores
D. coal deposits
Answer:
|
|
sciq-6932
|
multiple_choice
|
The decay rate is measured in a unit called the what?
|
[
"decay rate",
"half-life",
"radioactive decay",
"exponential decay"
] |
B
|
Relevant Documents:
Document 0:::
Decay correction is a method of estimating the amount of radioactive decay at some set time before it was actually measured.
Example of use
Researchers often want to measure, say, medical compounds in the bodies of animals. These are hard to measure directly, so the compound can be chemically joined to a radionuclide; by measuring the radioactivity, researchers can get a good idea of how the original medical compound is being processed.
Samples may be collected and counted at short time intervals (ex: 1 and 4 hours). But they might be tested for radioactivity all at once. Decay correction is one way of working out what the radioactivity would have been at the time it was taken, rather than at the time it was tested.
For example, the isotope copper-64, commonly used in medical research, has a half-life of 12.7 hours. If you inject a large group of animals at "time zero", but measure the radioactivity in their organs at two later times, the later groups must be "decay corrected" to adjust for the decay that has occurred between the two time points.
Mathematics
The formula for decay correcting is:
A_t = A_0 · e^(−λt), or equivalently A_0 = A_t · e^(λt),
where A_0 is the original activity count at time zero, A_t is the activity at time t, λ is the decay constant, and t is the elapsed time.
The decay constant is λ = ln(2) / t_half, where t_half is the half-life of the radioactive material of interest.
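As a rough illustration added here (not part of the source article), the correction can be written in a few lines of Python; the copper-64 half-life of 12.7 hours comes from the text above, while the count value is invented.

import math

# Hedged sketch of decay correction: scale a measured count back to "time zero".
def decay_corrected(measured_counts, elapsed_hours, half_life_hours):
    decay_constant = math.log(2) / half_life_hours   # λ = ln(2) / half-life
    return measured_counts * math.exp(decay_constant * elapsed_hours)

# Copper-64 example: 1000 counts measured 4 hours after injection (invented numbers).
print(round(decay_corrected(1000, 4.0, 12.7)))       # about 1244 counts at time zero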
Example
The decay correction might be used this way: a group of 20 animals is injected with a compound of interest on a Monday at 10:00 a.m. The compound is chemically joined to the isotope copper-64, which has a known half-life of 12.7 hours, or 764 minutes. After one hour, the 5 animals in the "one hour" group are killed, dissected, and organs of interest are placed in sealed containers to await measurement. This is repeated for another 5 animals, at 2 hours, and again at 4 hours. At this point (say, 4:00 p.m. Monday) all the organs collected so far are measured for radioactivity (a proxy of the distribution of the compound of interest). The next day
Document 1:::
In the context of radioactivity, activity or total activity (symbol A) is a physical quantity defined as the number of radioactive transformations per second that occur in a particular radionuclide. The unit of activity is the becquerel (symbol Bq), which is defined as one reciprocal second (symbol s^−1), i.e. one decay per second. The older, non-SI unit of activity is the curie (Ci), which is 3.7 × 10^10 radioactive decays per second. Another unit of activity is the rutherford, which is defined as 10^6 radioactive decays per second.
Specific activity (symbol a) is the activity per unit mass of a radionuclide and is a physical property of that radionuclide.
It is usually given in units of becquerel per kilogram (Bq/kg), but another commonly used unit of specific activity is the curie per gram (Ci/g).
The specific activity should not be confused with level of exposure to ionizing radiation and thus the exposure or absorbed dose, which is the quantity important in assessing the effects of ionizing radiation on humans.
Since the probability of radioactive decay for a given radionuclide within a set time interval is fixed (with some slight exceptions, see changing decay rates), the number of decays that occur in a given time for a given mass (and hence a specific number of atoms) of that radionuclide is also fixed (ignoring statistical fluctuations).
Formulation
Relationship between λ and T1/2
Radioactivity is expressed as the decay rate of a particular radionuclide with decay constant λ and the number of atoms N:
−dN/dt = λN.
The integral solution is described by exponential decay:
N = N_0 e^(−λt),
where N_0 is the initial quantity of atoms at time t = 0.
Half-life T_1/2 is defined as the length of time for half of a given quantity of radioactive atoms to undergo radioactive decay:
N_0 / 2 = N_0 e^(−λ T_1/2).
Taking the natural logarithm of both sides, the half-life is given by
T_1/2 = ln(2) / λ.
Conversely, the decay constant λ can be derived from the half-life T_1/2 as
λ = ln(2) / T_1/2.
Calculation of specific activity
The mass of the radionuclide is given by
m = (N / N_A) × M,
where M i
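As an added illustration (not part of the source article), the specific-activity calculation sketched above can be coded directly; the radium-226 half-life and molar mass used below are the commonly quoted values, and the result of roughly 3.7 × 10^10 Bq/g matches the historical definition of the curie as the activity of one gram of radium.

import math

AVOGADRO = 6.02214076e23               # atoms per mole

def specific_activity_Bq_per_g(half_life_s, molar_mass_g_per_mol):
    # a = λ · N_A / M, in becquerels per gram
    decay_constant = math.log(2) / half_life_s
    return decay_constant * AVOGADRO / molar_mass_g_per_mol

# Radium-226: half-life about 1600 years, molar mass about 226 g/mol (commonly quoted values).
half_life_s = 1600 * 365.25 * 24 * 3600
print(f"{specific_activity_Bq_per_g(half_life_s, 226.0):.2e} Bq/g")   # about 3.7e10 Bq/g, i.e. roughly 1 Ci/g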
Document 2:::
The decay energy is the energy change of a nucleus having undergone a radioactive decay. Radioactive decay is the process in which an unstable atomic nucleus loses energy by emitting ionizing particles and radiation. This decay, or loss of energy, results in an atom of one type (called the parent nuclide) transforming to an atom of a different type (called the daughter nuclide).
Decay calculation
The energy difference of the reactants is often written as Q:
Q = Δm · c², where Δm is the difference between the rest mass of the parent and the total rest mass of the decay products.
Decay energy is usually quoted in terms of the energy units MeV (million electronvolts) or keV (thousand electronvolts):
Types of radioactive decay include
gamma ray
beta decay (decay energy is divided between the emitted electron and the neutrino which is emitted at the same time)
alpha decay
The decay energy is the mass difference Δm between the parent and the daughter atom and particles. It is equal to the energy of radiation E. If A is the radioactive activity, i.e. the number of transforming atoms per unit time, and M is the molar mass, then the radiation power P is:
P = E · A,
or, per gram of the radionuclide,
P = E · λ · N_A / M,
or, expressed through the half-life T,
P = E · (ln 2 / T) · N_A / M.
Example: 60Co decays into 60Ni. The mass difference Δm is 0.003 u. The radiated energy is approximately 2.8 MeV. The molar weight is 59.93 g/mol. The half-life T of 5.27 years corresponds to the activity A = N · ln(2) / T, where N is the number of atoms per mole and T is the half-life. Taking care of the units, the radiation power for 60Co is 17.9 W/g.
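The arithmetic of the 60Co example can be reproduced with a short Python sketch (an addition made here, not part of the source article); the only inputs are the numbers already quoted in the text, and small differences from 17.9 W/g come from rounding.

import math

AVOGADRO = 6.02214076e23
MEV_TO_JOULE = 1.602176634e-13

# Hedged check of the 60Co example: P = E · λ · N_A / M per gram of radionuclide.
half_life_s = 5.27 * 365.25 * 24 * 3600        # 5.27 years
decay_constant = math.log(2) / half_life_s
atoms_per_gram = AVOGADRO / 59.93              # molar mass 59.93 g/mol
energy_per_decay_J = 2.8 * MEV_TO_JOULE        # about 2.8 MeV radiated per decay

power_W_per_g = energy_per_decay_J * decay_constant * atoms_per_gram
print(f"{power_W_per_g:.1f} W/g")              # about 18 to 19 W/g, close to the quoted 17.9 W/g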
Radiation power in W/g for several isotopes:
60Co: 17.9
238Pu: 0.57
137Cs: 0.6
241Am: 0.1
210Po: 140 (T = 136d)
90Sr: 0.9
226Ra: 0.02
For use in radioisotope thermoelectric generators (RTGs), high decay energy combined with a long half-life is desirable. To reduce the cost and weight of radiation shielding, sources that do not emit strong gamma radiation are preferred. This table gives an indication why 238Pu, despite its enormous cost, with its roughly eighty-year half-life and low gamma emissions, has become the RTG nuclide of choice. 90Sr performs worse than 238Pu on almost all measures, being shorter lived, a beta emitt
Document 3:::
ISO 31-10 is the part of international standard ISO 31 that defines names and symbols for quantities and units related to nuclear reactions and ionizing radiations. It gives names and symbols for 70 quantities and units. Where appropriate, conversion factors are also given.
Its definitions include:
Document 4:::
In nuclear science, the decay chain refers to a series of radioactive decays of different radioactive decay products as a sequential series of transformations. It is also known as a "radioactive cascade". The typical radioisotope does not decay directly to a stable state, but rather it decays to another radioisotope. Thus there is usually a series of decays until the atom has become a stable isotope, meaning that the nucleus of the atom has reached a stable state.
Decay stages are referred to by their relationship to previous or subsequent stages. A parent isotope is one that undergoes decay to form a daughter isotope. One example of this is uranium (atomic number 92) decaying into thorium (atomic number 90). The daughter isotope may be stable or it may decay to form a daughter isotope of its own. The daughter of a daughter isotope is sometimes called a granddaughter isotope. Note that the parent isotope becomes the daughter isotope, unlike in the case of a biological parent and daughter.
The time it takes for a single parent atom to decay to an atom of its daughter isotope can vary widely, not only between different parent-daughter pairs, but also randomly between identical pairings of parent and daughter isotopes. The decay of each single atom occurs spontaneously, and the decay of an initial population of identical atoms over time t follows a decaying exponential distribution, e^(−λt), where λ is called a decay constant. One of the properties of an isotope is its half-life, the time by which half of an initial number of identical parent radioisotopes can be expected statistically to have decayed to their daughters, which is inversely related to λ. Half-lives have been determined in laboratories for many radioisotopes (or radionuclides). These can range from nearly instantaneous (less than 10^−21 seconds) to more than 10^19 years.
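As an added aside (not from the source article), the populations in the simplest chain, parent to daughter to stable granddaughter, can be followed with the standard two-member Bateman solution; the half-lives below are invented purely for illustration.

import math

# Hedged sketch of a two-step decay chain (assumes the two decay constants differ).
def chain_populations(N0, t, half_life_parent, half_life_daughter):
    l1 = math.log(2) / half_life_parent
    l2 = math.log(2) / half_life_daughter
    parent = N0 * math.exp(-l1 * t)
    daughter = N0 * l1 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))
    stable = N0 - parent - daughter
    return parent, daughter, stable

# Invented half-lives (in hours), starting from 1000 parent atoms.
for t in (0.0, 1.0, 5.0, 20.0):
    print(t, [round(n, 1) for n in chain_populations(1000.0, t, 3.0, 1.0)])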
The intermediate stages each emit the same amount of radioactivity as the original radioisotope (i.e., there is a one-to-one relationsh
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The decay rate is measured in a unit called the what?
A. decay rate
B. half-life
C. radioactive decay
D. exponential decay
Answer:
|
|
sciq-5778
|
multiple_choice
|
What bone forms the upper jaw and supports the upper teeth?
|
[
"orbital bone",
"tibular bone",
"subaerial bone",
"maxillary bone"
] |
D
|
Relevant Documents:
Document 0:::
The maxilla (plural: maxillae) in vertebrates is the upper fixed (not fixed in Neopterygii) bone of the jaw formed from the fusion of two maxillary bones. In humans, the upper jaw includes the hard palate in the front of the mouth. The two maxillary bones are fused at the intermaxillary suture, forming the anterior nasal spine. This is similar to the mandible (lower jaw), which is also a fusion of two mandibular bones at the mandibular symphysis. The mandible is the movable part of the jaw.
Anatomy
Structure
The maxilla is a paired bone - the two maxillae unite with each other at the intermaxillary suture. The maxilla consists of:
The body of the maxilla: pyramid-shaped; has an orbital, a nasal, an infratemporal, and a facial surface; contains the maxillary sinus.
Four processes:
the zygomatic process
the frontal process
the alveolar process
the palatine process
It has three surfaces:
the anterior, posterior, medial
Features of the maxilla include:
the infraorbital sulcus, canal, and foramen
the maxillary sinus
the incisive foramen
Articulations
Each maxilla articulates with nine bones: frontal, ethmoid, nasal, zygomatic, lacrimal, and palatine bones, the vomer, the inferior nasal concha, as well as the maxilla of the other side.
Sometimes it articulates with the orbital surface, and sometimes with the lateral pterygoid plate of the sphenoid.
Development
The maxilla is ossified in membrane. Mall and Fawcett maintain that it is ossified from two centers only, one for the maxilla proper and one for the premaxilla.
These centers appear during the sixth week of prenatal development and unite in the beginning of the third month, but the suture between the two portions persists on the palate until nearly middle life. Mall states that the frontal process is developed from both centers.
The maxillary sinus appears as a shallow groove on the nasal surface of the bone about the fourth month of development, but does
Document 1:::
Dental anatomy is a field of anatomy dedicated to the study of human tooth structures. The development, appearance, and classification of teeth fall within its purview. (The function of teeth as they contact one another falls elsewhere, under dental occlusion.) Tooth formation begins before birth, and the teeth's eventual morphology is dictated during this time. Dental anatomy is also a taxonomical science: it is concerned with the naming of teeth and the structures of which they are made, this information serving a practical purpose in dental treatment.
Usually, there are 20 primary ("baby") teeth and 32 permanent teeth, the last four being third molars or "wisdom teeth", each of which may or may not grow in. Among primary teeth, 10 usually are found in the maxilla (upper jaw) and the other 10 in the mandible (lower jaw). Among permanent teeth, 16 are found in the maxilla and the other 16 in the mandible. Each tooth has specific distinguishing features.
Growing of tooth
Tooth development is the complex process by which teeth form from embryonic cells, grow, and erupt into the mouth. Although many diverse species have teeth, non-human tooth development is largely the same as in humans. For human teeth to have a healthy oral environment, enamel, dentin, cementum, and the periodontium must all develop during appropriate stages of fetal development. Primary (baby) teeth start to form between the sixth and eighth weeks in utero, and permanent teeth begin to form in the twentieth week in utero. If teeth do not start to develop at or near these times, they will not develop at all.
A significant amount of research has focused on determining the processes that initiate tooth development. It is widely accepted that there is a factor within the tissues of the first branchial arch that is necessary for the development of teeth. The tooth bud (sometimes called the tooth germ) is an aggregation of cells that eventually forms a tooth and is organized into three parts: th
Document 2:::
In anatomy, Underwood's septa (or maxillary sinus septa, singular septum) are fin-shaped projections of bone that may exist in the maxillary sinus, first described in 1910 by Arthur S. Underwood, an anatomist at King's College in London. The presence of septa at or near the floor of the sinus are of interest to the dental clinician when proposing or performing sinus floor elevation procedures because of an increased likelihood of surgical complications, such as tearing of the Schneiderian membrane.
The prevalence of Underwood's septa in relation to the floor of the maxillary sinus has been reported at nearly 32%.
Location of septa in the sinus
Underwood divided the maxillary sinus into three regions relating to zones of distinct tooth eruption activity: anterior (corresponding to the premolars), middle (corresponding to the first molar) and posterior (corresponding to the second molar). Thus, he asserted, these septa always arise between teeth and never opposite the middle of a tooth.
Different studies reveal a different predisposition for the presence of septa based on sinus region:
Anterior: Ulm, et al., Krennmair et al.
Middle: Velásquez-Plata et al., Kim et al. and González-Santana et al.
Posterior: Underwood
Primary vs. secondary septa
Recent studies have classified two types of maxillary sinus septa: primary and secondary. Primary septa are those initially described by Underwood and that form as a result of the floor of the sinus sinking along with the roots of erupting teeth; these primary septa are thus generally found in the sinus corresponding to the space between teeth, as explained by Underwood. Conversely, secondary septa form as a result of irregular pneumatization of the sinus following loss of maxillary posterior teeth. Sinus pneumatization is a poorly understood phenomenon that results in an increased volume of the maxillary sinus, generally following maxillary posterior tooth loss, at the expense of the bone which used to house the root
Document 3:::
Changes to the dental morphology and jaw are major elements of hominid evolution. These changes were driven by the types and processing of food eaten. The evolution of the jaw is thought to have facilitated encephalization, speech, and the formation of the uniquely human chin.
Background
Today, humans possess 32 permanent teeth with a dental formula of 2.1.2.3. This breaks down to two pairs of incisors, one pair of canines, two pairs of premolars, and three pairs of molars on each jaw. In modern-day humans, incisors are generally spatulate with a single root, while canines are also single rooted but are single cusped and conical. Premolars are bicuspid while molars are multi-cuspid. The upper molars have three roots while the lower molars have two roots.
General patterns of dental morphological evolution throughout human evolution include a reduction in facial prognathism, the presence of a Y5 cusp pattern, the formation of a parabolic palate and the loss of the diastema.
Human teeth are made of dentin and are covered by enamel in the areas that are exposed. Enamel, itself, is composed of hydroxyapatite, a calcium phosphate crystal. The various types of human teeth perform different functions. Incisors are used to cut food, canines are used to tear food, and the premolars and molars are used to crush and grind food.
History
Hominidae
Chimpanzees
According to the theory of evolution, humans evolved from a common ancestor of chimpanzees. Researchers hypothesize that the earliest hominid ancestor would have similar dental morphology to chimpanzees today. Thus, comparisons between chimpanzees and Homo sapiens could be used to identify major differences. Major characterizing features of Pan troglodytes dental morphology include the presence of peripherally located cusps, thin enamel, and strong facial prognathism.
Earliest Hominids
Sahelanthropus tchadensis
Sahelanthropus tchadensis is thought to be one of the earliest species belonging to the human lineage. Fossil
Document 4:::
Human teeth function to mechanically break down items of food by cutting and crushing them in preparation for swallowing and digesting. As such, they are considered part of the human digestive system. Humans have four types of teeth: incisors, canines, premolars, and molars, which each have a specific function. The incisors cut the food, the canines tear the food and the molars and premolars crush the food. The roots of teeth are embedded in the maxilla (upper jaw) or the mandible (lower jaw) and are covered by gums. Teeth are made of multiple tissues of varying density and hardness.
Humans, like most other mammals, are diphyodont, meaning that they develop two sets of teeth. The first set, deciduous teeth, also called "primary teeth", "baby teeth", or "milk teeth", normally eventually contains 20 teeth. Primary teeth typically start to appear ("erupt") around six months of age and this may be distracting and/or painful for the infant. However, some babies are born with one or more visible teeth, known as neonatal teeth or "natal teeth".
Anatomy
Dental anatomy is a field of anatomy dedicated to the study of tooth structure. The development, appearance, and classification of teeth fall within its field of study, though dental occlusion, or contact between teeth, does not. Dental anatomy is also a taxonomic science as it is concerned with the naming of teeth and their structures. This information serves a practical purpose for dentists, enabling them to easily identify and describe teeth and structures during treatment.
The anatomic crown of a tooth is the area covered in enamel above the cementoenamel junction (CEJ) or "neck" of the tooth. Most of the crown is composed of dentin ("dentine" in British English) with the pulp chamber inside. The crown is within bone before eruption. After eruption, it is almost always visible. The anatomic root is found below the CEJ and is covered with cementum. As with the crown, dentin composes most of the root, which normally h
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What bone forms the upper jaw and supports the upper teeth?
A. orbital bone
B. tibular bone
C. subaerial bone
D. maxillary bone
Answer:
|
|
sciq-8823
|
multiple_choice
|
How many valence electrons does helium have?
|
[
"three",
"Five",
"two",
"six"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
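As a small added illustration (not part of the source article), the defining closure properties of a knowledge space, namely that the family of feasible states contains the empty set and the whole domain and is closed under union, can be checked mechanically; the example domain and states below are invented.

from itertools import combinations

# Hedged sketch: check that a family of knowledge states forms a knowledge space,
# i.e. it contains the empty set and the full domain Q and is closed under union.
def is_knowledge_space(Q, states):
    family = {frozenset(s) for s in states}
    if frozenset() not in family or frozenset(Q) not in family:
        return False
    return all((a | b) in family for a, b in combinations(family, 2))

Q = {"counting", "addition", "multiplication"}                  # invented domain
states = [set(), {"counting"}, {"counting", "addition"}, Q]     # invented feasible states
print(is_knowledge_space(Q, states))                            # True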
Document 2:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 3:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 4:::
In chemistry and physics, valence electrons are electrons in the outermost shell of an atom, and that can participate in the formation of a chemical bond if the outermost shell is not closed. In a single covalent bond, a shared pair forms with both atoms in the bond each contributing one valence electron.
The presence of valence electrons can determine the element's chemical properties, such as its valence—whether it may bond with other elements and, if so, how readily and with how many. In this way, a given element's reactivity is highly dependent upon its electronic configuration. For a main-group element, a valence electron can exist only in the outermost electron shell; for a transition metal, a valence electron can also be in an inner shell.
An atom with a closed shell of valence electrons (corresponding to a noble gas configuration) tends to be chemically inert. Atoms with one or two valence electrons more than a closed shell are highly reactive due to the relatively low energy to remove the extra valence electrons to form a positive ion. An atom with one or two electrons fewer than a closed shell is reactive due to its tendency either to gain the missing valence electrons and form a negative ion, or else to share valence electrons and form a covalent bond.
Similar to a core electron, a valence electron has the ability to absorb or release energy in the form of a photon. An energy gain can trigger the electron to move (jump) to an outer shell; this is known as atomic excitation. Or the electron can even break free from its associated atom's shell; this is ionization to form a positive ion. When an electron loses energy (thereby causing a photon to be emitted), then it can move to an inner shell which is not fully occupied.
Overview
Electron configuration
The electrons that determine valence – how an atom reacts chemically – are those with the highest energy.
For a main-group element, the valence electrons are defined as those electrons residing in the e
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How many valence electrons does helium have?
A. three
B. Five
C. two
D. six
Answer:
|
|
sciq-10992
|
multiple_choice
|
What is the term for flowering seed plants?
|
[
"angiosperms",
"spores",
"perennials",
"gymnosperms"
] |
A
|
Relevant Documents:
Document 0:::
Mesangiospermae (core angiosperms) is a clade of flowering plants (angiosperms), informally called "mesangiosperms". They are one of two main groups of angiosperms. It is a name created under the rules of the PhyloCode system of phylogenetic nomenclature. There are about 350,000 species of mesangiosperms. The mesangiosperms contain about 99.95% of the flowering plants, assuming that there are about 175 species not in this group and about 350,000 that are. While such a clade with a similar circumscription exists in the APG III system, it was not given a name.
Phylogeny
Besides the mesangiosperms, the other groups of flowering plants are Amborellales, Nymphaeales, and Austrobaileyales. These constitute a paraphyletic grade called basal angiosperms. The order names, ending in -ales are used here without reference to taxonomic rank because these groups contain only one order.
Mesangiospermae includes the following clades:
Ceratophyllales
Chloranthales
eudicots
magnoliidae
monocots
Name
The mesangiosperms are usually recognized in classification systems that do not assign groups to taxonomic rank. The name Mesangiospermae is a branch-modified node-based name in phylogenetic nomenclature. It is defined as the most inclusive crown clade containing Platanus occidentalis, but not Amborella trichopoda, Nymphaea odorata, or Austrobaileya scandens. It is sometimes written as /Mesangiospermae even though this is not required by the PhyloCode. The "clademark" slash indicates that the term is intended as phylogenetically defined.
Description
In molecular phylogenetic studies, the mesangiosperms are always strongly supported as a monophyletic group. There is no distinguishing characteristic which is found in all mature mesangiosperms but which is not found in any of the basal angiosperms. Nevertheless, the mesangiosperms are recognizable in the earliest stage of embryonic development. The ovule contains a megagametophyte, also known as an embryo sac, that is bipolar in
Document 1:::
The gymnosperms (lit. 'revealed seeds') are a group of seed-producing plants that includes conifers, cycads, Ginkgo, and gnetophytes, forming the clade Gymnospermae. The term gymnosperm comes from the Greek composite word γυμνόσπερμος (gymnos, 'naked', and sperma, 'seed'), literally meaning 'naked seeds'. The name is based on the unenclosed condition of their seeds (called ovules in their unfertilized state). The non-encased condition of their seeds contrasts with the seeds and ovules of flowering plants (angiosperms), which are enclosed within an ovary. Gymnosperm seeds develop either on the surface of scales or leaves, which are often modified to form cones, or on their own as in yew, Torreya, and Ginkgo. Gymnosperm lifecycles involve alternation of generations. They have a dominant diploid sporophyte phase and a reduced haploid gametophyte phase which is dependent on the sporophytic phase. The term "gymnosperm" is often used in paleobotany to refer to (the paraphyletic group of) all non-angiosperm seed plants. In that case, to specify the modern monophyletic group of gymnosperms, the term Acrogymnospermae is sometimes used.
The gymnosperms and angiosperms together comprise the spermatophytes or seed plants. The gymnosperms are subdivided into five Divisions, four of which, the Cycadophyta, Ginkgophyta, Gnetophyta, and Pinophyta (also known as Coniferophyta) are still in existence while the Pteridospermatophyta are now extinct. Newer classification place the gnetophytes among the conifers.
By far the largest group of living gymnosperms are the conifers (pines, cypresses, and relatives), followed by cycads, gnetophytes (Gnetum, Ephedra and Welwitschia), and Ginkgo biloba (a single living species). About 65% of gymnosperms are dioecious, but conifers are almost all monoecious.
Document 2:::
The fossil history of flowering plants records the development of flowers and other distinctive structures of the angiosperms, now the dominant group of plants on land. The history is controversial as flowering plants appear in great diversity in the Cretaceous, with scanty and debatable records before that, creating a puzzle for evolutionary biologists that Charles Darwin named an "abominable mystery".
Paleozoic
Fossilised spores suggest that land plants (embryophytes) have existed for at least 475 million years. Early land plants reproduced sexually with flagellated, swimming sperm, like the green algae from which they evolved. An adaptation to terrestrial life was the development of upright sporangia for dispersal by spores to new habitats. This feature is lacking in the descendants of their nearest algal relatives, the Charophycean green algae. A later terrestrial adaptation took place with retention of the delicate, avascular sexual stage, the gametophyte, within the tissues of the vascular sporophyte. This occurred by spore germination within sporangia rather than spore release, as in non-seed plants. A current example of how this might have happened can be seen in the precocious spore germination in Selaginella, the spike-moss. The result for the ancestors of angiosperms and gymnosperms was enclosing the female gamete in a case, the seed.
The first seed-bearing plants were gymnosperms, like the ginkgo, and conifers (such as pines and firs). These did not produce flowers. The pollen grains (male gametophytes) of Ginkgo and cycads produce a pair of flagellated, mobile sperm cells that "swim" down the developing pollen tube to the female and her eggs.
Angiosperms appear suddenly and in great diversity in the fossil record in the Early Cretaceous. This poses such a problem for the theory of gradual evolution that Charles Darwin called it an "abominable mystery". Several groups of extinct gymnosperms, in particular seed ferns, have been proposed as the ancest
Document 3:::
Macroflora is a term used for all the plants occurring in a particular area that are large enough to be seen with the naked eye. It is usually synonymous with the Flora and can be contrasted with the microflora, a term used for all the bacteria and other microorganisms in an ecosystem.
Macroflora is also an informal term used by many palaeobotanists to refer to an assemblage of plant fossils as preserved in the rock. This is in contrast to the flora, which in this context refers to the assemblage of living plants that were growing in a particular area, whose fragmentary remains became entrapped within the sediment from which the rock was formed and thus became the macroflora.
Document 4:::
Agrostology (from Greek ἄγρωστις, agrōstis, "type of grass", and -λογία, -logia, "study of"), sometimes graminology, is the scientific study of the grasses (the family Poaceae, or Gramineae). The grasslike species of the sedge family (Cyperaceae), the rush family (Juncaceae), and the bulrush or cattail family (Typhaceae) are often included with the true grasses in the category of graminoid, although strictly speaking these are not included within the study of agrostology. In contrast to the word graminoid, the words gramineous and graminaceous are normally used to mean "of, or relating to, the true grasses (Poaceae)".
Agrostology has importance in the maintenance of wild and grazed grasslands, agriculture (crop plants such as rice, maize, sugarcane, and wheat are grasses, and many types of animal fodder are grasses), urban and environmental horticulture, turfgrass management and sod production, ecology, and conservation.
Botanists that made important contributions to agrostology include:
Jean Bosser
Aimée Antoinette Camus
Mary Agnes Chase
Eduard Hackel
Charles Edward Hubbard
A. S. Hitchcock
Ernst Gottlieb von Steudel
Otto Stapf
Joseph Dalton Hooker
Norman Loftus Bor
Jan-Frits Veldkamp
William Derek Clayton
Robert B Shaw
Thomas Arthur Cope
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for flowering seed plants?
A. angiosperms
B. spores
C. perennials
D. gymnosperms
Answer:
|
|
sciq-10645
|
multiple_choice
|
What kind of reactions absorb energy from their surroundings as they occur?
|
[
"endothermic",
"hydrostatic",
"autotrophic",
"exothermic"
] |
A
|
Relevant Documents:
Document 0:::
Energy flow is the flow of energy through living things within an ecosystem. All living organisms can be organized into producers and consumers, and those producers and consumers can further be organized into a food chain. Each of the levels within the food chain is a trophic level. In order to more efficiently show the quantity of organisms at each trophic level, these food chains are then organized into trophic pyramids. The arrows in the food chain show that the energy flow is unidirectional, with the head of an arrow indicating the direction of energy flow; energy is lost as heat at each step along the way.
The unidirectional flow of energy and the successive loss of energy as it travels up the food web are patterns in energy flow that are governed by thermodynamics, which is the theory of energy exchange between systems. Trophic dynamics relates to thermodynamics because it deals with the transfer and transformation of energy (originating externally from the sun via solar radiation) to and among organisms.
Energetics and the carbon cycle
The first step in energetics is photosynthesis, wherein water and carbon dioxide from the air are taken in with energy from the sun, and are converted into oxygen and glucose. Cellular respiration is the reverse reaction, wherein oxygen and sugar are taken in and release energy as they are converted back into carbon dioxide and water. The carbon dioxide and water produced by respiration can be recycled back into plants.
Energy loss can be measured either by efficiency (how much energy makes it to the next level), or by biomass (how much living material exists at those levels at one point in time, measured by standing crop). Of all the net primary productivity at the producer trophic level, in general only 10% goes to the next level, the primary consumers, then only 10% of that 10% goes on to the next trophic level, and so on up the food pyramid. Ecological efficiency may be anywhere from 5% to 20% depending on how efficient
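As a quick numerical aside added here (not part of the source article), the repeated loss described above compounds rapidly; the Python sketch below assumes a flat 10% transfer efficiency and an arbitrary starting energy.

# Hedged sketch of the roughly-10%-per-level trophic transfer described above.
energy = 100000.0    # arbitrary units of net primary productivity at the producer level
for level in ("producers", "primary consumers", "secondary consumers", "tertiary consumers"):
    print(f"{level}: {energy:.0f}")
    energy *= 0.10   # about 10% passes upward; the rest is lost, largely as heat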
Document 1:::
A reversible reaction is a reaction in which the conversion of reactants to products and the conversion of products to reactants occur simultaneously.
aA + bB ⇌ cC + dD
A and B can react to form C and D or, in the reverse reaction, C and D can react to form A and B. This is distinct from a reversible process in thermodynamics.
Weak acids and bases undergo reversible reactions. For example, carbonic acid:
H2CO3 (l) + H2O(l) ⇌ HCO3−(aq) + H3O+(aq).
The concentrations of reactants and products in an equilibrium mixture are determined by the analytical concentrations of the reagents (A and B or C and D) and the equilibrium constant, K. The magnitude of the equilibrium constant depends on the Gibbs free energy change for the reaction. So, when the free energy change is large (more than about 30 kJ mol−1), the equilibrium constant is large (log K > 3) and the concentrations of the reactants at equilibrium are very small. Such a reaction is sometimes considered to be an irreversible reaction, although small amounts of the reactants are still expected to be present in the reacting system. A truly irreversible chemical reaction is usually achieved when one of the products exits the reacting system, for example, as does carbon dioxide (volatile) in the reaction
CaCO3 + 2HCl → CaCl2 + H2O + CO2↑
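To connect the numbers in the paragraph above (an addition made here, not part of the source article), the standard relation ΔG° = −RT ln K shows that a free-energy change of about 30 kJ mol−1 does indeed correspond to log K well above 3 at room temperature.

import math

R = 8.314        # gas constant, J/(mol·K)
T = 298.15       # room temperature, K

def log10_K(delta_G_J_per_mol):
    # log10 of the equilibrium constant from ΔG° = −RT ln K
    return -delta_G_J_per_mol / (R * T) / math.log(10)

print(round(log10_K(-30_000), 2))   # about 5.3, comfortably above log K = 3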
History
The concept of a reversible reaction was introduced by Claude Louis Berthollet in 1803, after he had observed the formation of sodium carbonate crystals at the edge of a salt lake (one of the natron lakes in Egypt, in limestone):
2NaCl + CaCO3 → Na2CO3 + CaCl2
He recognized this as the reverse of the familiar reaction
Na2CO3 + CaCl2→ 2NaCl + CaCO3
Until then, chemical reactions were thought to always proceed in one direction. Berthollet reasoned that the excess of salt in the lake helped push the "reverse" reaction towards the formation of sodium carbonate.
In 1864, Peter Waage and Cato Maximilian Guldberg formulated their
Document 2:::
The energy systems language, also referred to as energese, or energy circuit language, or generic systems symbols, is a modelling language used for composing energy flow diagrams in the field of systems ecology. It was developed by Howard T. Odum and colleagues in the 1950s during studies of the tropical forests funded by the United States Atomic Energy Commission.
Design intent
The design intent of the energy systems language was to facilitate the generic depiction of energy flows through any scale system while encompassing the laws of physics, and in particular, the laws of thermodynamics (see energy transformation for an example).
In particular H.T. Odum aimed to produce a language which could facilitate the intellectual analysis, engineering synthesis and management of global systems such as the geobiosphere, and its many subsystems. Within this aim, H.T. Odum had a strong concern that many abstract mathematical models of such systems were not thermodynamically valid. Hence he used analog computers to make system models due to their intrinsic value; that is, the electronic circuits are of value for modelling natural systems which are assumed to obey the laws of energy flow, because, in themselves the circuits, like natural systems, also obey the known laws of energy flow, where the energy form is electrical. However Odum was interested not only in the electronic circuits themselves, but also in how they might be used as formal analogies for modeling other systems which also had energy flowing through them. As a result, Odum did not restrict his inquiry to the analysis and synthesis of any one system in isolation. The discipline that is most often associated with this kind of approach, together with the use of the energy systems language is known as systems ecology.
General characteristics
When applying the electronic circuits (and schematics) to modeling ecological and economic systems, Odum believed that generic categories, or characteristic modules, could
Document 3:::
In chemistry and particularly biochemistry, an energy-rich species (usually energy-rich molecule) or high-energy species (usually high-energy molecule) is a chemical species which reacts, potentially with other species found in the environment, to release chemical energy.
In particular, the term is often used for:
adenosine triphosphate (ATP) and similar molecules called high-energy phosphates, which release inorganic phosphate into the environment in an exothermic reaction with water:
ATP + H2O → ADP + Pi ΔG°' = −30.5 kJ/mol (−7.3 kcal/mol)
fuels such as hydrocarbons, carbohydrates, lipids, proteins, and other organic molecules which react with oxygen in the environment to ultimately form carbon dioxide, water, and sometimes nitrogen, sulfates, and phosphates
molecular hydrogen
monatomic oxygen, ozone, hydrogen peroxide, singlet oxygen and other metastable or unstable species which spontaneously react without further reactants
in particular, the vast majority of free radicals
explosives such as nitroglycerin and other substances which react exothermically without requiring a second reactant
metals or metal ions which can be oxidized to release energy
This is contrasted to species that are either part of the environment (this sometimes includes diatomic triplet oxygen) or do not react with the environment (such as many metal oxides or calcium carbonate); those species are not considered energy-rich or high-energy species.
Alternative definitions
The term is often used without a definition. Some authors define the term "high-energy" to be equivalent to "chemically unstable", while others reserve the term for high-energy phosphates; the Great Soviet Encyclopedia, for example, defines the term "high-energy compounds" to refer exclusively to those.
The IUPAC glossary of terms used in ecotoxicology defines a primary producer as an "organism capable of using the energy derived from light or a chemical substance in order to manufacture energy-rich organic compou
Document 4:::
Physical biochemistry is a branch of biochemistry that deals with the theory, techniques, and methodology used to study the physical chemistry of biomolecules.
It also deals with the mathematical approaches for the analysis of biochemical reaction and the modelling of biological systems. It provides insight into the structure of macromolecules, and how chemical structure influences the physical properties of a biological substance.
It involves the use of physics, physical chemistry principles, and methodology to study biological systems. It employs various physical chemistry techniques such as chromatography, spectroscopy, Electrophoresis, X-ray crystallography, electron microscopy, and hydrodynamics.
See also
Physical chemistry
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What kind of reactions absorb energy from their surroundings as they occur?
A. endothermic
B. hydrostatic
C. autotrophic
D. exothermic
Answer:
|
|
scienceQA-921
|
multiple_choice
|
Select the vertebrate.
|
[
"red-kneed tarantula",
"giant octopus",
"red-tailed hawk",
"castor bean tick"
] |
C
|
Like other tarantulas, a red-kneed tarantula is an invertebrate. It does not have a backbone. It has an exoskeleton.
Like other octopuses, a giant octopus is an invertebrate. It does not have a backbone. It has a soft body.
A castor bean tick is an arachnid, not an insect. Like other arachnids, a castor bean tick is an invertebrate. It does not have a backbone. It has an exoskeleton.
A red-tailed hawk is a bird. Like other birds, a red-tailed hawk is a vertebrate. It has a backbone.
|
Relavent Documents:
Document 0:::
Vertebrate zoology is the biological discipline that consists of the study of Vertebrate animals, i.e., animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. Many natural history museums have departments named Vertebrate Zoology. In some cases whole museums bear this name, e.g. the Museum of Vertebrate Zoology at the University of California, Berkeley.
Subdivisions
This subdivision of zoology has many further subdivisions, including:
Ichthyology - the study of fishes.
Mammalogy - the study of mammals.
Chiropterology - the study of bats.
Primatology - the study of primates.
Ornithology - the study of birds.
Herpetology - the study of reptiles.
Batrachology - the study of amphibians.
These divisions are sometimes further divided into more specific specialties.
Document 1:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad
Document 2:::
Animals are multicellular eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million in total. Animals range in size from 8.5 millionths of a metre to long and have complex interactions with each other and their environments, forming intricate food webs. The study of animals is called zoology.
Animals may be listed or indexed by many criteria, including taxonomy, status as endangered species, their geographical location, and their portrayal and/or naming in human culture.
By common name
List of animal names (male, female, young, and group)
By aspect
List of common household pests
List of animal sounds
List of animals by number of neurons
By domestication
List of domesticated animals
By eating behaviour
List of herbivorous animals
List of omnivores
List of carnivores
By endangered status
IUCN Red List endangered species (Animalia)
United States Fish and Wildlife Service list of endangered species
By extinction
List of extinct animals
List of extinct birds
List of extinct mammals
List of extinct cetaceans
List of extinct butterflies
By region
Lists of amphibians by region
Lists of birds by region
Lists of mammals by region
Lists of reptiles by region
By individual (real or fictional)
Real
Lists of snakes
List of individual cats
List of oldest cats
List of giant squids
List of individual elephants
List of historical horses
List of leading Thoroughbred racehorses
List of individual apes
List of individual bears
List of giant pandas
List of individual birds
List of individual bovines
List of individual cetaceans
List of individual dogs
List of oldest dogs
List of individual monkeys
List of individual pigs
List of w
Document 3:::
History of Animals (, Ton peri ta zoia historion, "Inquiries on Animals"; , "History of Animals") is one of the major texts on biology by the ancient Greek philosopher Aristotle, who had studied at Plato's Academy in Athens. It was written in the fourth century BC; Aristotle died in 322 BC.
Generally seen as a pioneering work of zoology, Aristotle frames his text by explaining that he is investigating the what (the existing facts about animals) prior to establishing the why (the causes of these characteristics). The book is thus an attempt to apply philosophy to part of the natural world. Throughout the work, Aristotle seeks to identify differences, both between individuals and between groups. A group is established when it is seen that all members have the same set of distinguishing features; for example, that all birds have feathers, wings, and beaks. This relationship between the birds and their features is recognized as a universal.
The History of Animals contains many accurate eye-witness observations, in particular of the marine biology around the island of Lesbos, such as that the octopus had colour-changing abilities and a sperm-transferring tentacle, that the young of a dogfish grow inside their mother's body, or that the male of a river catfish guards the eggs after the female has left. Some of these were long considered fanciful before being rediscovered in the nineteenth century. Aristotle has been accused of making errors, but some are due to misinterpretation of his text, and others may have been based on genuine observation. He did however make somewhat uncritical use of evidence from other people, such as travellers and beekeepers.
The History of Animals had a powerful influence on zoology for some two thousand years. It continued to be a primary source of knowledge until zoologists in the sixteenth century, such as Conrad Gessner, all influenced by Aristotle, wrote their own studies of the subject.
Context
Aristotle (384–322 BC) studied at Plat
Document 4:::
Roshd Biological Education is a quarterly science educational magazine covering recent developments in biology and biology education for a Persian-speaking audience of biology teachers. Founded in 1985, it is published by The Teaching Aids Publication Bureau, Organization for Educational Planning and Research, Ministry of Education, Iran. Roshd Biological Education has an editorial board composed of Iranian biologists, experts in biology education, science journalists and biology teachers.
It is read by both biology teachers and students, as a way of launching innovations and new trends in biology education, and helping biology teachers to teach biology in better and more effective ways.
Magazine layout
As of Autumn 2012, the magazine is laid out as follows:
Editorial—often offering a view of point from editor in chief on an educational and/or biological topics.
Explore— New research methods and results on biology and/or education.
World— Reports and explores on biological education worldwide.
In Brief—Summaries of research news and discoveries.
Trends—showing how new technology is altering the way we live our lives.
Point of View—Offering personal commentaries on contemporary topics.
Essay or Interview—often with a pioneer of a biological and/or educational researcher or an influential scientific educational leader.
Muslim Biologists—Short histories of Muslim Biologists.
Environment—An article on Iranian environment and its problems.
News and Reports—Offering short news and reports events on biology education.
In Brief—Short articles explaining interesting facts.
Questions and Answers—Questions about biology concepts and their answers.
Book and periodical Reviews—About new publication on biology and/or education.
Reactions—Letter to the editors.
Editorial staff
Mohammad Karamudini, editor in chief
History
Roshd Biological Education started in 1985 together with many other magazines in other science and art. The first editor was Dr. Nouri-Dalooi, th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Select the vertebrate.
A. red-kneed tarantula
B. giant octopus
C. red-tailed hawk
D. castor bean tick
Answer:
|
ai2_arc-403
|
multiple_choice
|
The attachment of methyl radicals to genes helps regulate which property?
|
[
"information genes store",
"mode of gene inheritance",
"gene expression",
"gene coding system"
] |
C
|
Relavent Documents:
Document 0:::
1-Methylcytosine is a methylated form of the DNA base cytosine.
In 1-methylcytosine, a methyl group is attached to the 1st atom in the 6-atom ring. This methyl group distinguishes 1-methylcytosine from cytosine.
History
Miriam Rossi worked on the refinement of 1-methylcytosine.
1-Methylcytosine is used as a nucleobase of hachimoji DNA, in which it pairs with isoguanine.
Document 1:::
CheR proteins are part of the chemotaxis signaling mechanism which methylates the chemotaxis receptor at specific glutamate residues. Methyl transfer from the ubiquitous S-adenosyl-L-methionine (AdoMet/SAM) to either nitrogen, oxygen or carbon atoms is frequently employed in diverse organisms ranging from bacteria to plants and mammals. The reaction is catalysed by methyltransferases (Mtases) and modifies DNA, RNA, proteins and small molecules, such as catechol for regulatory purposes. The various aspects of the role of DNA methylation in prokaryotic restriction-modification systems and in a number of cellular processes i
Document 2:::
This is a list of topics in molecular biology. See also index of biochemistry articles.
Document 3:::
Nutritional epigenetics is a science that studies the effects of nutrition on gene expression and chromatin accessibility. It is a subcategory of nutritional genomics that focuses on the effects of bioactive food components on epigenetic events.
History
Changes to children’s genetic profiles caused by fetal nutrition have been observed as early as the Dutch famine of 1944-1945. Due to malnutrition in pregnant mothers, children born during this famine were more likely to exhibit health issues such as heart disease, obesity, schizophrenia, depression, and addiction.
Biologists Randy Jirtle and Robert A. Waterland became early pioneers of nutritional epigenetics after publishing their research on the effects of a pregnant mother’s diet on her offspring’s gene functions in the research journal Molecular and Cellular Biology in 2003.
Research
Researchers in nutritional epigenetics study the interaction between molecules in food and molecules that control gene expression, which leads to areas of focus such as dietary methyl groups and DNA methylation. Nutrients and bioactive food components affect epigenetics by inhibiting enzymatic activity related to DNA methylation and histone modifications. Because methyl groups are used for suppression of undesirable genes, a mother’s level of dietary methyl consumption can significantly alter her child’s gene expression, especially during early development. Furthermore, nutrition can affect methylation as the process continues throughout an individual’s adult life. Because of this, nutritional epigeneticists have studied food as a form of molecular exposure.
Bioactive food components that influence epigenetic processes range from vitamins such as A, B6, and B12 to alcohol and elements such as arsenic, cadmium, and selenium. Dietary methyl supplements such as extra folic acid and choline can also have adverse effects on epigenetic gene regulation.
Researchers have considered dietary exposure to heavy metals such as mercury and
Document 4:::
The School of Biological Sciences is a School within the Faculty Biology, Medicine and Health at The University of Manchester. Biology at University of Manchester and its precursor institutions has gone through a number of reorganizations (see History below), the latest of which was the change from a Faculty of Life Sciences to the current School.
Academics
Research
The School, though unitary for teaching, is divided into a number of broadly defined sections for research purposes, these sections consist of: Cellular Systems, Disease Systems, Molecular Systems, Neuro Systems and Tissue Systems.
Research in the School is structured into multiple research groups including the following themes:
Cell-Matrix Research (part of the Wellcome Trust Centre for Cell-Matrix Research)
Cell Organisation and Dynamics
Computational and Evolutionary Biology
Developmental Biology
Environmental Research
Eye and Vision Sciences
Gene Regulation and Cellular Biotechnology
History of Science, Technology and Medicine
Immunology and Molecular Microbiology
Molecular Cancer Studies
Neurosciences (part of the University of Manchester Neurosciences Research Institute)
Physiological Systems & Disease
Structural and Functional Systems
The School hosts a number of research centres, including: the Manchester Centre for Biophysics and Catalysis, the Wellcome Trust Centre for Cell-Matrix Research, the Centre of Excellence in Biopharmaceuticals, the Centre for the History of Science, Technology and Medicine, the Centre for Integrative Mammalian Biology, and the Healing Foundation Centre for Tissue Regeneration. The Manchester Collaborative Centre for Inflammation Research is a joint endeavour with the Faculty of Medical and Human Sciences of Manchester University and industrial partners.
Research Assessment Exercise (2008)
The faculty entered research into the units of assessment (UOA) for Biological Sciences and Pre-clinical and Human Biological Sciences. In Biological Sciences 20% of outputs
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The attachment of methyl radicals to genes helps regulate which property?
A. information genes store
B. mode of gene inheritance
C. gene expression
D. gene coding system
Answer:
|
|
sciq-222
|
multiple_choice
|
How many naturally occurring elements are known on earth?
|
[
"60",
"90",
"87",
"85"
] |
B
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 3:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 4:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591.
On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
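A minimal sketch of the raw-score rule just described (one point per correct answer, minus a quarter point per incorrect answer, zero for blanks); converting the raw score to the 200–800 scale used a separate scaling table that is not reproduced here:

def raw_score(correct, incorrect, blank):
    """Raw score for the 80-question test: +1 per correct, -0.25 per incorrect, 0 per blank."""
    assert correct + incorrect + blank == 80, "the test had 80 questions"
    return correct - 0.25 * incorrect

print(raw_score(60, 12, 8))  # 60 correct, 12 incorrect, 8 blank -> 57.0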
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How many naturally occurring elements are known on earth?
A. 60
B. 90
C. 87
D. 85
Answer:
|
|
sciq-2339
|
multiple_choice
|
The structure of mitochondrion plays an important role in what?
|
[
"magnetism",
"aerobic respiration",
"cell division",
"sexual reproduction"
] |
B
|
Relavent Documents:
Document 0:::
Megamitochondria are extremely large, abnormally shaped mitochondria seen in hepatocytes in alcoholic liver disease and in nutritional deficiencies. They can also be seen in conditions of hypertrophy and in cell death.
Document 1:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, a double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 2:::
The inner mitochondrial membrane (IMM) is the mitochondrial membrane which separates the mitochondrial matrix from the intermembrane space.
Structure
The structure of the inner mitochondrial membrane is extensively folded and compartmentalized. The numerous invaginations of the membrane are called cristae, separated by crista junctions from the inner boundary membrane juxtaposed to the outer membrane. Cristae significantly increase the total membrane surface area compared to a smooth inner membrane and thereby the available working space for oxidative phosphorylation.
The inner membrane creates two compartments. The region between the inner and outer membrane, called the intermembrane space, is largely continuous with the cytosol, while the more sequestered space inside the inner membrane is called the matrix.
Cristae
For typical liver mitochondria, the area of the inner membrane is about 5 times as large as the outer membrane due to cristae. This ratio is variable and mitochondria from cells that have a greater demand for ATP, such as muscle cells, contain even more cristae. Cristae membranes are studded on the matrix side with small round protein complexes known as F1 particles, the site of proton-gradient driven ATP synthesis. Cristae affect overall chemiosmotic function of mitochondria.
Cristae junctions
Cristae and the inner boundary membranes are separated by junctions. The ends of cristae are partially closed by transmembrane protein complexes that bind head to head and link opposing crista membranes in a bottleneck-like fashion. For example, deletion of the junction protein IMMT leads to a reduced inner membrane potential and impaired growth and to dramatically aberrant inner membrane structures which form concentric stacks instead of the typical invaginations.
Composition
The inner membrane of mitochondria is similar in lipid composition to the membrane of bacteria. This phenomenon can be explained by the endosymbiont hypothesis of the origin of mito
Document 3:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
Document 4:::
In the mitochondrion, the matrix is the space within the inner membrane. The word "matrix" stems from the fact that this space is viscous, compared to the relatively aqueous cytoplasm. The mitochondrial matrix contains the mitochondrial DNA, ribosomes, soluble enzymes, small organic molecules, nucleotide cofactors, and inorganic ions.[1] The enzymes in the matrix facilitate reactions responsible for the production of ATP, such as the citric acid cycle, oxidative phosphorylation, oxidation of pyruvate, and the beta oxidation of fatty acids.
The composition of the matrix based on its structures and contents produce an environment that allows the anabolic and catabolic pathways to proceed favorably. The electron transport chain and enzymes in the matrix play a large role in the citric acid cycle and oxidative phosphorylation. The citric acid cycle produces NADH and FADH2 through oxidation that will be reduced in oxidative phosphorylation to produce ATP.
The cytosolic (intermembrane space) compartment has a higher aqueous:protein content of around 3.8 μL/mg protein relative to the mitochondrial matrix, where such levels typically are near 0.8 μL/mg protein. It is not known how mitochondria maintain osmotic balance across the inner mitochondrial membrane, although the membrane contains aquaporins that are believed to be conduits for regulated water transport. The mitochondrial matrix has a pH of about 7.8, which is higher than the pH of the intermembrane space of the mitochondria, which is around 7.0–7.4. Mitochondrial DNA was discovered by Margit and Sylvan Nass in 1963. One to many copies of double-stranded, mainly circular DNA are present in the mitochondrial matrix. Mitochondrial DNA is about 1% of the total DNA of a cell. It is rich in guanine and cytosine content, and in humans is maternally derived. Mitochondria of mammals have 55S ribosomes.
Composition
Metabolites
The matrix is host to a wide variety of metabolites involved in processes within the matrix. The citric acid cycle inv
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The structure of mitochondrion plays an important role in what?
A. magnetism
B. aerobic respiration
C. cell division
D. sexual reproduction
Answer:
|
|
sciq-497
|
multiple_choice
|
What is the study of the similarities and differences in the embryos of different species?
|
[
"example embryology",
"prenatal biology",
"diversified embryology",
"comparative embryology"
] |
D
|
Relavent Documents:
Document 0:::
Comparative embryology is the branch of embryology that compares and contrasts embryos of different species, showing how all animals are related.
History
Aristotle was the earliest person in recorded history to study embryos. Observing embryos of different species, he described how animals born in eggs (oviparously) and by live birth (viviparously) developed differently. He discovered there were two main ways the egg cell divided: holoblastically, where the whole egg divided and became the creature; and meroblastically, where only part of the egg became the creature. Further advances in comparative embryology did not come until the invention of the microscope. Since then, many people, from Ernst Haeckel to Charles Darwin, have contributed to the field.
Misconceptions
Many erroneous theories were formed in the early years of comparative embryology. For example, German biologist and philosopher Ernst Haeckel proposed that all organisms went through a "re-run" of evolution while in development; he said that 'ontogeny repeats phylogeny'. Haeckel believed that to become a mammal, an embryo had to begin as a single-celled organism, then evolve into a fish, then an amphibian, a reptile, and finally a mammal. The theory was widely accepted, then disproved many years later.
Objectives
The field of comparative embryology aims to understand how embryos develop, and to research the inter-relatedness of animals. It has bolstered evolutionary theory by demonstrating that all vertebrates develop similarly and have a putative common ancestor.
See also
Embryology
Document 1:::
Embryomics is the identification, characterization and study of the diverse cell types which arise during embryogenesis, especially as this relates to the location and developmental history of cells in the embryo. Cell type may be determined according to several criteria: location in the developing embryo, gene expression as indicated by protein and nucleic acid markers and surface antigens, and also position on the embryogenic tree.
Embryome
There are many cell markers useful in distinguishing, classifying, separating and purifying the numerous cell types present at any given time in a developing organism. These cell markers consist of select RNAs and proteins present inside, and surface antigens present on the surface of, the cells making up the embryo. For any given cell type, these RNA and protein markers reflect the genes characteristically active in that cell type. The catalog of all these cell types and their characteristic markers is known as the organism's embryome. The word is a portmanteau of embryo and genome. “Embryome” may also refer to the totality of the physical cell markers themselves.
Embryogenesis
As an embryo develops from a fertilized egg, the single egg cell splits into many cells, which grow in number and migrate to the appropriate locations inside the embryo at appropriate times during development. As the embryo's cells grow in number and migrate, they also differentiate into an increasing number of different cell types, ultimately turning into the stable, specialized cell types characteristic of the adult organism. Each of the cells in an embryo contains the same genome, characteristic of the species, but the level of activity of each of the many thousands of genes that make up the complete genome varies with, and determines, a particular cell's type (e.g. neuron, bone cell, skin cell, muscle cell, etc.).
During embryo development (embryogenesis), many cell types are present which are not present in the adult organism. These temporary c
Document 2:::
A cell type is a classification used to identify cells that share morphological or phenotypical features. A multicellular organism may contain cells of a number of widely differing and specialized cell types, such as muscle cells and skin cells, that differ both in appearance and function yet have identical genomic sequences. Cells may have the same genotype, but belong to different cell types due to the differential regulation of the genes they contain. Classification of a specific cell type is often done through the use of microscopy (such as those from the cluster of differentiation family that are commonly used for this purpose in immunology). Recent developments in single cell RNA sequencing facilitated classification of cell types based on shared gene expression patterns. This has led to the discovery of many new cell types in e.g. mouse cortex, hippocampus, dorsal root ganglion and spinal cord.
Animals have evolved a greater diversity of cell types in a multicellular body (100–150 different cell types), compared
with 10–20 in plants, fungi, and protists. The exact number of cell types is, however, undefined, and the Cell Ontology, as of 2021, lists over 2,300 different cell types.
Multicellular organisms
All higher multicellular organisms contain cells specialised for different functions. Most distinct cell types arise from a single totipotent cell that differentiates into hundreds of different cell types during the course of development. Differentiation of cells is driven by different environmental cues (such as cell–cell interaction) and intrinsic differences (such as those caused by the uneven distribution of molecules during division). Multicellular organisms are composed of cells that fall into two fundamental types: germ cells and somatic cells. During development, somatic cells will become more specialized and form the three primary germ layers: ectoderm, mesoderm, and endoderm. After formation of the three germ layers, cells will continue to special
Document 3:::
This glossary of developmental biology is a list of definitions of terms and concepts commonly used in the study of developmental biology and related disciplines in biology, including embryology and reproductive biology, primarily as they pertain to vertebrate animals and particularly to humans and other mammals. The developmental biology of invertebrates, plants, fungi, and other organisms is treated in other articles; e.g. terms relating to the reproduction and development of insects are listed in Glossary of entomology, and those relating to plants are listed in Glossary of botany.
This glossary is intended as introductory material for novices; for more specific and technical detail, see the article corresponding to each term. Additional terms relevant to vertebrate reproduction and development may also be found in Glossary of biology, Glossary of cell biology, Glossary of genetics, and Glossary of evolutionary biology.
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
X
Y
Z
See also
Introduction to developmental biology
Outline of developmental biology
Outline of cell biology
Glossary of biology
Glossary of cell biology
Glossary of genetics
Glossary of evolutionary biology
Document 4:::
Mammalian embryogenesis is the process of cell division and cellular differentiation during early prenatal development which leads to the development of a mammalian embryo.
Difference from embryogenesis of lower chordates
Due to the fact that placental mammals and marsupials nourish their developing embryos via the placenta, the ovum in these species does not contain significant amounts of yolk, and the yolk sac in the embryo is relatively small in size, in comparison with both the size of the embryo itself and the size of yolk sac in embryos of comparable developmental age from lower chordates. The fact that an embryo in both placental mammals and marsupials undergoes the process of implantation, and forms the chorion with its chorionic villi, and later the placenta and umbilical cord, is also a difference from lower chordates.
The difference between a mammalian embryo and an embryo of a lower chordate animal is evident starting from the blastula stage. Due to that fact, the developing mammalian embryo at this stage is called a blastocyst, not a blastula, which is a more generic term.
There are also several other differences from embryogenesis in lower chordates. One such difference is that in mammalian embryos development of the central nervous system and especially the brain tends to begin at earlier stages of embryonic development and to yield more structurally advanced brain at each stage, in comparison with lower chordates. The evolutionary reason for such a change likely was that the advanced and structurally complex brain, characteristic of mammals, requires more time to develop, but the maximum time spent in utero is limited by other factors, such as relative size of the final fetus to the mother (ability of the fetus to pass mother's genital tract to be born), limited resources for the mother to nourish herself and her fetus, etc. Thus, to develop such a complex and advanced brain in the end, the mammalian embryo needed to start this process earlier and to
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the study of the similarities and differences in the embryos of different species?
A. example embryology
B. prenatal biology
C. diversified embryology
D. comparative embryology
Answer:
|
|
sciq-6596
|
multiple_choice
|
Metals are good conductors of what?
|
[
"metabolism",
"electricity",
"light",
"sound"
] |
B
|
Relavent Documents:
Document 0:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 3:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 4:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to develop an interest in these subjects, secondary school pupils can then choose science A levels, and this can lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Metals are good conductors of what?
A. metabolism
B. electricity
C. light
D. sound
Answer:
|
|
sciq-8249
|
multiple_choice
|
The purpose of any cooling system is to transfer what type of energy in order to keep things cool?
|
[
"thermal",
"radiation",
"physical",
"atmospheric"
] |
A
|
Relavent Documents:
Document 0:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
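As a rough, purely illustrative sketch of two of the mechanisms above, the short Python example below estimates steady-state heat flow through a flat wall by conduction (Fourier's law, Q = k·A·ΔT/L) and from a surface by convection (Newton's law of cooling, Q = h·A·ΔT). The geometry and property values are assumptions chosen for the example, not figures from this text.

```python
# Minimal sketch: steady-state conduction and convection for a flat wall.
# All numbers below are illustrative assumptions, not values from the text.

def conduction_heat_rate(k, area, delta_t, thickness):
    """Fourier's law for a plane wall: Q = k * A * dT / L, in watts."""
    return k * area * delta_t / thickness

def convection_heat_rate(h, area, delta_t):
    """Newton's law of cooling: Q = h * A * dT, in watts."""
    return h * area * delta_t

if __name__ == "__main__":
    area = 10.0        # wall area, m^2 (assumed)
    delta_t = 20.0     # temperature difference, K (assumed)
    k_brick = 0.7      # thermal conductivity of brick, W/(m*K) (typical value)
    thickness = 0.2    # wall thickness, m (assumed)
    h_air = 10.0       # convective coefficient for still air, W/(m^2*K) (typical value)

    q_cond = conduction_heat_rate(k_brick, area, delta_t, thickness)
    q_conv = convection_heat_rate(h_air, area, delta_t)
    print(f"Conduction through the wall: {q_cond:.0f} W")
    print(f"Convection from the surface: {q_conv:.0f} W")
```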
Sections include:
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering : Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 1:::
Thermal engineering is a specialized sub-discipline of mechanical engineering that deals with the movement and transfer of heat energy. The energy can be transferred between two media or transformed into other forms of energy. A thermal engineer will have knowledge of thermodynamics and the processes that convert generated energy from thermal sources into chemical, mechanical, or electrical energy. Many process plants use a wide variety of machines that rely on components involving heat transfer in some way; heat exchangers are especially common. A thermal engineer must allow the proper amount of energy to be transferred for correct use: too much and the components could fail, too little and the system will not function at all. Thermal engineers must also understand the economics of the components that they will be servicing or interacting with. Components a thermal engineer could work with include heat exchangers, heat sinks, bi-metal strips, and radiators. Systems that require a thermal engineer include boilers, heat pumps, water pumps, and engines.
Part of being a thermal engineer is improving a current system to make it more efficient. Many industries employ thermal engineers; some of the main ones are automotive manufacturing, commercial construction, and the heating, ventilation, and air conditioning (HVAC) industry. Job opportunities for a thermal engineer are broad and promising.
Thermal engineering may be practiced by mechanical engineers and chemical engineers.
One or more of the following disciplines may be involved in solving a particular thermal engineering problem: thermodynamics, fluid mechanics, heat transfer, or mass transfer.
One branch of knowledge used frequently in thermal engineering is that of thermofluids.
Applications
Boiler design
Combustion engines
Cooling systems
Cooling of computer chips
Heat exchangers
HVAC
Process Fired Heaters
Refrigeration Systems
Compressed Air Sy
Document 2:::
Chilled water is a commodity often used to cool a building's air and equipment, especially in situations where many individual rooms must be controlled separately, such as a hotel. The chilled water can be supplied by a vendor, such as a public utility, or created at the location of the building that will use it, which has been the norm.
Use
Chilled water cooling is not very different from typical residential air conditioning where water is pumped from the chiller to the air handler unit to cool the air.
Regardless of who provides it, the chilled water (between 4 and 7 °C (39-45 °F)) is pumped through an air handler, which captures the heat from the air, then disperses the air throughout the area to be cooled.
Site generated
As part of a chilled water system, the condenser water absorbs heat from the refrigerant in the condenser barrel of the water chiller and is then sent via return lines to a cooling tower, which is a heat exchange device used to transfer waste heat to the atmosphere. The extent to which the cooling tower decreases the temperature depends upon the outside temperature, the relative humidity and the atmospheric pressure. The tower lowers the condenser water toward the wet-bulb temperature (for evaporative towers) or the dry-bulb temperature (for dry coolers) before it returns to the water chiller; the chiller in turn cools the separate chilled-water circuit to between 4 and 7 °C and pumps it to the air handler, where the cycle is repeated. The equipment required includes chillers, cooling towers, pumps and electrical control equipment. The initial capital outlay for these is substantial and maintenance costs can fluctuate. Adequate space must be included in building design for the physical plant and access to equipment.
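To make the loop described above concrete, the following minimal sketch estimates the cooling load carried by the chilled water from its mass flow rate and the supply/return temperature difference, using Q = ṁ·cp·ΔT. The flow rate and the 6 K temperature rise are assumed example values; only the 4–7 °C supply range comes from the text.

```python
# Minimal sketch: cooling load carried by a chilled-water loop, Q = m_dot * cp * dT.
# Flow rate and return temperature are assumed example values, not data from the text.

CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

def cooling_load_kw(mass_flow_kg_s, supply_c, return_c):
    """Heat absorbed by the chilled water in the air handler, in kW."""
    delta_t = return_c - supply_c
    return mass_flow_kg_s * CP_WATER * delta_t / 1000.0

if __name__ == "__main__":
    # Chilled water supplied at 6 C (within the 4-7 C range above) and returned at 12 C,
    # at an assumed flow of 5 kg/s.
    print(f"Cooling load: {cooling_load_kw(5.0, 6.0, 12.0):.0f} kW")
```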
Utility generated
The chilled water, having absorbed heat from the air, is sent via return lines back to the utility facility, where the process described in the previous section occurs. Utility generated chilled water eliminates the need for chillers and cooling towers at the property, reduces capital
Document 3:::
In fluid thermodynamics, a heat transfer fluid is a gas or liquid that takes part in heat transfer by serving as an intermediary in cooling on one side of a process, transporting and storing thermal energy, and heating on another side of a process. Heat transfer fluids are used in countless applications and industrial processes requiring heating or cooling, typically in a closed circuit and in continuous cycles. Cooling water, for instance, cools an engine, while heating water in a hydronic heating system heats the radiator in a room.
Water is the most common heat transfer fluid because of its economy, high heat capacity and favorable transport properties. However, the useful temperature range is restricted by freezing below 0 °C and boiling at elevated temperatures depending on the system pressure. Antifreeze additives can alleviate the freezing problem to some extent. However, many other heat transfer fluids have been developed and used in a huge variety of applications. For higher temperatures, oil or synthetic hydrocarbon- or silicone-based fluids offer lower vapor pressure. Molten salts and molten metals can be used for transferring and storing heat at temperatures above 300 to 400 °C where organic fluids start to decompose. Gases such as water vapor, nitrogen, argon, helium and hydrogen have been used as heat transfer fluids where liquids are not suitable. For gases the pressure typically needs to be elevated to facilitate higher flow rates with low pumping power.
In order to prevent overheating, fluid flows inside a system or a device so as to transfer the heat outside that particular device or system.
They generally have a high boiling point and a high heat capacity. High boiling point prevents the heat transfer liquids from vaporising at high temperatures. High heat capacity enables a small amount of the refrigerant to transfer a large amount of heat very efficiently.
It must be ensured that the heat transfer liquids used should not have a low boiling p
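One way to see why a high heat capacity matters, as noted above, is to compare the mass flow of two fluids needed to carry the same heat duty with the same temperature rise. The sketch below does this for water and a generic thermal oil; the cp values are typical textbook figures and the duty and temperature rise are assumptions.

```python
# Minimal sketch: mass flow needed to carry a given heat duty, m_dot = Q / (cp * dT).
# The cp values are typical figures; the 100 kW duty and 10 K rise are assumptions.

def required_mass_flow(duty_w, cp_j_per_kg_k, delta_t_k):
    """Mass flow (kg/s) needed to transport duty_w watts with a delta_t_k temperature rise."""
    return duty_w / (cp_j_per_kg_k * delta_t_k)

if __name__ == "__main__":
    duty = 100_000.0   # heat duty, W (assumed)
    delta_t = 10.0     # allowed temperature rise, K (assumed)
    for name, cp in [("water", 4186.0), ("thermal oil", 2000.0)]:
        print(f"{name}: {required_mass_flow(duty, cp, delta_t):.2f} kg/s")
    # Water's higher cp means roughly half the mass flow is needed for the same duty.
```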
Document 4:::
A continuous cooling transformation (CCT) phase diagram is often used when heat treating steel. These diagrams are used to represent which types of phase changes will occur in a material as it is cooled at different rates. These diagrams are often more useful than time-temperature-transformation diagrams because it is more convenient to cool materials at a certain rate (temperature-variable cooling), than to cool quickly and hold at a certain temperature (isothermal cooling).
Types of continuous cooling diagrams
There are two types of continuous cooling diagrams drawn for practical purposes.
Type 1: This is the plot beginning with the transformation start point, cooling with a specific transformation fraction and ending with a transformation finish temperature for all products against transformation time for each cooling curve.
Type 2: This is the plot beginning with the transformation start point, cooling with a specific transformation fraction and ending with a transformation finish temperature for all products, plotted against cooling rate or bar diameter of the specimen for each type of cooling medium.
See also
Isothermal transformation
Phase diagram
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The purpose of any cooling system is to transfer what type of energy in order to keep things cool?
A. thermal
B. radiation
C. physical
D. atmospheric
Answer:
|
|
sciq-7919
|
multiple_choice
|
A major disturbance in what results in an episode of severe weather called a storm?
|
[
"the core",
"the ozone layer",
"the oceans",
"the atmosphere"
] |
D
|
Relavent Documents:
Document 0:::
This is a list of meteorology topics. The terms relate to meteorology, the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting. (see also: List of meteorological phenomena)
A
advection
aeroacoustics
aerobiology
aerography (meteorology)
aerology
air parcel (in meteorology)
air quality index (AQI)
airshed (in meteorology)
American Geophysical Union (AGU)
American Meteorological Society (AMS)
anabatic wind
anemometer
annular hurricane
anticyclone (in meteorology)
apparent wind
Atlantic Oceanographic and Meteorological Laboratory (AOML)
Atlantic hurricane season
atmometer
atmosphere
Atmospheric Model Intercomparison Project (AMIP)
Atmospheric Radiation Measurement (ARM)
(atmospheric boundary layer [ABL]) planetary boundary layer (PBL)
atmospheric chemistry
atmospheric circulation
atmospheric convection
atmospheric dispersion modeling
atmospheric electricity
atmospheric icing
atmospheric physics
atmospheric pressure
atmospheric sciences
atmospheric stratification
atmospheric thermodynamics
atmospheric window (see under Threats)
B
ball lightning
balloon (aircraft)
baroclinity
barotropity
barometer ("to measure atmospheric pressure")
berg wind
biometeorology
blizzard
bomb (meteorology)
buoyancy
Bureau of Meteorology (in Australia)
C
Canada Weather Extremes
Canadian Hurricane Centre (CHC)
Cape Verde-type hurricane
capping inversion (in meteorology) (see "severe thunderstorms" in paragraph 5)
carbon cycle
carbon fixation
carbon flux
carbon monoxide (see under Atmospheric presence)
ceiling balloon ("to determine the height of the base of clouds above ground level")
ceilometer ("to determine the height of a cloud base")
celestial coordinate system
celestial equator
celestial horizon (rational horizon)
celestial navigation (astronavigation)
celestial pole
Celsius
Center for Analysis and Prediction of Storms (CAPS) (in Oklahoma in the US)
Center for the Study o
Document 1:::
The following outline is provided as an overview of and topical guide to the field of Meteorology.
Meteorology – the interdisciplinary, scientific study of the Earth's atmosphere with the primary focus being to understand, explain, and forecast weather events. Meteorology is applied to and employed by a wide variety of diverse fields, including the military, energy production, transport, agriculture, and construction.
Essence of meteorology
Meteorology
Climate – the average and variations of weather in a region over long periods of time.
Meteorology – the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting (in contrast with climatology).
Weather – the set of all the phenomena in a given atmosphere at a given time.
Branches of meteorology
Microscale meteorology – the study of atmospheric phenomena about 1 km or less, smaller than mesoscale, including small and generally fleeting cloud "puffs" and other small cloud features
Mesoscale meteorology – the study of weather systems about 5 kilometers to several hundred kilometers, smaller than synoptic scale systems but larger than microscale and storm-scale cumulus systems, such as sea breezes, squall lines, and mesoscale convective complexes
Synoptic scale meteorology – the study of weather systems at a horizontal length scale of the order of 1000 kilometres (about 620 miles) or more
Methods in meteorology
Surface weather analysis – a special type of weather map that provides a view of weather elements over a geographical area at a specified time based on information from ground-based weather stations
Weather forecasting
Weather forecasting – the application of science and technology to predict the state of the atmosphere for a future time and a given location
Data collection
Pilot Reports
Weather maps
Weather map
Surface weather analysis
Forecasts and reporting of
Atmospheric pressure
Dew point
High-pressure area
Ice
Black ice
Frost
Low-pressure area
Precipitation
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
A. increases
B. decreases
C. stays the same
D. Impossible to tell/need more information
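A quick numerical check of the example question, assuming a reversible adiabatic expansion of a monatomic ideal gas (so that T·V^(γ−1) is constant), confirms that the temperature decreases; the starting state and expansion ratio below are illustrative.

```python
# Minimal check: a reversible adiabatic expansion of an ideal gas obeys T * V**(gamma - 1) = const,
# so expanding (V2 > V1) lowers T. The numbers below are illustrative assumptions.

gamma = 5.0 / 3.0          # heat capacity ratio for a monatomic ideal gas
T1, V1 = 300.0, 1.0        # initial temperature (K) and volume (arbitrary units)
V2 = 2.0                   # the gas expands to twice its volume

T2 = T1 * (V1 / V2) ** (gamma - 1.0)
print(f"T falls from {T1:.0f} K to {T2:.0f} K")  # about 189 K, i.e. the temperature decreases
```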
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 4:::
The Oklahoma Mesonet is a network of environmental monitoring stations designed to measure the environment at the size and duration of mesoscale weather events. The phrase "mesonet" is a portmanteau of the words mesoscale and network.
The network consists of 120 automated stations covering Oklahoma and each of Oklahoma's counties has at least one station. At each site, the environment is measured by a set of instruments located on or near a -tall tower. The measurements are packaged into “observations” and transmitted to a central facility every 5 minutes, 24 hours per day, every day of the year.
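A back-of-the-envelope count implied by the figures above (120 stations, one observation every 5 minutes, around the clock):

```python
# Rough count of observations implied by the figures above:
# 120 stations, one observation every 5 minutes, 24 hours per day.
stations = 120
obs_per_station_per_day = 24 * 60 // 5          # 288 observations per station per day
print(stations * obs_per_station_per_day)       # 34560 observations network-wide per day
```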
Oklahoma Mesonet is a cooperative venture between Oklahoma State University (OSU) and the University of Oklahoma (OU) and is supported by the taxpayers of Oklahoma. It is headquartered at the National Weather Center (NWC) on the OU campus.
Observations are available free of charge to the public.
Background
According to the Tulsa World, creation of the Oklahoma Mesonet resulted from the inability of emergency management officials to plan for the May 26–27, 1984 flood that killed 14 people in the Tulsa area. The 1984 flood demonstrated that emergency managers could not receive accurate and adequate data quickly enough about the progress of flooding from airport radars, updated hourly. The University of Oklahoma and Oklahoma State University collaborated with the Climatological Survey and other public and private agencies to create the Oklahoma Mesonet. This system collects weather information (e.g., wind speed, rainfall, temperature) every 5 minutes from 121 Mesonet stations throughout Oklahoma. Emergency planners can now monitor up-to-date weather information in advance of the arrival of an approaching storm. The article quoted an official of the Tulsa Area Emergency Management as saying that his staff uses the Oklahoma Mesonet every day.
Products
The Oklahoma Mesonet produces multiple weather products for public consumption and download: these include maps of all of t
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A major disturbance in what results in an episode of severe weather called a storm?
A. the core
B. the ozone layer
C. the oceans
D. the atmosphere
Answer:
|
|
sciq-9139
|
multiple_choice
|
Metals, fossil fuels, and water are all examples of what type of resource?
|
[
"ores",
"natural resources",
"renewable resources",
"recyclables"
] |
B
|
Relavent Documents:
Document 0:::
A non-renewable resource (also called a finite resource) is a natural resource that cannot be readily replaced by natural means at a pace quick enough to keep up with consumption. An example is carbon-based fossil fuels. The original organic matter, with the aid of heat and pressure, becomes a fuel such as oil or gas. Earth minerals and metal ores, fossil fuels (coal, petroleum, natural gas) and groundwater in certain aquifers are all considered non-renewable resources, though individual elements are always conserved (except in nuclear reactions, nuclear decay or atmospheric escape).
Conversely, resources such as timber (when harvested sustainably) and wind (used to power energy conversion systems) are considered renewable resources, largely because their localized replenishment can occur within time frames meaningful to humans as well.
Earth minerals and metal ores
Earth minerals and metal ores are examples of non-renewable resources. The metals themselves are present in vast amounts in Earth's crust, and their extraction by humans only occurs where they are concentrated by natural geological processes (such as heat, pressure, organic activity, weathering and other processes) enough to become economically viable to extract. These processes generally take from tens of thousands to millions of years, through plate tectonics, tectonic subsidence and crustal recycling.
The localized deposits of metal ores near the surface which can be extracted economically by humans are non-renewable in human time-frames. There are certain rare earth minerals and elements that are more scarce and exhaustible than others. These are in high demand in manufacturing, particularly for the electronics industry.
Fossil fuels
Natural resources such as coal, petroleum (crude oil) and natural gas take thousands of years to form naturally and cannot be replaced as fast as they are being consumed. Eventually it is considered that fossil-based resources will become too costly to harvest and
Document 1:::
Energy quality is a measure of the ease with which a form of energy can be converted to useful work or to another form of energy: i.e. its content of thermodynamic free energy. A high quality form of energy has a high content of thermodynamic free energy, and therefore a high proportion of it can be converted to work; whereas with low quality forms of energy, only a small proportion can be converted to work, and the remainder is dissipated as heat. The concept of energy quality is also used in ecology, where it is used to track the flow of energy between different trophic levels in a food chain and in thermoeconomics, where it is used as a measure of economic output per unit of energy. Methods of evaluating energy quality often involve developing a ranking of energy qualities in hierarchical order.
Examples: Industrialization, Biology
The consideration of energy quality was a fundamental driver of industrialization from the 18th through 20th centuries. Consider for example the industrialization of New England in the 18th century. This refers to the construction of textile mills containing power looms for weaving cloth. The simplest, most economical and straightforward source of energy was provided by water wheels, extracting energy from a millpond behind a dam on a local creek. If another nearby landowner also decided to build a mill on the same creek, the construction of their dam would lower the overall hydraulic head to power the existing waterwheel, thus hurting power generation and efficiency. This eventually became an issue endemic to the entire region, reducing the overall profitability of older mills as newer ones were built. The search for higher quality energy was a major impetus throughout the 19th and 20th centuries. For example, burning coal to make steam to generate mechanical energy would not have been imaginable in the 18th century; by the end of the 19th century, the use of water wheels was long outmoded. Similarly, the quality of energy from elec
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 3:::
The Ultimate Resource is a 1981 book written by Julian Lincoln Simon challenging the notion that humanity was running out of natural resources. It was revised in 1996 as The Ultimate Resource 2.
Overview
The overarching thesis on why there is no resource crisis is that as a particular resource becomes more scarce, its price rises. This price rise creates an incentive for people to discover more of the resource, ration and recycle it, and eventually, develop substitutes. The "ultimate resource" is not any particular physical object but the capacity for humans to invent and adapt.
Scarcity
The work opens with an explanation of scarcity, noting its relation to price; high prices denote relative scarcity and low prices indicate abundance. Simon usually measures prices in wage-adjusted terms, since this is a measure of how much labor is required to purchase a fixed amount of a particular resource. Since prices for most raw materials (e.g., copper) have fallen between 1800 and 1990 (adjusting for wages and adjusting for inflation), Simon argues that this indicates that those materials have become less scarce.
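As a toy illustration of the wage-adjusted measure described above, the sketch below converts a nominal commodity price into the hours of labor needed to buy one unit, for two hypothetical years; all prices and wages are invented for the example, not historical data.

```python
# Minimal sketch: a "wage-adjusted" price is the hours of labor needed to buy one unit.
# All prices and wages below are hypothetical illustration values, not historical data.

def hours_of_labor_per_unit(price_per_unit, hourly_wage):
    """How many hours of work it takes to afford one unit of the resource."""
    return price_per_unit / hourly_wage

if __name__ == "__main__":
    # Hypothetical copper prices ($/tonne) and average hourly wages ($/h) for two years.
    examples = {"year A": (2000.0, 5.0), "year B": (6000.0, 25.0)}
    for year, (price, wage) in examples.items():
        print(f"{year}: {hours_of_labor_per_unit(price, wage):.0f} hours of labor per tonne")
    # Although the nominal price tripled, the wage-adjusted price fell (400 h -> 240 h),
    # which is the sense in which Simon would call the resource less scarce.
```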
Forecasting
Simon makes a distinction between "engineering" and "economic" forecasting. Engineering forecasting consists of estimating the known physical amount of resources, extrapolating the rate of use from current consumption, and subtracting one from the other. Simon argues that these simple analyses are often wrong. While focusing only on proven resources is helpful in a business context, it is not appropriate for economy-wide forecasting. There exist undiscovered sources, sources not yet economically feasible to extract, sources not yet technologically feasible to extract, and ignored resources that could prove useful but are not yet worth trying to discover.
To counter the problems of engineering forecasting, Simon proposes economic forecasting, which proceeds in three steps in order to capture, in part, the unknowns the engineering method leaves out (p 27)
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
A. increases
B. decreases
C. stays the same
D. Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Metals, fossil fuels, and water are all examples of what type of resource?
A. ores
B. natural resources
C. renewable resources
D. recyclables
Answer:
|
|
sciq-7624
|
multiple_choice
|
What type of layers do animals' tissues develop from?
|
[
"cytoplasm",
"transgenic",
"dermic",
"embryonic"
] |
D
|
Relavent Documents:
Document 0:::
Histogenesis is the formation of different tissues from undifferentiated cells. These cells are constituents of three primary germ layers, the endoderm, mesoderm, and ectoderm. The science of the microscopic structures of the tissues formed within histogenesis is termed histology.
Germ layers
A germ layer is a collection of cells, formed during animal and mammalian embryogenesis. Germ layers are typically pronounced within vertebrate organisms; however, animals or mammals more complex than sponges (eumetazoans and agnotozoans) produce two or three primary tissue layers. Animals with radial symmetry, such as cnidarians, produce two layers, called the ectoderm and endoderm. They are diploblastic. Animals with bilateral symmetry produce a third layer in-between called mesoderm, making them triploblastic. Germ layers will eventually give rise to all of an animal's or mammal's tissues and organs through a process called organogenesis.
Endoderm
The endoderm is one of the germ layers formed during animal embryogenesis. Cells migrating inward along the archenteron form the inner layer of the gastrula, which develops into the endoderm. Initially, the endoderm consists of flattened cells, which subsequently become columnar...
Mesoderm
The mesoderm germ layer forms in the embryos of animals and mammals more complex than cnidarians, making them triploblastic. During gastrulation, some of the cells migrating inward to form the endoderm form an additional layer between the endoderm and the ectoderm. A theory suggests that this key innovation evolved hundreds of millions of years ago and led to the evolution of nearly all large, complex animals. The formation of a mesoderm led to the formation of a coelom. Organs formed inside a coelom can freely move, grow, and develop independently of the body wall while fluid cushions and protects them from shocks.
Ectoderm
The ectoderm is the start of a tissue that covers the body surfaces. It emerges first and forms from the outermost
Document 1:::
A cell type is a classification used to identify cells that share morphological or phenotypical features. A multicellular organism may contain cells of a number of widely differing and specialized cell types, such as muscle cells and skin cells, that differ both in appearance and function yet have identical genomic sequences. Cells may have the same genotype, but belong to different cell types due to the differential regulation of the genes they contain. Classification of a specific cell type is often done through the use of microscopy (such as those from the cluster of differentiation family that are commonly used for this purpose in immunology). Recent developments in single cell RNA sequencing facilitated classification of cell types based on shared gene expression patterns. This has led to the discovery of many new cell types in e.g. mouse cortex, hippocampus, dorsal root ganglion and spinal cord.
Animals have evolved a greater diversity of cell types in a multicellular body (100–150 different cell types), compared
with 10–20 in plants, fungi, and protists. The exact number of cell types is, however, undefined, and the Cell Ontology, as of 2021, lists over 2,300 different cell types.
Multicellular organisms
All higher multicellular organisms contain cells specialised for different functions. Most distinct cell types arise from a single totipotent cell that differentiates into hundreds of different cell types during the course of development. Differentiation of cells is driven by different environmental cues (such as cell–cell interaction) and intrinsic differences (such as those caused by the uneven distribution of molecules during division). Multicellular organisms are composed of cells that fall into two fundamental types: germ cells and somatic cells. During development, somatic cells will become more specialized and form the three primary germ layers: ectoderm, mesoderm, and endoderm. After formation of the three germ layers, cells will continue to special
Document 2:::
Endoderm is the innermost of the three primary germ layers in the very early embryo. The other two layers are the ectoderm (outside layer) and mesoderm (middle layer). Cells migrating inward along the archenteron form the inner layer of the gastrula, which develops into the endoderm.
The endoderm consists at first of flattened cells, which subsequently become columnar. It forms the epithelial lining of multiple systems.
In plant biology, endoderm corresponds to the innermost part of the cortex (bark) in young shoots and young roots often consisting of a single cell layer. As the plant becomes older, more endoderm will lignify.
Production
The following chart shows the tissues produced by the endoderm.
The embryonic endoderm develops into the interior linings of two tubes in the body, the digestive and respiratory tube.
Liver and pancreas cells are believed to derive from a common precursor.
In humans, the endoderm can differentiate into distinguishable organs after 5 weeks of embryonic development.
Additional images
See also
Ectoderm
Germ layer
Histogenesis
Mesoderm
Organogenesis
Endodermal sinus tumor
Gastrulation
Cell differentiation
Triploblasty
List of human cell types derived from the germ layers
Document 3:::
A laminar organization describes the way certain tissues, such as bone membrane, skin, or brain tissues, are arranged in layers.
Types
Embryo
The earliest forms of laminar organization are shown in the diploblastic and triploblastic formation of the germ layers in the embryo. In the first week of human embryogenesis two layers of cells have formed, an external epiblast layer (the primitive ectoderm), and an internal hypoblast layer (primitive endoderm). This gives the early bilaminar disc. In the third week in the stage of gastrulation epiblast cells invaginate to form endoderm, and a third layer of cells known as mesoderm. Cells that remain in the epiblast become ectoderm. This is the trilaminar disc and the epiblast cells have given rise to the three germ layers.
Brain
In the brain a laminar organization is evident in the arrangement of the three meninges, the membranes that cover the brain and spinal cord. These membranes are the dura mater, arachnoid mater, and pia mater. The dura mater has two layers a periosteal layer near to the bone of the skull, and a meningeal layer next to the other meninges.
The cerebral cortex, the outer neural sheet covering the cerebral hemispheres can be described by its laminar organization, due to the arrangement of cortical neurons into six distinct layers.
Eye
The eye in mammals has an extensive laminar organization. There are three main layers – the outer fibrous tunic, the middle uvea, and the inner retina. These layers have sublayers with the retina having ten ranging from the outer choroid to the inner vitreous humor and including the retinal nerve fiber layer.
Skin
The human skin has a dense laminar organization. The outer epidermis has four or five layers.
Document 4:::
This is a list of cells in humans derived from the three embryonic germ layers – ectoderm, mesoderm, and endoderm.
Cells derived from ectoderm
Surface ectoderm
Skin
Trichocyte
Keratinocyte
Anterior pituitary
Gonadotrope
Corticotrope
Thyrotrope
Somatotrope
Lactotroph
Tooth enamel
Ameloblast
Neural crest
Peripheral nervous system
Neuron
Glia
Schwann cell
Satellite glial cell
Neuroendocrine system
Chromaffin cell
Glomus cell
Skin
Melanocyte
Nevus cell
Merkel cell
Teeth
Odontoblast
Cementoblast
Eyes
Corneal keratocyte
Neural tube
Central nervous system
Neuron
Glia
Astrocyte
Ependymocytes
Muller glia (retina)
Oligodendrocyte
Oligodendrocyte progenitor cell
Pituicyte (posterior pituitary)
Pineal gland
Pinealocyte
Cells derived from mesoderm
Paraxial mesoderm
Mesenchymal stem cell
Osteochondroprogenitor cell
Bone (Osteoblast → Osteocyte)
Cartilage (Chondroblast → Chondrocyte)
Myofibroblast
Fat
Lipoblast → Adipocyte
Muscle
Myoblast → Myocyte
Myosatellite cell
Tendon cell
Cardiac muscle cell
Other
Fibroblast → Fibrocyte
Other
Digestive system
Interstitial cell of Cajal
Intermediate mesoderm
Renal stem cell
Angioblast → Endothelial cell
Mesangial cell
Intraglomerular
Extraglomerular
Juxtaglomerular cell
Macula densa cell
Stromal cell → Interstitial cell → Telocytes
Simple epithelial cell → Podocyte
Kidney proximal tubule brush border cell
Reproductive system
Sertoli cell
Leydig cell
Granulosa cell
Peg cell
Germ cells (which migrate here primordially)
spermatozoon
ovum
Lateral plate mesoderm
Hematopoietic stem cell
Lymphoid
Lymphoblast
see lymphocytes
Myeloid
CFU-GEMM
see myeloid cells
Circulatory system
Endothelial progenitor cell
Endothelial colony forming cell
Endothelial stem cell
Angioblast/Mesoangioblast
Pericyte
Mural cell
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of layers do animals' tissues develop from?
A. cytoplasm
B. transgenic
C. dermic
D. embryonic
Answer:
|
|
sciq-10284
|
multiple_choice
|
Sodium and chlorine combine to make what?
|
[
"seawater",
"iron",
"salt",
"gold"
] |
C
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
A. increases
B. decreases
C. stays the same
D. Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
With Sn2+ ions, N2O is formed:
2 KNO2 + 6 HCl + 2 SnCl2 → 2 SnCl4 + N2O + 3 H2O + 2 KCl
With SO2 gas, NH2OH is formed:
2 KNO2 + 6 H2O + 4 SO2 → 3 H2SO4 + K2SO4 + 2 NH2OH
With Zn in alkali solution, NH3 is formed:
5 H2O + KNO2 + 3 Zn → NH3 + KOH + 3 Zn(OH)2
With , both HN3
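As a sanity check on the zinc reaction above, the short sketch below verifies that it is element-balanced; the species compositions are written out by hand rather than parsed from the formulas.

```python
# Minimal sketch: verify element balance of the zinc reaction above,
# 5 H2O + KNO2 + 3 Zn -> NH3 + KOH + 3 Zn(OH)2.
# Compositions are written out by hand for just these species.

from collections import Counter

H2O    = Counter({"H": 2, "O": 1})
KNO2   = Counter({"K": 1, "N": 1, "O": 2})
ZN     = Counter({"Zn": 1})
NH3    = Counter({"N": 1, "H": 3})
KOH    = Counter({"K": 1, "O": 1, "H": 1})
ZN_OH2 = Counter({"Zn": 1, "O": 2, "H": 2})

def total(side):
    """Sum element counts over (coefficient, species) pairs."""
    result = Counter()
    for coeff, species in side:
        for element, n in species.items():
            result[element] += coeff * n
    return result

left = total([(5, H2O), (1, KNO2), (3, ZN)])
right = total([(1, NH3), (1, KOH), (3, ZN_OH2)])
print("balanced:", left == right)  # True
```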
Document 3:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 4:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just as in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Sodium and chlorine combine to make what?
A. seawater
B. iron
C. salt
D. gold
Answer:
|
|
sciq-10527
|
multiple_choice
|
What is caused by the same virus that causes chicken pox?
|
[
"shingles",
"boils",
"blisters",
"mumps"
] |
A
|
Relavent Documents:
Document 0:::
Mycoplasma haemomuris, formerly known as Haemobartonella muris and Bartonella muris, is a Gram-negative bacillus. It is known to cause anemia in rats and mice.
Document 1:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological and molecular tests. A set of 60 questions was taken by all test takers for Biology, and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average score for Molecular was 630, while the average for Ecological was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
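A minimal sketch of the raw-scoring rule just described (plus 1 per correct answer, minus ¼ per incorrect answer, 0 for blanks); the answer counts used are arbitrary examples.

```python
# Minimal sketch of the raw-scoring rule described above:
# +1 per correct answer, -0.25 per incorrect answer, 0 for questions left blank.

def raw_score(correct, incorrect, blank):
    assert correct + incorrect + blank == 80, "the test had 80 questions"
    return correct * 1.0 - incorrect * 0.25

if __name__ == "__main__":
    # Arbitrary example: 60 correct, 12 incorrect, 8 left blank.
    print(f"Raw score: {raw_score(60, 12, 8):.2f}")  # 57.00, later scaled to the 200-800 range
```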
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 2:::
Haverhill fever (or epidemic arthritic erythema) is a systemic illness caused by the bacterium Streptobacillus moniliformis, an organism common in rats and mice. If untreated, the illness can have a mortality rate of up to 13%. Among the two types of rat-bite fever, Haverhill fever caused by Streptobacillus moniliformis is most common in North America. The other type of infection caused by Spirillum minus is more common in Asia and is also known as Sodoku.
The initial non-specific presentation of the disease and hurdles in culturing the causative microorganism are at times responsible for a delay or failure in the diagnosis of the disease. Although non-specific in nature, initial symptoms like relapsing fever, rash and migratory polyarthralgia are the most common symptoms of epidemic arthritic erythema.
Bites and scratches from rodents carrying the bacteria are generally responsible for the affliction. However, the disease can be spread even without physical lacerations by rodents. In fact, the disease was first recognized from a milk-associated outbreak which occurred in Haverhill, Massachusetts in January, 1926. The organism S. moniliformis was isolated from the patients and epidemiologically, consumption of milk from one particular dairy was implicated in association with the infection. Hence, ingestion of food and drink contaminated with the bacteria can also result in the development of the disease.
Symptoms and signs
The illness resembles a severe influenza, with a moderate fever (38-40 °C, or 101-104 °F), sore throat, chills, myalgia, headache, vomiting, and a diffuse red rash (maculopapular, petechial, or purpuric), located mostly on the hands and feet. The incubation period for the bacteria generally lasts from three to ten days. As the disease progresses, almost half of the patients experience migratory polyarthralgias.
Mechanism
Although the specific form of pathogenesis is still a subject of ongoing research, the bacteria has been observed to result in mo
Document 3:::
Cause, also known as etiology () and aetiology, is the reason or origination of something.
The word etiology is derived from the Greek , aitiologia, "giving a reason for" (, aitia, "cause"; and , -logia).
Description
In medicine, etiology refers to the cause or causes of diseases or pathologies. Where no etiology can be ascertained, the disorder is said to be idiopathic.
Traditional accounts of the causes of disease may point to the "evil eye".
The Ancient Roman scholar Marcus Terentius Varro put forward early ideas about microorganisms in a 1st-century BC book titled On Agriculture.
Medieval thinking on the etiology of disease showed the influence of Galen and of Hippocrates. Medieval European doctors generally held the view that disease was related to the air and adopted a miasmatic approach to disease etiology.
Etiological discovery in medicine has a history in Robert Koch's demonstration that species of the pathogenic bacteria Mycobacterium tuberculosis causes the disease tuberculosis; Bacillus anthracis causes anthrax, and Vibrio cholerae causes cholera. This line of thinking and evidence is summarized in Koch's postulates. But proof of causation in infectious diseases is limited to individual cases that provide experimental evidence of etiology.
In epidemiology, several lines of evidence together are required for causal inference. Austin Bradford Hill demonstrated a causal relationship between tobacco smoking and lung cancer, and summarized the line of reasoning in the Bradford Hill criteria, a group of nine principles used to establish epidemiological causation. This idea of causality was later used in a proposal for a unified concept of causation.
Disease causative agent
The infectious diseases are caused by infectious agents or pathogens. The infectious agents that cause disease fall into five groups: viruses, bacteria, fungi, protozoa, and helminths (worms).
The term can also refer to a toxin or toxic chemical that causes illness.
Chain of causatio
Document 4:::
Infections associated with diseases are those infections that are associated with possible infectious etiologies that meet the requirements of Koch's postulates. Other methods of causation are described by the Bradford Hill criteria and evidence-based medicine.
Koch's postulates have been modified by some epidemiologists, based on the sequence-based detection of distinctive pathogenic nucleic acid sequences in tissue samples. When using this method, absolute statements regarding causation are not always possible. Higher amounts of distinctive pathogenic nucleic acid sequences should be in those exhibiting disease, compared to controls. In addition, the DNA load should become lower with the resolution of the disease. The distinctive pathogenic nucleic acid sequences load should also increase upon recurrence.
Other conditions must be met to establish cause or association, including studies of disease transmission. This means that there should be a high disease occurrence in those carrying a pathogen, evidence of a serological response to the pathogen, and the success of vaccination prevention. Direct visualization of the pathogen, the identification of different strains, immunological responses in the host, how the infection is spread, and the combination of these should all be taken into account to determine the probability that an infectious agent is the cause of the disease. A conclusive determination of a causal role of an infectious agent in a particular disease using Koch's postulates is desired, yet this might not be possible.
The leading cause of death worldwide is cardiovascular disease, but infectious diseases are the second leading cause of death worldwide and the leading cause of death in infants and children.
Other causes
Other causes or associations of disease are: a compromised immune system, environmental toxins, radiation exposure, diet and other lifestyle choices, stress, and genetics. Diseases may also be multifactorial, requiring multiple factor
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is caused by the same virus that causes chicken pox?
A. shingles
B. boils
C. blisters
D. mumps
Answer:
|
|
sciq-10664
|
multiple_choice
|
What are two major categories of mutations?
|
[
"germline and somatic",
"plasticity and somatic",
"german and somatic",
"homologous and somatic"
] |
A
|
Relevant Documents:
Document 0:::
In biology, and especially in genetics, a mutant is an organism or a new genetic character arising or resulting from an instance of mutation, which is generally an alteration of the DNA sequence of the genome or chromosome of an organism. It is a characteristic that would not be observed naturally in a specimen. The term mutant is also applied to a virus with an alteration in its nucleotide sequence whose genome is in the nuclear genome. The natural occurrence of genetic mutations is integral to the process of evolution. The study of mutants is an integral part of biology; by understanding the effect that a mutation in a gene has, it is possible to establish the normal function of that gene.
Mutants arise by mutation
Mutants arise by mutations occurring in pre-existing genomes as a result of errors of DNA replication or errors of DNA repair. Errors of replication often involve translesion synthesis by a DNA polymerase when it encounters and bypasses a damaged base in the template strand. A DNA damage is an abnormal chemical structure in DNA, such as a strand break or an oxidized base, whereas a mutation, by contrast, is a change in the sequence of standard base pairs. Errors of repair occur when repair processes inaccurately replace a damaged DNA sequence. The DNA repair process microhomology-mediated end joining is particularly error-prone.
Etymology
Although not all mutations have a noticeable phenotypic effect, in common usage the word "mutant" is generally a pejorative term, used only for genetically or phenotypically noticeable mutations. Previously, people used the word "sport" (related to spurt) to refer to abnormal specimens. The scientific usage is broader, referring to any organism differing from the wild type. The word finds its origin in the Latin term mūtant- (stem of mūtāns), which means "to change".
Mutants should not be confused with organisms born with developmental abnormalities, which are caused by errors during morphogenesis. In a devel
Document 1:::
The genotype–phenotype map is a conceptual model in genetic architecture. Coined in a 1991 paper by Pere Alberch, it models the interdependency of genotype (an organism's full hereditary information) with phenotype (an organism's actual observed properties).
Application
The map visualises a relationship between genotype & phenotype which, crucially:
is of greater complexity than a straightforward one-to-one mapping of genotype to/from phenotype.
accommodates a parameter space, along which at different points a given phenotype is said to be more or less stable.
accommodates transformational boundaries in the parameter space, which divide phenotype states from one another.
accounts for different polymorphism and/or polyphenism in populations, depending on the area of parameter space they occupy.
Document 2:::
A mutant protein is the protein product encoded by a gene with a mutation. A mutated protein can carry a single amino acid change (minor, but in many cases still a significant change leading to disease) or wide-ranging amino acid changes caused by, e.g., truncation of the C-terminus after introduction of a premature stop codon.
See also
Site-directed mutagenesis
Phi value analysis
missense mutation
nonsense mutation
point mutation
frameshift mutation
silent mutation
single-nucleotide polymorphism
Document 3:::
A major gene is a gene with pronounced phenotypic expression, in contrast to a modifier gene. A major gene characterizes the common expression of an oligogenic series, i.e. a small number of genes that determine the same trait.
Major genes control discontinuous or qualitative characters, in contrast to minor genes or polygenes with individually small effects. Major genes segregate and may be easily subject to Mendelian analysis. The categorization of genes into major and minor determinants is more or less arbitrary. Both types are in all probability only end points in a more or less continuous series of gene action and gene interactions.
The term major gene was introduced into the science of inheritance by Kenneth Mather (1941).
See also
Gene interaction
Minor gene
Gene
Document 4:::
The Bateson Lecture is an annual genetics lecture held as a part of the John Innes Symposium since 1972, in honour of the first Director of the John Innes Centre, William Bateson.
Past Lecturers
Source: John Innes Centre
1951 Sir Ronald Fisher - "Statistical methods in Genetics"
1953 Julian Huxley - "Polymorphic variation: a problem in genetical natural history"
1955 Sidney C. Harland - "Plant breeding: present position and future perspective"
1957 J.B.S. Haldane - "The theory of evolution before and after Bateson"
1959 Kenneth Mather - "Genetics Pure and Applied"
1972 William Hayes - "Molecular genetics in retrospect"
1974 Guido Pontecorvo - "Alternatives to sex: genetics by means of somatic cells"
1976 Max F. Perutz - "Mechanism of respiratory haemoglobin"
1979 J. Heslop-Harrison - "The forgotten generation: some thoughts on the genetics and physiology of Angiosperm Gametophytes "
1982 Sydney Brenner - "Molecular genetics in prospect"
1984 W.W. Franke - "The cytoskeleton - the insoluble architectural framework of the cell"
1986 Arthur Kornberg - "Enzyme systems initiating replication at the origin of the E. coli chromosome"
1988 Gottfried Schatz - "Interaction between mitochondria and the nucleus"
1990 Christiane Nusslein-Volhard - "Axis determination in the Drosophila embryo"
1992 Frank Stahl - "Genetic recombination: thinking about it in phage and fungi"
1994 Ira Herskowitz - "Violins and orchestras: what a unicellular organism can do"
1996 R.J.P. Williams - "An Introduction to Protein Machines"
1999 Eugene Nester - "DNA and Protein Transfer from Bacteria to Eukaryotes - the Agrobacterium story"
2001 David Botstein - "Extracting biological information from DNA Microarray Data"
2002 Elliot Meyerowitz
2003 Thomas Steitz - "The Macromolecular machines of gene expression"
2008 Sean Carroll - "Endless flies most beautiful: the role of cis-regulatory sequences in the evolution of animal form"
2009 Sir Paul Nurse - "Genetic transmission through
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are two major categories of mutations?
A. germline and somatic
B. plasticity and somatic
C. german and somatic
D. homologous and somatic
Answer:
|
|
sciq-5487
|
multiple_choice
|
When the bud is fully developed, it breaks away from the parent cell and forms a new what?
|
[
"flower",
"stem",
"organism",
"leaf"
] |
C
|
Relevant Documents:
Document 0:::
Important structures in plant development are buds, shoots, roots, leaves, and flowers; plants produce these tissues and structures throughout their life from meristems located at the tips of organs, or between mature tissues. Thus, a living plant always has embryonic tissues. By contrast, an animal embryo will very early produce all of the body parts that it will ever have in its life. When the animal is born (or hatches from its egg), it has all its body parts and from that point will only grow larger and more mature. However, both plants and animals pass through a phylotypic stage that evolved independently and that causes a developmental constraint limiting morphological diversification.
According to plant physiologist A. Carl Leopold, the properties of organization seen in a plant are emergent properties which are more than the sum of the individual parts. "The assembly of these tissues and functions into an integrated multicellular organism yields not only the characteristics of the separate parts and processes but also quite a new set of characteristics which would not have been predictable on the basis of examination of the separate parts."
Growth
A vascular plant begins from a single-celled zygote, formed by fertilisation of an egg cell by a sperm cell. From that point, it begins to divide to form a plant embryo through the process of embryogenesis. As this happens, the resulting cells will organize so that one end becomes the first root while the other end forms the tip of the shoot. In seed plants, the embryo will develop one or more "seed leaves" (cotyledons). By the end of embryogenesis, the young plant will have all the parts necessary to begin its life.
Once the embryo germinates from its seed or parent plant, it begins to produce additional organs (leaves, stems, and roots) through the process of organogenesis. New roots grow from root meristems located at the tip of the root, and new stems and leaves grow from shoot meristems located at the
Document 1:::
Plant embryonic development, also plant embryogenesis is a process that occurs after the fertilization of an ovule to produce a fully developed plant embryo. This is a pertinent stage in the plant life cycle that is followed by dormancy and germination. The zygote produced after fertilization must undergo various cellular divisions and differentiations to become a mature embryo. An end stage embryo has five major components including the shoot apical meristem, hypocotyl, root meristem, root cap, and cotyledons. Unlike the embryonic development in animals, and specifically in humans, plant embryonic development results in an immature form of the plant, lacking most structures like leaves, stems, and reproductive structures. However, both plants and animals including humans, pass through a phylotypic stage that evolved independently and that causes a developmental constraint limiting morphological diversification.
Morphogenic events
Embryogenesis occurs naturally as a result of single, or double fertilization, of the ovule, giving rise to two distinct structures: the plant embryo and the endosperm which go on to develop into a seed. The zygote goes through various cellular differentiations and divisions in order to produce a mature embryo. These morphogenic events form the basic cellular pattern for the development of the shoot-root body and the primary tissue layers; it also programs the regions of meristematic tissue formation. The following morphogenic events are only particular to eudicots, and not monocots.
Plant
Following fertilization, the zygote and endosperm are present within the ovule. The zygote then undergoes an asymmetric transverse cell division that gives rise to two cells - a small apical cell resting above a large basal cell.
These two cells are very different, and give rise to different structures, establishing polarity in the embryo.
Apical cell: the small apical cell is on the top and contains
Document 2:::
In botany, a plant shoot consists of any plant stem together with its appendages, such as leaves and lateral buds, flowering stems, and flower buds. The new growth from seed germination that grows upward is a shoot where leaves will develop. In the spring, perennial plant shoots are the new growth that grows from the ground in herbaceous plants or the new stem or flower growth that grows on woody plants.
In everyday speech, shoots are often synonymous with stems. Stems, which are an integral component of shoots, provide an axis for buds, fruits, and leaves.
Young shoots are often eaten by animals because the fibers in the new growth have not yet completed secondary cell wall development, making the young shoots softer and easier to chew and digest.
As shoots grow and age, the cells develop secondary cell walls that have a hard and tough structure.
Some plants (e.g. bracken) produce toxins that make their shoots inedible or less palatable.
Shoot types of woody plants
Many woody plants have distinct short shoots and long shoots. In some angiosperms, the short shoots, also called spur shoots or fruit spurs, produce the majority of flowers and fruit. A similar pattern occurs in some conifers and in Ginkgo, although the "short shoots" of some genera such as Picea are so small that they can be mistaken for part of the leaf that they have produced.
A related phenomenon is seasonal heterophylly, which involves visibly different leaves from spring growth and later lammas growth. Whereas spring growth mostly comes from buds formed the previous season, and often includes flowers, lammas growth often involves long shoots.
See also
Bud
Crown (botany)
Heteroblasty (botany), an abrupt change in the growth pattern of some plants as they mature
Lateral shoot
Phyllotaxis, the arrangement of leaves along a plant stem
Seedling
Sterigma, the "woody peg" below the leaf of some conifers
Thorn (botany), true thorns, as distinct from spines or prickles, are short shoots
Document 3:::
The axillary bud (or lateral bud) is an embryonic or organogenic shoot located in the axil of a leaf. Each bud has the potential to form shoots, and may be specialized in producing either vegetative shoots (stems and branches) or reproductive shoots (flowers). Once formed, a bud may remain dormant for some time, or it may form a shoot immediately.
Overview
An axillary bud is an embryonic or organogenic shoot which lies dormant at the junction of the stem and petiole of a plant. It arises exogenously from the outer layer of the cortex of the stem.
Axillary buds do not become actively growing shoots on plants with strong apical dominance (the tendency to grow just the terminal bud on the main stem). Apical dominance occurs because the shoot apical meristem produces auxin which prevents axillary buds from growing. The axillary buds begin developing when they are exposed to less auxin, for example if the plant naturally has weak apical dominance, if apical dominance is broken by removing the terminal bud, or if the terminal bud has grown far enough away for the auxin to have less of an effect.
An example of axillary buds are the eyes of the potato.
Effects of auxin
As the apical meristem grows and forms leaves, a region of meristematic cells is left behind at the node between the stem and the leaf. These axillary buds are usually dormant, inhibited by auxin produced by the apical meristem, which is known as apical dominance.
If the apical meristem is removed, or has grown a sufficient distance away from an axillary bud, the axillary bud may become activated (or more appropriately freed from hormone inhibition). Like the apical meristem, axillary buds can develop into a stem or flower.
Diseases that affect axillary buds
Certain plant diseases - notably phytoplasmas - can cause the proliferation of axillary buds, and cause plants to become bushy in appearance.
Document 4:::
In biology and botany, indeterminate growth is growth that is not terminated in contrast to determinate growth that stops once a genetically pre-determined structure has completely formed. Thus, a plant that grows and produces flowers and fruit until killed by frost or some other external factor is called indeterminate. For example, the term is applied to tomato varieties that grow in a rather gangly fashion, producing fruit throughout the growing season. In contrast, a determinate tomato plant grows in a more bushy shape and is most productive for a single, larger harvest, then either tapers off with minimal new growth or fruit or dies.
Inflorescences
In reference to an inflorescence (a shoot specialised for bearing flowers, and bearing no leaves other than bracts), an indeterminate type (such as a raceme) is one in which the first flowers to develop and open are from the buds at the base, followed progressively by buds nearer to the growing tip. The growth of the shoot is not impeded by the opening of the early flowers or development of fruits and its appearance is of growing, producing, and maturing flowers and fruit indefinitely. In practice the continued growth of the terminal end necessarily peters out sooner or later, though without producing any definite terminal flower, and in some species it may stop growing before any of the buds have opened.
Not all plants produce indeterminate inflorescences however; some produce a definite terminal flower that terminates the development of new buds towards the tip of that inflorescence. In most species that produce a determinate inflorescence in this way, all of the flower buds are formed before the first ones begin to open, and all open more or less at the same time. In some species with determinate inflorescences however, the terminal flower blooms first, which stops the elongation of the main axis, but side buds develop lower down. One type of example is Dianthus; another type is exemplified by Allium; and yet ot
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When the bud is fully developed, it breaks away from the parent cell and forms a new what?
A. flower
B. stem
C. organism
D. leaf
Answer:
|
|
ai2_arc-725
|
multiple_choice
|
Why is it winter in North America when it is summer in South America?
|
[
"The south is always warmer than the north.",
"There is less land than water in the south.",
"North America receives less direct sunlight during the winter.",
"When it is December in North America, it is June in South America."
] |
C
|
Relevant Documents:
Document 0:::
Polar ecology is the relationship between plants and animals in a polar environment. Polar environments are in the Arctic and Antarctic regions. The Arctic region is in the Northern Hemisphere and contains land and the islands that surround it. Antarctica is in the Southern Hemisphere, and it also contains the land mass, surrounding islands and the ocean. Polar regions also contain the subantarctic and subarctic zones, which separate the polar regions from the temperate regions. Antarctica and the Arctic lie within the polar circles. The polar circles are imaginary lines shown on maps marking the areas that receive less sunlight due to less radiation. These areas either receive sunlight (midnight sun) or shade (polar night) 24 hours a day because of the earth's tilt. Plants and animals in the polar regions are able to withstand living in harsh weather conditions but are facing environmental threats that limit their survival.
Climate
Polar climates are cold, windy and dry. Because of the lack of precipitation and low temperatures, the Arctic and Antarctic are considered the world's largest deserts, or polar deserts. Much of the radiation from the sun that is received is reflected off the snow, making the polar regions cold. When the radiation is reflected, the heat is also reflected. The polar regions reflect 89-90% of the sun radiation that the earth receives. And because Antarctica is closer to the sun at perihelion, it receives 7% more radiation than the Arctic. Also, in the polar regions the atmosphere is thin. Because of this, the UV radiation that gets through the atmosphere can cause fast sun tanning and snow blindness.
Polar regions are dry areas; there is very little precipitation due to the cold air. There are some times when the humidity may be high but the water vapor present in the air may be low. Wind is also strong in the polar regions. Wind carries snow, creating blizzard-like conditions. Winds may also move small organisms or vegetation if it is present. The wind
Document 1:::
Climatic adaptation refers to adaptations of an organism that are triggered by the patterns of variation of abiotic factors that determine a specific climate. Annual means, seasonal variation and daily patterns of abiotic factors are properties of a climate to which organisms can be adapted. Changes in behavior, physical structure, internal mechanisms and metabolism are forms of adaptation that are caused by climate properties. Organisms of the same species that occur in different climates can be compared to determine which adaptations are due to climate and which are influenced mainly by other factors. Climatic adaptation is limited to adaptations that have been established, characterizing species that live within the specific climate. It is different from climate change adaptation, which refers to the ability to adapt to gradual changes of a climate. Once a climate has changed, the climate change adaptation that led to the survival of the specific organisms as a species can be seen as a climatic adaptation. Climatic adaptation is constrained by the genetic variability of the species in question.
Climate patterns
The patterns of variation of abiotic factors determine a climate and thus climatic adaptation. There are many different climates around the world, each with its unique patterns. Because of this, the manner of climatic adaptation shows large differences between the climates. A subarctic climate, for instance, has daylight time and temperature fluctuations as its most important factors, while in a rainforest climate the most important factor is the stable, high precipitation rate and a high average temperature that does not fluctuate much. A humid continental climate is marked by seasonal temperature variances which commonly lead to seasonal climate adaptations. Because the variance of these abiotic factors differs depending on the type of climate, differences in the manner of climatic adaptation are expected.
Research
Research on climatic adaptat
Document 2:::
Frost resistance is the ability of plants to survive cold temperatures. Generally, land plants of the northern hemisphere have higher frost resistance than those of the southern hemisphere. An example of a frost resistant plant is Drimys winteri which is more frost-tolerant than naturally occurring conifers and vessel-bearing angiosperms such as the Nothofagus that can be found in its range in southern South America.
Document 3:::
The seasonal semideciduous forest is a vegetation type that belongs to the Atlantic Forest biome (Inland Atlantic Forest), but is also found occasionally in the Cerrado. Typical of central Brazil, it is caused by a double climatic seasonality: a season of intense summer rains followed by a period of drought. It is composed of phanerophytes with leaf buds that are protected from drought by scales (cataphylls or hairs), having deciduous sclerophyllous or membranaceous adult leaves. The degree of deciduousness, i.e. leaf loss, depends on the intensity and duration of basically two factors: minimum and maximum temperatures and water balance deficiency. The percentage of deciduous trees in the forest as a whole is 20-50%.
The vegetation is located in the north and west of Paraná, region of the third plateau, where it presents different types of soil. It is also widely distributed in the southern portion of Mato Grosso do Sul, interspersed between fields up to the 21st parallel, where it appears in riparian forests, being called alluvial seasonal semideciduous forest.
Terminology
According to Rodrigues (1999), the seasonal semideciduous forest (IBGE, 1993) corresponds approximately to the following designations:
subtropical rain forest (Wettstein, 1904);
inland rain forests (Campos, 1912);
tropical semideciduous broadleaved forest (Kuhlmann, 1956);
tropical seasonal rain forest of the south-central plateau (Veloso, 1962);
mesophytic semideciduous forest (Rizzini, 1963);
sub-caducifolia or tropical seasonal forest (Andrade-Lima, 1966);
semideciduous plateau forest (Eiten, 1970);
subtropical foliated forests (Hueck, 1972);
submontane seasonal semideciduous forest (Veloso and Góes Filho, 1982);
semideciduous latifolia forest or plateau forest (Leitão Filho, 1982);
Mata de Cipó.
Categories
There is an IBGE (2012) altimetric division to delimit study regions, which is:
alluvial seasonal semideciduous forest: most frequent in the Pantanal;
lowland seaso
Document 4:::
The subarctic climate (also called subpolar climate, or boreal climate) is a continental climate with long, cold (often very cold) winters, and short, warm to cool summers. It is found on large landmasses, often away from the moderating effects of an ocean, generally at latitudes from 50°N to 70°N, poleward of the humid continental climates. Subarctic or boreal climates are the source regions for the cold air that affects temperate latitudes to the south in winter. These climates represent Köppen climate classification Dfc, Dwc, Dsc, Dfd, Dwd and Dsd.
Description
This type of climate offers some of the most extreme seasonal temperature variations found on the planet: in winter, temperatures can drop to below and in summer, the temperature may exceed . However, the summers are short; no more than three months of the year (but at least one month) must have a 24-hour average temperature of at least to fall into this category of climate, and the coldest month should average below (or ). Record low temperatures can approach .
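To make that rule concrete, here is a minimal sketch (not part of the excerpt; the numeric thresholds, which were lost from the text above, are the standard Köppen values and are supplied here as assumptions): a subarctic climate has one to three months averaging 10 °C or more and a coldest month averaging below 0 °C (−3 °C in the alternative convention).
# Hypothetical helper, not from the source. Thresholds follow the standard
# Koppen convention: coldest month below 0 C (or -3 C in the alternative
# convention) and one to three months averaging at least 10 C.
def is_subarctic(monthly_means_c, cold_month_max=0.0, warm_month_min=10.0):
    warm_months = sum(1 for t in monthly_means_c if t >= warm_month_min)
    return min(monthly_means_c) < cold_month_max and 1 <= warm_months <= 3

# Example: a station with a short, mild summer and a long, very cold winter.
print(is_subarctic([-38, -33, -20, -5, 7, 16, 19, 15, 6, -8, -27, -37]))  # True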
With 5–7 consecutive months when the average temperature is below freezing, all moisture in the soil and subsoil freezes solidly to depths of many feet. Summer warmth is insufficient to thaw more than a few surface feet, so permafrost prevails under most areas not near the southern boundary of this climate zone. Seasonal thaw penetrates from , depending on latitude, aspect, and type of ground. Some northern areas with subarctic climates located near oceans (southern Alaska, the northern fringe of Europe, Sakhalin Oblast and Kamchatka Oblast), have milder winters and no permafrost, and are more suited for farming unless precipitation is excessive. The frost-free season is very short, varying from about 45 to 100 days at most, and a freeze can occur anytime outside the summer months in many areas.
Description
The first D indicates continentality, with the coldest month below (or ).
The second letter denotes precipitation patterns:
s: A dry
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Why is it winter in North America when it is summer in South America?
A. The south is always warmer than the north.
B. There is less land than water in the south.
C. North America receives less direct sunlight during the winter.
D. When it is December in North America, it is June in South America.
Answer:
|
|
sciq-9367
|
multiple_choice
|
What is the term for organisms that make their own food?
|
[
"plastids",
"monocots",
"omnivores",
"autotrophs"
] |
D
|
Relevant Documents:
Document 0:::
The trophic level of an organism is the position it occupies in a food web. A food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finish with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a food "web". Ecological communities with higher biodiversity form more complex trophic paths.
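As a small illustrative sketch (not from the source), the "number of steps" idea can be computed for a toy food web, using the common convention that a consumer sits one level above the highest-level organism it eats; real food-web analyses often use diet-weighted averages instead.
# Hypothetical four-species food web: each species maps to what it eats.
food_web = {
    "grass": [],                  # producer: eats nothing, trophic level 1
    "grasshopper": ["grass"],
    "frog": ["grasshopper"],
    "heron": ["frog", "grasshopper"],
}

def trophic_level(species, web):
    prey = web[species]
    if not prey:                  # producers start the chain at level 1
        return 1
    return 1 + max(trophic_level(p, web) for p in prey)

for species in food_web:
    print(species, trophic_level(species, food_web))
# grass 1, grasshopper 2, frog 3, heron 4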
The word trophic derives from the Greek τροφή (trophē) referring to food or nourishment.
History
The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman).
Overview
The three basic ways in which organisms get food are as producers, consumers, and decomposers.
Producers (autotrophs) are typically plants or algae. Plants and algae do not usually eat other organisms, but pull nutrients from the soil or the ocean and manufacture their own food using photosynthesis. For this reason, they are called primary producers. In this way, it is energy from the sun that usually powers the base of the food chain. An exception occurs in deep-sea hydrothermal ecosystems, where there is no sunlight. Here primary producers manufacture food through a process called chemosynthesis.
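As a worked illustration (added here, not part of the excerpt), the overall photosynthesis reaction by which primary producers capture solar energy as chemical energy can be written as
\[ 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\text{light energy}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}, \]
so the energy entering the base of most food chains is sunlight stored in the bonds of glucose.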
Consumers (heterotrophs) are species that cannot manufacture their own food and need to consume other organisms. Animals that eat primary producers (like plants) are called herbivores. Animals that eat other animals are called carnivores, and animals that eat both plants and other animals are called omnivores.
Decomposers (detritivores) break down dead plant and animal material and wastes and release it again as energy and nutrients into
Document 1:::
Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food.
Classification of consumer types
The standard categorization
Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores; omnivores are both meat eaters and plant eaters. An extensive classification of consumer categories based on a list of feeding behaviors exists.
The Getz categorization
Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken down plant and animal) matter. It also distinguishes between consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage.
In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology. Thus a bestivore, such as a cat, preys on live animal
Document 2:::
Heterotrophic nutrition is a mode of nutrition in which organisms depend upon other organisms for food to survive. Unlike green plants, they cannot make their own food. Heterotrophic organisms have to take in all the organic substances they need to survive.
All animals, certain types of fungi, and non-photosynthesizing plants are heterotrophic. In contrast, green plants, red algae, brown algae, and cyanobacteria are all autotrophs, which use photosynthesis to produce their own food from sunlight. Some fungi may be saprotrophic, meaning they will extracellularly secrete enzymes onto their food to be broken down into smaller, soluble molecules which can diffuse back into the fungus.
Description
All eukaryotes except for green plants and algae are unable to manufacture their own food: they obtain food from other organisms. This mode of nutrition is also known as heterotrophic nutrition.
All heterotrophs (except blood and gut parasites) have to convert solid food into soluble compounds that are capable of being absorbed (digestion). The soluble products of digestion are then broken down to release energy (respiration). All heterotrophs depend on autotrophs for their nutrition. Heterotrophic organisms have only four types of nutrition.
Footnotes
Document 3:::
The soil food web is the community of organisms living all or part of their lives in the soil. It describes a complex living system in the soil and how it interacts with the environment, plants, and animals.
Food webs describe the transfer of energy between species in an ecosystem. While a food chain examines one, linear, energy pathway through an ecosystem, a food web is more complex and illustrates all of the potential pathways. Much of this transferred energy comes from the sun. Plants use the sun’s energy to convert inorganic compounds into energy-rich, organic compounds, turning carbon dioxide and minerals into plant material by photosynthesis. Plant flowers exude energy-rich nectar above ground and plant roots exude acids, sugars, and ectoenzymes into the rhizosphere, adjusting the pH and feeding the food web underground.
Plants are called autotrophs because they make their own energy; they are also called producers because they produce energy available for other organisms to eat. Heterotrophs are consumers that cannot make their own food. In order to obtain energy they eat plants or other heterotrophs.
Above ground food webs
In above ground food webs, energy moves from producers (plants) to primary consumers (herbivores) and then to secondary consumers (predators). The phrase, trophic level, refers to the different levels or steps in the energy pathway. In other words, the producers, consumers, and decomposers are the main trophic levels. This chain of energy transferring from one species to another can continue several more times, but eventually ends. At the end of the food chain, decomposers such as bacteria and fungi break down dead plant and animal material into simple nutrients.
Methodology
The nature of soil makes direct observation of food webs difficult. Since soil organisms range in size from less than 0.1 mm (nematodes) to greater than 2 mm (earthworms) there are many different ways to extract them. Soil samples are often taken using a metal
Document 4:::
A monogastric organism has a simple single-chambered stomach (one stomach). Examples of monogastric herbivores are horses and rabbits. Examples of monogastric omnivores include humans, pigs, hamsters and rats. Furthermore, there are monogastric carnivores such as cats. A monogastric organism is comparable to ruminant organisms (which have a four-chambered complex stomach), such as cattle, goats, or sheep. Herbivores with monogastric digestion can digest cellulose in their diets by way of symbiotic gut bacteria. However, their ability to extract energy from cellulose digestion is less efficient than in ruminants.
Herbivores digest cellulose by microbial fermentation. Monogastric herbivores which can digest cellulose nearly as well as ruminants are called hindgut fermenters, while ruminants are called foregut fermenters. These are subdivided into two groups based on the relative size of various digestive organs in relationship to the rest of the system: colonic fermenters tend to be larger species such as horses and rhinos, and cecal fermenters are smaller animals such as rabbits and rodents. Great apes derive significant amounts of phytanic acid from the hindgut fermentation of plant materials.
Monogastrics cannot digest the fiber molecule cellulose as efficiently as ruminants, though the ability to digest cellulose varies amongst species.
A monogastric digestive system works as soon as the food enters the mouth. Saliva moistens the food and begins the digestive process. (Note that horses have no (or negligible amounts of) amylase in their saliva). After being swallowed, the food passes from the esophagus into the stomach, where stomach acid and enzymes help to break down the food. Once food leaves the stomach and enters the small intestine, the pancreas secretes enzymes and alkali to neutralize the stomach acid.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for organisms that make their own food?
A. plastids
B. monocots
C. omnivores
D. autotrophs
Answer:
|
|
ai2_arc-755
|
multiple_choice
|
Aquaculture is the raising of freshwater and marine plants and animals for food. How would a company raising fish best demonstrate good stewardship of natural resources?
|
[
"They would raise fish with as little pollution as possible.",
"They would raise fish as economically as possible.",
"They would raise fish from as many species as possible.",
"They would raise fish to be as flavorful as possible."
] |
A
|
Relevant Documents:
Document 0:::
Fisheries science is the academic discipline of managing and understanding fisheries. It is a multidisciplinary science, which draws on the disciplines of limnology, oceanography, freshwater biology, marine biology, meteorology, conservation, ecology, population dynamics, economics, statistics, decision analysis, management, and many others in an attempt to provide an integrated picture of fisheries. In some cases new disciplines have emerged, as in the case of bioeconomics and fisheries law. Because fisheries science is such an all-encompassing field, fisheries scientists often use methods from a broad array of academic disciplines. Over the most recent several decades, there have been declines in fish stocks (populations) in many regions along with increasing concern about the impact of intensive fishing on marine and freshwater biodiversity.
Fisheries science is typically taught in a university setting, and can be the focus of an undergraduate, master's or Ph.D. program. Some universities offer fully integrated programs in fisheries science. Graduates of university fisheries programs typically find employment as scientists, fisheries managers of both recreational and commercial fisheries, researchers, aquaculturists, educators, environmental consultants and planners, conservation officers, and many others.
Fisheries research
Because fisheries take place in a diverse set of aquatic environments (i.e., high seas, coastal areas, large and small rivers, and lakes of all sizes), research requires different sampling equipment, tools, and techniques. For example, studying trout populations inhabiting mountain lakes requires a very different set of sampling tools than, say, studying salmon in the high seas. Ocean fisheries research vessels (FRVs) often require platforms which are capable of towing different types of fishing nets, collecting plankton or water samples from a range of depths, and carrying acoustic fish-finding equipment. Fisheries research vessels a
Document 1:::
The Bachelor of Fisheries Science (B.F.Sc) is a bachelor's degree for studies in fisheries science in India. "Fisheries science" is the academic discipline of managing and understanding fisheries. It is a multidisciplinary science, which draws on the disciplines of aquaculture including breeding, genetics, biotechnology, nutrition, farming, diagnosis of diseases in fishes, other aquatic resources, medical treatment of aquatic animals; fish processing including curing, canning, freezing, value addition, byproducts and waste utilization, quality assurance and certification, fisheries microbiology, fisheries biochemistry; fisheries resource management including biology, anatomy, taxonomy, physiology, population dynamics; fisheries environment including oceanography, limnology, ecology, biodiversity, aquatic pollution; fishing technology including gear and craft engineering, navigation and seamanship, marine engines; fisheries economics and management and fisheries extension. Fisheries science is generally a 4-year course typically taught in a university setting, and can be the focus of an undergraduate, postgraduate or Ph.D. program. Bachelor level fisheries courses (B.F.Sc) were started by the state agricultural universities to make available the much needed technically competent personnel for teaching, research and development and transfer of technology in the field of fisheries science.
History
Fisheries education in India, started with the establishment of the Central Institute of Fisheries Education, Mumbai in 1961 for in service training and later the establishment of the first Fisheries College at Mangalore under the State Agricultural University (SAU) system in 1969, has grown manifold and evolved in the last four decades as a professional discipline consisting of Bachelors, Masters and Doctoral programmes in various branches of Fisheries Science. At present, 25 Fisheries Colleges offer four-year degree programme in Bachelor of Fisheries Science (B.F.Sc), whi
Document 2:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
Document 3:::
Ethnoichthyology is an area in anthropology that examines human knowledge of fish, the uses of fish, and importance of fish in different human societies. It draws on knowledge from many different areas including ichthyology, economics, oceanography, and marine botany.
This area of study seeks to understand the details of the interactions of humans with fish, including both cognitive and behavioural aspects. A knowledge of fish and their life strategies is extremely important to fishermen. In order to conserve fish species, it is also important to be aware of other cultures' knowledge of fish. Ignorance of the effects of human activity on fish populations may endanger fish species. Knowledge of fish can be gained through experience, scientific research, or information passed down through generations. Some factors that affect the amount of knowledge acquired include the value and abundance of the various types of fish, their usefulness in fisheries, and the amount of time one spends observing the fishes' life history patterns.
Etymology
The term was first used in the scientific literature by W.T. Morrill. He justified the origin and use of this term by stating that it arose from the model of "ethnobotany".
Importance in conservation
Ethnoichthyology can be very useful to the study and investigation of environmental changes caused by anthropogenic factors, such as the decline of fish stocks, the disappearance of fish species, and the introduction of non-native species of fish in certain environments. Ethnoichthyological knowledge can be used to create environmental conservation strategies. With a sound knowledge of fish ecology, informed decisions with respect to fishing practices can be made, and destructive environmental practices can be avoided. Ethnoichthyological knowledge can be the difference between conserving a species of fish, or placing a moratorium on fishing.
Newfoundland's cod fishery collapse
The collapse of the cod fishery in Newfoundland and Lab
Document 4:::
The goal of fisheries management is to produce sustainable biological, environmental and socioeconomic benefits from renewable aquatic resources. Wild fisheries are classified as renewable when the organisms of interest (e.g., fish, shellfish, amphibians, reptiles and marine mammals) produce an annual biological surplus that with judicious management can be harvested without reducing future productivity. Fishery management employs activities that protect fishery resources so sustainable exploitation is possible, drawing on fisheries science and possibly including the precautionary principle.
Modern fisheries management is often referred to as a governmental system of appropriate environmental management rules based on defined objectives and a mix of management means to implement the rules, which are put in place by a system of monitoring control and surveillance. An ecosystem approach to fisheries management has started to become a more relevant and practical way to manage fisheries. According to the Food and Agriculture Organization of the United Nations (FAO), there are "no clear and generally accepted definitions of fisheries management". However, the working definition used by the FAO and much cited elsewhere is:
The integrated process of information gathering, analysis, planning, consultation, decision-making, allocation of resources and formulation and implementation, with necessary law enforcement to ensure environmental compliance, of regulations or rules which govern fisheries activities in order to ensure the continued productivity of the resources and the accomplishment of other fisheries objectives.
Objectives
Political
According to the FAO, fisheries management should be based explicitly on political objectives, ideally with transparent priorities. Political goals can also be a weak part of fisheries management, since the objectives can conflict with each other. Typical political objectives when exploiting a commercially important fish resource are
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Aquaculture is the raising of freshwater and marine plants and animals for food. How would a company raising fish best demonstrate good stewardship of natural resources?
A. They would raise fish with as little pollution as possible.
B. They would raise fish as economically as possible.
C. They would raise fish from as many species as possible.
D. They would raise fish to be as flavorful as possible.
Answer:
|
|
sciq-6854
|
multiple_choice
|
Red blood cells, white blood cells and what other cell is found in blood?
|
[
"antibodies",
"platelets",
"plasma cells",
"hemoglobin"
] |
B
|
Relevant Documents:
Document 0:::
White blood cells, also called leukocytes, immune cells or immunocytes, are cells of the immune system that are involved in protecting the body against both infectious disease and foreign invaders. White blood cells include three main subtypes: granulocytes, lymphocytes and monocytes.
All white blood cells are produced and derived from multipotent cells in the bone marrow known as hematopoietic stem cells. Leukocytes are found throughout the body, including the blood and lymphatic system. All white blood cells have nuclei, which distinguishes them from the other blood cells, the anucleated red blood cells (RBCs) and platelets. The different white blood cells are usually classified by cell lineage (myeloid cells or lymphoid cells). White blood cells are part of the body's immune system. They help the body fight infection and other diseases. Types of white blood cells are granulocytes (neutrophils, eosinophils, and basophils), and agranulocytes (monocytes, and lymphocytes (T cells and B cells)). Myeloid cells (myelocytes) include neutrophils, eosinophils, mast cells, basophils, and monocytes. Monocytes are further subdivided into dendritic cells and macrophages. Monocytes, macrophages, and neutrophils are phagocytic. Lymphoid cells (lymphocytes) include T cells (subdivided into helper T cells, memory T cells, cytotoxic T cells), B cells (subdivided into plasma cells and memory B cells), and natural killer cells. Historically, white blood cells were classified by their physical characteristics (granulocytes and agranulocytes), but this classification system is less frequently used now. Produced in the bone marrow, white blood cells defend the body against infections and disease. An excess of white blood cells is usually due to infection or inflammation. Less commonly, a high white blood cell count could indicate certain blood cancers or bone marrow disorders.
The number of leukocytes in the blood is often an indicator of disease, and thus the white blood
Document 1:::
A splenocyte can be any one of the different white blood cell types as long as it is situated in the spleen or purified from splenic tissue.
Splenocytes consist of a variety of cell populations such as T and B lymphocytes, dendritic cells and macrophages, which have different immune functions.
Document 2:::
Lymph node stromal cells are essential to the structure and function of the lymph node whose functions include: creating an internal tissue scaffold for the support of hematopoietic cells; the release of small molecule chemical messengers that facilitate interactions between hematopoietic cells; the facilitation of the migration of hematopoietic cells; the presentation of antigens to immune cells at the initiation of the adaptive immune system; and the homeostasis of lymphocyte numbers. Stromal cells originate from multipotent mesenchymal stem cells.
Structure
Lymph nodes are enclosed in an external fibrous capsule, from which thin walls of sinew called trabeculae penetrate into the lymph node, partially dividing it. Beneath the external capsule and along the courses of the trabeculae, are peritrabecular and subcapsular sinuses. These sinuses are cavities containing macrophages (specialised cells which help to keep the extracellular matrix in order).
The interior of the lymph node has two regions: the cortex and the medulla. In the cortex, lymphoid tissue is organized into nodules. In the nodules, T lymphocytes are located in the T cell zone. B lymphocytes are located in the B cell follicle. The primary B cell follicle matures in germinal centers. In the medulla are hematopoietic cells (which contribute to the formation of the blood) and stromal cells.
Near the medulla is the hilum of lymph node. This is the place where blood vessels enter and leave the lymph node and lymphatic vessels leave the lymph node. Lymph vessels entering the node do so along the perimeter (outer surface).
Function
The lymph nodes, the spleen and Peyer's patches, together are known as secondary lymphoid organs. Lymph nodes are found between lymphatic ducts and blood vessels. Afferent lymphatic vessels bring lymph fluid from the peripheral tissues to the lymph nodes. The lymph tissue in the lymph nodes consists of immune cells (95%), for example lymphocytes, and stromal cells (1% to
Document 3:::
The red pulp of the spleen is composed of connective tissue known also as the cords of Billroth and many splenic sinusoids that are engorged with blood, giving it a red color. Its primary function is to filter the blood of antigens, microorganisms, and defective or worn-out red blood cells.
The spleen is made of red pulp and white pulp, separated by the marginal zone; 76-79% of a normal spleen is red pulp. Unlike white pulp, which mainly contains lymphocytes such as T cells, red pulp is made up of several different types of blood cells, including platelets, granulocytes, red blood cells, and plasma.
The red pulp also acts as a large reservoir for monocytes. These monocytes are found in clusters in the Billroth's cords (red pulp cords). The population of monocytes in this reservoir is greater than the total number of monocytes present in circulation. They can be rapidly mobilised to leave the spleen and assist in tackling ongoing infections.
Sinusoids
The splenic sinusoids are wide vessels that drain into pulp veins, which themselves drain into trabecular veins. Gaps in the endothelium lining the sinusoids mechanically filter blood cells as they enter the spleen. Worn-out or abnormal red cells attempting to squeeze through the narrow intercellular spaces become badly damaged, and are subsequently devoured by macrophages in the red pulp. In addition to clearing aged red blood cells, the sinusoids also filter out cellular debris, particles that could clutter up the bloodstream.
Cells found in red pulp
Red pulp consists of a dense network of fine reticular fiber, continuous with those of the splenic trabeculae, to which are applied flat, branching cells. The meshes of the reticulum are filled with blood:
White blood cells are found to be in larger proportion than they are in ordinary blood.
Large rounded cells, termed splenic cells, are also seen; these are capable of ameboid movement, and often contain pigment and red-blood corpuscles in their interior.
The cell
Document 4:::
In haematology, atypical localization of immature precursors (ALIP) refers to the finding of atypically localized precursors (myeloblasts and promyelocytes) on bone marrow biopsy. In healthy humans, precursors are rare and are found localized near the endosteum, consisting of 1-2 cells. In some cases of myelodysplastic syndromes, immature precursors might be located in the intertrabecular region and occasionally aggregate as clusters of 3 ~ 5 cells. The presence of ALIPs is associated with worse prognosis of MDS. Recently, in bone marrow sections of patients with acute myeloid leukemia, cells similar to ALIPs were defined as ALIP-like clusters. The presence of ALIP-like clusters in AML patients within remission was reported to be associated with early relapse of the disease.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Red blood cells, white blood cells and what other cell is found in blood?
A. antibodies
B. platelets
C. plasma cells
D. hemoglobin
Answer:
|
|
ai2_arc-77
|
multiple_choice
|
Giant redwood trees change energy from one form to another. How is energy changed by the trees?
|
[
"They change chemical energy into kinetic energy.",
"They change solar energy into chemical energy.",
"They change wind energy into heat energy.",
"They change mechanical energy into solar energy."
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
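A short worked check of the example above (added here, not part of the source): for a reversible (quasi-static) adiabatic expansion of an ideal gas with heat capacity ratio γ > 1,
\[ T V^{\gamma-1} = \text{constant} \quad\Longrightarrow\quad \frac{T_2}{T_1} = \left(\frac{V_1}{V_2}\right)^{\gamma-1} < 1 \ \text{for}\ V_2 > V_1, \]
so the gas cools as it expands, because it does work on its surroundings at the expense of its internal energy; the intended answer is "decreases".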
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 2:::
Transpirational cooling is the cooling provided as plants transpire water. Excess heat generated from solar radiation is damaging to plant cells and thermal injury occurs during drought or when there is rapid transpiration which produces wilting. Green vegetation contributes to moderating climate by being cooler than adjacent bare earth or constructed areas. As plant leaves transpire they use energy to evaporate water aggregating up to a huge volume globally every day.
An individual tree transpiring 100 litres of water provides a cooling effect equivalent to roughly 70 kWh of energy, carried away as latent heat of vaporisation. Urban heat island effects can be attributed to the replacement of vegetation by constructed surfaces. Deforested areas show higher temperatures than adjacent intact forest. Forests and other natural ecosystems support climate stabilisation.
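That figure can be sanity-checked from the latent heat of vaporisation of water; the sketch below uses assumed round values (1 litre of water taken as 1 kg, latent heat of roughly 2.45 MJ/kg near typical leaf temperatures) rather than anything stated in the text.

```python
# Rough check of the "100 litres ~ 70 kWh" cooling claim (assumed round values).
litres_transpired = 100.0                 # litres of water per tree
mass_kg = litres_transpired * 1.0         # ~1 kg per litre
latent_heat_mj_per_kg = 2.45              # MJ/kg near typical leaf temperatures

energy_mj = mass_kg * latent_heat_mj_per_kg
energy_kwh = energy_mj / 3.6              # 1 kWh = 3.6 MJ
print(f"Cooling energy: {energy_kwh:.0f} kWh")   # ~68 kWh, close to the quoted 70 kWh
```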
The Earth’s energy budget reveals pathways to mitigate climate change using our knowledge of the efficacy of how plants cool and moderating Western approaches with proven indigenous and traditional sources of knowledge.
Transpiration and cooling
Evapotranspiration is the combined processes moving water from the earth’s surface into the atmosphere. Transpiration is the movement of water through a plant and out of its leaves and other aerial parts into the atmosphere. This movement is driven by solar energy. In the tallest trees, such as Sequoia sempervirens, the water rises well over 100 metres from root-tip to canopy leaves. Such trees also exploit evaporation to keep the surface cool. Water vapour from evapotranspiration mixed with air moves upwards to the point of saturation and then, helped by the emissions of cloud condensation nuclei, forms clouds. Each gram molecule (mole) of condensing water will bring about a marked 1200-fold plus reduction in volume. The simultaneous release of latent heat will drive air from below to fill the partial vacuum. The energy required for the surrounding air to move in is readily calculated from the small (one-fifteenth of late
Document 3:::
Energy forestry is a form of forestry in which a fast-growing species of tree or woody shrub is grown specifically to provide biomass or biofuel for heating or power generation.
The two forms of energy forestry are short rotation coppice and short rotation forestry:
Short rotation coppice may include tree crops of poplar, willow or eucalyptus, grown for two to five years before harvest.
Short rotation forestry are crops of alder, ash, birch, eucalyptus, poplar, and sycamore, grown for eight to twenty years before harvest.
Benefits
The main advantage of using "grown fuels", as opposed to fossil fuels such as coal, natural gas and oil, is that while they are growing they absorb the near-equivalent in carbon dioxide (an important greenhouse gas) to that which is later released in their burning. In comparison, burning fossil fuels increases atmospheric carbon unsustainably, by using carbon that was added to the Earth's carbon sink millions of years ago. This is a prime contributor to climate change.
According to the FAO, compared to other energy crops, wood is among the most efficient sources of bioenergy in terms of quantity of energy released by unit of carbon emitted. Other advantages of generating energy from trees, as opposed to agricultural crops, are that trees do not have to be harvested each year, the harvest can be delayed when market prices are down, and the products can fulfil a variety of end-uses.
Yields of some varieties can be as high as 11 oven dry tonnes per hectare every year. However, commercial experience on plantations in Scandinavia have shown lower yield rates.
These crops can also be used in bank stabilisation and phytoremediation. In fact, experiments in Sweden with willow plantations have shown many beneficial effects on soil and water quality when compared to conventional agricultural crops (such as cereals). These beneficial effects have been the basis for the design of multifunctional production systems to meet emerging b
Document 4:::
Arboriculture () is the cultivation, management, and study of individual trees, shrubs, vines, and other perennial woody plants. The science of arboriculture studies how these plants grow and respond to cultural practices and to their environment. The practice of arboriculture includes cultural techniques such as selection, planting, training, fertilization, pest and pathogen control, pruning, shaping, and removal.
Overview
A person who practices or studies arboriculture can be termed an arborist or an arboriculturist. A tree surgeon is more typically someone who is trained in the physical maintenance and manipulation of trees and therefore more a part of the arboriculture process rather than an arborist. Risk management, legal issues, and aesthetic considerations have come to play prominent roles in the practice of arboriculture. Businesses often need to hire arboriculturists to complete "tree hazard surveys" and generally manage the trees on-site to fulfill occupational safety and health obligations.
Arboriculture is primarily focused on individual woody plants and trees maintained for permanent landscape and amenity purposes, usually in gardens, parks or other populated settings, by arborists, for the enjoyment, protection, and benefit of people.
Arboricultural matters are also considered to be within the practice of urban forestry, yet the divisions between the two are not clearly distinct or discrete.
Tree Benefits
Tree benefits are the economic, ecological, social and aesthetic uses, functions, purposes, or services of a tree (or group of trees) in its situational context in the landscape.
Environmental tree benefits
Erosion control and soil retention
Improved water infiltration and percolation
Protection from exposure: windbreak, shade, impact from hail/rainfall
Humidification of the air
Food for decomposers, consumers, and pollinators
Soil health: organic matter accumulation from leaf litter and root exudates (symbiotic microbes)
Ecological habitat
Mod
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Giant redwood trees change energy from one form to another. How is energy changed by the trees?
A. They change chemical energy into kinetic energy.
B. They change solar energy into chemical energy.
C. They change wind energy into heat energy.
D. They change mechanical energy into solar energy.
Answer:
|
|
sciq-1225
|
multiple_choice
|
Changing levels of what substances partly explain emotional ups and downs in teens?
|
[
"nutrients",
"hormones",
"enzymes",
"acids"
] |
B
|
Relavent Documents:
Document 0:::
Lövheim Cube of Emotion is a theoretical model for the relationship between the monoamine neurotransmitters serotonin, dopamine and noradrenaline and emotions. The model was presented in 2012 by Swedish researcher Hugo Lövheim.
Lövheim classifies emotions according to Silvan Tomkins, and orders the basic emotions in a three-dimensional coordinate system where the level of the monoamine neurotransmitters form orthogonal axes. The model is regarded as a dimensional model of emotion.
The main concepts of the hypothesis are that the monoamine neurotransmitters are orthogonal in essence, and the proposed one-to-one relationship between the monoamine neurotransmitters and emotions.
Document 1:::
Scientific studies have found that different brain areas show altered activity in humans with major depressive disorder (MDD), and this has encouraged advocates of various theories that seek to identify a biochemical origin of the disease, as opposed to theories that emphasize psychological or situational causes. Factors spanning these causative groups include nutritional deficiencies in magnesium, vitamin D, and tryptophan with situational origin but biological impact. Several theories concerning the biologically based cause of depression have been suggested over the years, including theories revolving around monoamine neurotransmitters, neuroplasticity, neurogenesis, inflammation and the circadian rhythm. Physical illnesses, including hypothyroidism and mitochondrial disease, can also trigger depressive symptoms.
Neural circuits implicated in depression include those involved in the generation and regulation of emotion, as well as in reward. Abnormalities are commonly found in the lateral prefrontal cortex, whose putative function is generally considered to involve regulation of emotion. Regions involved in the generation of emotion and reward such as the amygdala, anterior cingulate cortex (ACC), orbitofrontal cortex (OFC), and striatum are frequently implicated as well. These regions are innervated by monoaminergic nuclei, and tentative evidence suggests a potential role for abnormal monoaminergic activity.
Genetic factors
Difficulty of gene studies
Historically, candidate gene studies have been a major focus of study. However, because the large number of possible genes reduces the likelihood of choosing a correct candidate gene, Type I errors (false positives) are highly likely. Candidate gene studies frequently possess a number of flaws, including frequent genotyping errors and being statistically underpowered. These effects are compounded by the usual assessment of genes without regard for gene-gene interactions. These limitations are reflected in the fact that no candid
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
Cue reactivity is a type of learned response which is observed in individuals with an addiction and involves significant physiological and subjective reactions to presentations of drug-related stimuli (i.e., drug cues).
In investigations of these reactions in people with substance use disorders, changes in self-reported drug craving, physiological responses, and drug use are monitored as they are exposed to drug-related cues (e.g., cigarettes, bottles of alcohol, drug paraphernalia) or drug-neutral cues (e.g., pencils, glasses of water, a set of car keys).
Scientific theory
Cue reactivity is considered a risk factor for recovering addicts to relapse. There are two general types of cues: discrete which includes the substance itself and contextual which includes environments in which the substance is found. For example, for an alcoholic an alcoholic beverage would be a discrete cue and a bar would be a contextual cue. There are many different reactions to cues including withdrawal-like responses, opponent process responses, and substance-like responses.
A meta-analysis of 41 cue reactivity studies with people that have an alcohol, heroin, or cocaine addiction strongly supports the finding that people who have addictions have significant cue-specific reactions to drug-related stimuli. In general, these individuals, regardless of drug of abuse, report robust increases in craving and exhibit modest changes in autonomic responses, such as increases in heart rate and skin conductance and decreases in skin temperature, when exposed to drug-related versus neutral stimuli. Surprisingly, despite their obvious clinical relevance, drug use or drug-seeking behaviors are seldom measured in cue reactivity studies. However, when drug-use measures are used in cue reactivity studies the typical finding is a modest increase in drug-seeking or drug-use behavior.
Development
Clinical implications
Since people with substance use disorders are highly reactive to environmental cues pre
Document 4:::
Acute tryptophan depletion (ATD) is a technique used extensively to study the effect of low serotonin in the brain. This experimental approach reduces the availability of tryptophan, an amino acid which serves as the precursor to serotonin. The lack of mood-lowering effects after ATD in healthy subjects seems to contradict a direct causal relationship between acutely decreased serotonin levels and depression, although mood-lowering effects are observed in certain vulnerable individuals.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Changing levels of what substances partly explain emotional ups and downs in teens?
A. nutrients
B. hormones
C. enzymes
D. acids
Answer:
|
|
sciq-5650
|
multiple_choice
|
The majority of salamanders lack what organs, so respiration occurs through the skin or through external gills?
|
[
"throats",
"lungs",
"noses",
"mouths"
] |
B
|
Relavent Documents:
Document 0:::
Fish anatomy is the study of the form or morphology of fish. It can be contrasted with fish physiology, which is the study of how the component parts of fish function together in the living fish. In practice, fish anatomy and fish physiology complement each other, the former dealing with the structure of a fish, its organs or component parts and how they are put together, such as might be observed on the dissecting table or under the microscope, and the latter dealing with how those components function together in living fish.
The anatomy of fish is often shaped by the physical characteristics of water, the medium in which fish live. Water is much denser than air, holds a relatively small amount of dissolved oxygen, and absorbs more light than air does. The body of a fish is divided into a head, trunk and tail, although the divisions between the three are not always externally visible. The skeleton, which forms the support structure inside the fish, is either made of cartilage (cartilaginous fish) or bone (bony fish). The main skeletal element is the vertebral column, composed of articulating vertebrae which are lightweight yet strong. The ribs attach to the spine and there are no limbs or limb girdles. The main external features of the fish, the fins, are composed of either bony or soft spines called rays which, with the exception of the caudal fins, have no direct connection with the spine. They are supported by the muscles which compose the main part of the trunk.
The heart has two chambers and pumps the blood through the respiratory surfaces of the gills and then around the body in a single circulatory loop. The eyes are adapted for seeing underwater and have only local vision. There is an inner ear but no external or middle ear. Low-frequency vibrations are detected by the lateral line system of sense organs that run along the length of the sides of fish, which responds to nearby movements and to changes in water pressure.
Sharks and rays are basal fish with
Document 1:::
Myomeres are blocks of skeletal muscle tissue arranged in sequence, commonly found in aquatic chordates. Myomeres are separated from adjacent myomeres by connective fascia (myosepta) and most easily seen in larval fishes or in the olm. Myomere counts are sometimes used for identifying specimens, since their number corresponds to the number of vertebrae in the adults. Location varies, with some species containing these only near the tails, while some have them located near the scapular or pelvic girdles. Depending on the species, myomeres could be arranged in an epaxial or hypaxial manner. Hypaxial refers to ventral muscles and related structures while epaxial refers to more dorsal muscles. The horizontal septum divides these two regions in vertebrates from cyclostomes to gnathostomes. In terrestrial chordates, the myomeres become fused as well as indistinct, due to the disappearance of myosepta.
Shape
The shape of myomeres varies by species. Myomere muscle fibers are commonly zig-zag, "V"-shaped (lancelets), "W"-shaped (fishes), or straight (tetrapods). Generally, cyclostome myomeres are arranged in vertical strips, while those of jawed fishes are folded in a complex manner as an adaptation for swimming. Specifically, myomeres of elasmobranchs and eels are "W"-shaped. In contrast, myomeres of tetrapods run vertically and do not display complex folding. Mudpuppies are another species with simply arranged myomeres. Myomeres overlap each other in succession, so activation of one myomere also allows neighboring myomeres to activate.
Myomeres are made up of myoglobin-rich dark muscle as well as white muscle. Dark muscle, generally, functions as slow-twitch muscle fibers while white muscle is composed of fast-twitch fibers.
Function
Specifically, three types of myomeres in fish-like chordates include amphioxine (lancelet), cyclostomine (jawless fish), and gnathostomine (jawed fish). A common function shared by all of these is that they function to flex the body lateral
Document 2:::
Fish gills are organs that allow fish to breathe underwater. Most fish exchange gases like oxygen and carbon dioxide using gills that are protected under gill covers (operculum) on both sides of the pharynx (throat). Gills are tissues that are like short threads, protein structures called filaments. These filaments have many functions including the transfer of ions and water, as well as the exchange of oxygen, carbon dioxide, acids and ammonia. Each filament contains a capillary network that provides a large surface area for exchanging oxygen and carbon dioxide.
Fish exchange gases by pulling oxygen-rich water through their mouths and pumping it over their gills. Within the gill filaments, capillary blood flows in the opposite direction to the water, causing counter-current exchange. The gills push the oxygen-poor water out through openings in the sides of the pharynx. Some fish, like sharks and lampreys, possess multiple gill openings. However, bony fish have a single gill opening on each side. This opening is hidden beneath a protective bony cover called the operculum.
Juvenile bichirs have external gills, a very primitive feature that they share with larval amphibians.
Previously, the evolution of gills was thought to have occurred through two diverging lines: gills formed from the endoderm, as seen in jawless fish species, or those formed from the ectoderm, as seen in jawed fish. However, recent studies on gill formation of the little skate (Leucoraja erinacea) have shown potential evidence supporting the claim that gills of all current fish species have in fact evolved from a common ancestor.
Breathing with gills
Air breathing fish can be divided into obligate air breathers and facultative air breathers. Obligate air breathers, such as the African lungfish, are obligated to breathe air periodically or they suffocate. Facultative air breathers, such as the catfish Hypostomus plecostomus, only breathe air if they need to and can otherwise rely on their gills f
Document 3:::
Aquatic respiration is the process whereby an aquatic organism exchanges respiratory gases with water, obtaining oxygen from oxygen dissolved in water and excreting carbon dioxide and some other metabolic waste products into the water.
Unicellular and simple small organisms
In very small animals, plants and bacteria, simple diffusion of gaseous metabolites is sufficient for respiratory function and no special adaptations are found to aid respiration. Passive diffusion or active transport are also sufficient mechanisms for many larger aquatic animals such as many worms, jellyfish, sponges, bryozoans and similar organisms. In such cases, no specific respiratory organs or organelles are found.
Higher plants
Although higher plants typically use carbon dioxide and excrete oxygen during photosynthesis, they also respire and, particularly during darkness, many plants excrete carbon dioxide and require oxygen to maintain normal functions. In fully submerged aquatic higher plants, specialised structures such as stomata on leaf surfaces control gas interchange. In many species, these structures can be controlled to be open or closed depending on environmental conditions. In conditions of high light intensity and relatively high carbonate ion concentrations, oxygen may be produced in sufficient quantities to form gaseous bubbles on the surface of leaves and may produce oxygen super-saturation in the surrounding water body.
Animals
All animals that practice truly aquatic respiration are poikilothermic. All aquatic homeothermic animals and birds including cetaceans and penguins are air breathing despite a fully aquatic life-style.
Echinoderms
Echinoderms have a specialised water vascular system which provides a number of functions including providing the hydraulic power for tube feet but also serves to convey oxygenated sea water into the body and carry waste water out again. In many genera, the water enters through a madreporite, a sieve like structure on the upper surfac
Document 4:::
A rete mirabile (Latin for "wonderful net"; plural retia mirabilia) is a complex of arteries and veins lying very close to each other, found in some vertebrates, mainly warm-blooded ones. The rete mirabile utilizes countercurrent blood flow within the net (blood flowing in opposite directions) to act as a countercurrent exchanger. It exchanges heat, ions, or gases between vessel walls so that the two bloodstreams within the rete maintain a gradient with respect to temperature, or concentration of gases or solutes. This term was coined by Galen.
Effectiveness
The effectiveness of retia is primarily determined by how readily the heat, ions, or gases can be exchanged. For a given length, they are most effective with respect to gases or heat, then small ions, and decreasingly so with respect to other substances.
The retia can provide for extremely efficient exchanges. In bluefin tuna, for example, nearly all of the metabolic heat in the venous blood is transferred to the arterial blood, thus conserving muscle temperature; that heat exchange approaches 99% efficiency.
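To see why a long countercurrent arrangement can approach the near-total transfer quoted above, the sketch below uses the textbook effectiveness-NTU relation for a balanced counterflow exchanger; the NTU values are illustrative assumptions, not measured fish physiology.

```python
# Idealised balanced counterflow exchanger: effectiveness = NTU / (1 + NTU).
# NTU (number of transfer units) grows with exchanger length and contact area,
# so a long rete mirabile (large NTU) transfers nearly all of the heat or gas.
def counterflow_effectiveness(ntu: float) -> float:
    """Fraction of the maximum possible transfer achieved, for balanced streams."""
    return ntu / (1.0 + ntu)

for ntu in (1, 5, 20, 100):
    print(f"NTU = {ntu:>3}: effectiveness = {counterflow_effectiveness(ntu):.3f}")
# NTU = 100 already gives ~0.990, comparable to the ~99% quoted for bluefin tuna.
```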
Birds
In birds with webbed feet, retia mirabilia in the legs and feet transfer heat from the outgoing (hot) blood in the arteries to the incoming (cold) blood in the veins. The effect of this biological heat exchanger is that the internal temperature of the feet is much closer to the ambient temperature, thus reducing heat loss. Penguins also have them in the flippers and nasal passages.
Seabirds distill seawater using countercurrent exchange in a so-called salt gland with a rete mirabile. The gland secretes highly concentrated brine stored near the nostrils above the beak. The bird then "sneezes" the brine out. As freshwater is not usually available in their environments, some seabirds, such as pelicans, petrels, albatrosses, gulls and terns, possess this gland, which allows them to drink the salty water from their environments while they are hundreds of miles away from land.
Fish
Fish have evolv
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The majority of salamanders lack what organs, so respiration occurs through the skin or through external gills?
A. throats
B. lungs
C. noses
D. mouths
Answer:
|
|
sciq-145
|
multiple_choice
|
When a series of measurements is precise but not what, the error is usually systematic?
|
[
"length",
"velocity",
"color",
"accurate"
] |
D
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
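A minimal sketch of that structure with hypothetical skills and states (the domain and the feasible states below are illustrative assumptions, not taken from the article): a knowledge space is a family of feasible states, each a subset of the domain, that contains the empty state and the whole domain and is closed under union; an antimatroid additionally requires that every non-empty state can be built up one skill at a time.

```python
from itertools import combinations

# Hypothetical toy domain of skills and family of feasible knowledge states.
domain = frozenset({"counting", "addition", "multiplication"})
states = {
    frozenset(),
    frozenset({"counting"}),
    frozenset({"counting", "addition"}),
    frozenset({"counting", "addition", "multiplication"}),
}

def is_knowledge_space(states, domain):
    """Empty state and full domain are feasible, and feasibility is closed under union."""
    if frozenset() not in states or domain not in states:
        return False
    return all(a | b in states for a, b in combinations(states, 2))

def is_antimatroid(states):
    """Every non-empty state has a skill whose removal still leaves a feasible state."""
    return all(any(s - {skill} in states for skill in s) for s in states if s)

print(is_knowledge_space(states, domain))  # True
print(is_antimatroid(states))              # True: each state is reachable one skill at a time
```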
Document 2:::
The Texas Math and Science Coaches Association or TMSCA is an organization for coaches of academic University Interscholastic League teams in Texas middle schools and high schools, specifically those that compete in mathematics and science-related tests.
Events
There are four events in the TMSCA at both the middle and high school level: Number Sense, General Mathematics, Calculator Applications, and General Science.
Number Sense is an 80-question exam that students are given only 10 minutes to solve. Additionally, no scratch work or paper calculations are allowed. These questions range from simple calculations such as 99+98 to more complicated operations such as 1001×1938. Each calculation can be done with a certain trick or shortcut that makes it easier, as in the sketch below.
The high school exam includes calculus and other difficult topics, with the same rules applied as in the middle school version.
It is well known that the grading for this event is particularly stringent: errors such as writing over a line or crossing out potential answers are counted as incorrect answers.
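As an illustration of the kind of shortcut rewarded here (the excerpt does not spell out the tricks, so these are just well-known examples rather than official test techniques):

```python
# Two classic mental-math shortcuts of the kind used on Number Sense tests.
# Multiplying by 1001 is "shift by three digits and add the number back":
n = 1938
print(n * 1000 + n == n * 1001)          # True: 1,938,000 + 1,938 = 1,939,938

# Adding numbers just below 100: round both up to 100 and subtract the excess.
print(99 + 98 == 100 + 100 - 1 - 2)      # True: 197
```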
General Mathematics is a 50-question exam that students are given only 40 minutes to solve. These problems are usually more challenging than questions on the Number Sense test, and the General Mathematics word problems take more thinking to figure out. Each correct answer is worth 5 points, and 2 points are deducted for each incorrect answer. Tiebreakers are determined by which problem was missed first and by percent accuracy.
Calculator Applications is an 80-question exam that students are given only 30 minutes to solve. This test requires practice on the calculator, knowledge of a few crucial formulas, and much speed and intensity. Memorizing formulas, tips, and tricks will not be enough. In this event, plenty of practice is necessary in order to master the locations of the keys and develop the speed necessary. All correct questions are worth 5
Document 3:::
Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria.
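The excerpt does not specify the algorithm, but a common way to turn such pairwise judgements into a scale is a Bradley-Terry style fit, in which each script gets a latent quality parameter and the probability of being preferred depends on the difference of parameters. The sketch below, with made-up judgement data, estimates those parameters by plain gradient ascent; it illustrates the general idea rather than the specific adaptive procedure used in ACJ.

```python
import math

# Made-up pairwise judgements: (winner, loser) pairs of student scripts.
judgments = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "D"), ("B", "D")]
scripts = sorted({s for pair in judgments for s in pair})
theta = {s: 0.0 for s in scripts}        # latent quality on a log scale

def win_prob(winner, loser):
    """Bradley-Terry: P(winner preferred) = 1 / (1 + exp(theta_loser - theta_winner))."""
    return 1.0 / (1.0 + math.exp(theta[loser] - theta[winner]))

learning_rate, shrinkage = 0.1, 0.01     # shrinkage keeps scores finite for unbeaten scripts
for _ in range(500):                     # gradient ascent on the (penalised) log-likelihood
    grad = {s: -shrinkage * theta[s] for s in scripts}
    for w, l in judgments:
        p = win_prob(w, l)
        grad[w] += 1.0 - p
        grad[l] -= 1.0 - p
    for s in scripts:
        theta[s] += learning_rate * grad[s]

mean = sum(theta.values()) / len(theta)  # centre the scale: only differences matter
print({s: round(v - mean, 2) for s, v in sorted(theta.items())})  # A highest, D lowest
```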
Introduction
Traditional exam script marking began in Cambridge in 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.)
Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental.
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel
Document 4:::
Abbe error, named after Ernst Abbe, also called sine error, describes the magnification of angular error over distance. For example, when one measures a point that is 1 meter away at 45 degrees, an angular error of 1 degree corresponds to a positional error of over 1.745 cm, equivalent to a distance-measurement error of 1.745%.
In machine design, some components are particularly sensitive to angular errors. For example, slight deviations from parallelism of the spindle axis of a lathe to the tool motion along the bed of the machine can lead to relatively large (undesired) taper along the part (i.e. a non-cylindrical part). Vernier calipers are not free from Abbe error, while screw gauges are free from it. Abbe error is the product of the Abbe offset and the sine of the angular error in the system.
Abbe error can be detrimental to dead reckoning.
Formula: ε = h · sin(θ), where ε is the error, h is the distance (the Abbe offset), and θ is the angle.
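A quick numerical check of the 1-metre example above, using the formula just stated (the variable names are only for illustration):

```python
import math

# Abbe (sine) error: positional error = offset distance * sin(angular error).
offset_m = 1.0                      # metres between measurement axis and point of interest
angle_error_deg = 1.0               # degrees of angular error
error_m = offset_m * math.sin(math.radians(angle_error_deg))

print(f"positional error: {error_m * 100:.3f} cm")   # ~1.745 cm
print(f"relative error:   {error_m / offset_m:.3%}")  # ~1.745 %
```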
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When a series of measurements is precise but not what, the error is usually systematic?
A. length
B. velocity
C. color
D. accurate
Answer:
|
|
sciq-4224
|
multiple_choice
|
What anatomical structure serves as the conduit of the oocyte from the ovary to the uterus?
|
[
"uterine tubes",
"urethra",
"ureters",
"nephrons"
] |
A
|
Relavent Documents:
Document 0:::
The fallopian tubes, also known as uterine tubes, oviducts or salpinges (singular: salpinx), are paired tubes in the human female body that stretch from the uterus to the ovaries. The fallopian tubes are part of the female reproductive system. In other mammals they are only called oviducts.
Each tube is a muscular hollow organ that is on average between in length, with an external diameter of . It has four described parts: the intramural part, isthmus, ampulla, and infundibulum with associated fimbriae. Each tube has two openings: a proximal opening nearest to and opening into the uterus, and a distal opening furthest from the uterus and opening into the abdomen. The fallopian tubes are held in place by the mesosalpinx, a part of the broad ligament mesentery that wraps around the tubes. Another part of the broad ligament, the mesovarium, suspends the ovaries in place.
An egg cell is transported from an ovary to a fallopian tube where it may be fertilized in the ampulla of the tube. The fallopian tubes are lined with simple columnar epithelium with hairlike extensions called cilia which, together with peristaltic contractions from the muscular layer, move the fertilized egg (zygote) along the tube. On its journey to the uterus the zygote undergoes cell divisions that change it into a blastocyst, an early embryo, in readiness for implantation.
Almost a third of cases of infertility are caused by fallopian tube pathologies. These include inflammation, and tubal obstructions. A number of tubal pathologies cause damage to the cilia of the tube which can impede movement of the sperm or egg.
The name comes from the Italian Catholic priest and anatomist Gabriele Falloppio, for whom other anatomical structures are also named.
Structure
Each fallopian tube leaves the uterus at an opening at the uterine horns known as the proximal tubal opening or proximal ostium. The tubes have an average length of that includes the intramural part of the tube. The tubes extend to near the ovaries where they open into
Document 1:::
The placenta of humans, and certain other mammals contains structures known as cotyledons, which transmit fetal blood and allow exchange of oxygen and nutrients with the maternal blood.
Ruminants
The Artiodactyla have a cotyledonary placenta. In this form of placenta the chorionic villi form a number of separate circular structures (cotyledons) which are distributed over the surface of the chorionic sac. Sheep, goats and cattle have between 72 and 125 cotyledons whereas deer have 4-6 larger cotyledons.
Human
The form of the human placenta is generally classified as a discoid placenta. Within this the cotyledons are the approximately 15-25 separations of the decidua basalis of the placenta, separated by placental septa. Each cotyledon consists of a main stem of a chorionic villus as well as its branches and sub-branches.
Vasculature
The cotyledons receive fetal blood from chorionic vessels, which branch off cotyledon vessels into the cotyledons, which, in turn, branch into capillaries. The cotyledons are surrounded by maternal blood, which can exchange oxygen and nutrients with the fetal blood in the capillaries.
Document 2:::
The larger ovarian follicles consist of an external fibrovascular coat, connected with the surrounding stroma of the ovary by a network of blood vessels, and an internal coat, which consists of several layers of nucleated cells, called the membrana granulosa. It contains numerous granulosa cells.
At one part of the mature follicle the cells of the membrana granulosa are collected into a mass which projects into the cavity of the follicle. This is termed the discus proligerus.
Document 3:::
The stroma of the ovary is a unique type of connective tissue abundantly supplied with blood vessels, consisting for the most part of spindle-shaped stroma cells. These appear similar to fibroblasts. The stroma also contains ordinary connective tissue such as reticular fibers and collagen. Ovarian stroma differs from typical connective tissue in that it contains a high number of cells. The stroma cells are distributed in such a way that the tissue appears to be whorled. Stromal cells associated with maturing follicles may acquire endocrine function and secrete estrogens. The entire ovarian stroma is highly vascular.
On the surface of the organ this tissue is much condensed, and forms a layer (tunica albuginea) composed of short connective-tissue fibers, with fusiform cells between them.
The stroma of the ovary may contain interstitial cells resembling those of the testis.
See also
stroma (disambiguation)
Stromal cell
Sex cord–gonadal stromal tumour
Document 4:::
The medulla of ovary (or Zona vasculosa of Waldeyer) is a highly vascular stroma in the center of the ovary. It forms from embryonic mesenchyme and contains blood vessels, lymphatic vessels, and nerves.
This stroma forms the tissue of the hilum by which the ovarian ligament is attached, and through which the blood vessels enter: it does not contain any ovarian follicles.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What anatomical structure serves as the conduit of the oocyte from the ovary to the uterus?
A. uterine tubes
B. urethra
C. ureters
D. nephrons
Answer:
|
|
sciq-10508
|
multiple_choice
|
What is the use of technology to change the genetic makeup of living things for human purposes?
|
[
"biological utility",
"genetic engineering",
"biological engineering",
"genetic employing"
] |
B
|
Relavent Documents:
Document 0:::
Biotechnology is a multidisciplinary field that involves the integration of natural sciences and engineering sciences in order to achieve the application of organisms, cells, parts thereof and molecular analogues for products and services.
The term biotechnology was first used by Károly Ereky in 1919, to refer to the production of products from raw materials with the aid of living organisms. The core principle of biotechnology involves harnessing biological systems and organisms, such as bacteria, yeast, and plants, to perform specific tasks or produce valuable substances.
Biotechnology had a significant impact on many areas of society, from medicine to agriculture to environmental science. One of the key techniques used in biotechnology is genetic engineering, which allows scientists to modify the genetic makeup of organisms to achieve desired outcomes. This can involve inserting genes from one organism into another, creating new traits or modifying existing ones.
Other important techniques used in biotechnology include tissue culture, which allows researchers to grow cells and tissues in the lab for research and medical purposes, and fermentation, which is used to produce a wide range of products such as beer, wine, and cheese.
The applications of biotechnology are diverse and have led to the development of essential products like life-saving drugs, biofuels, genetically modified crops, and innovative materials. It has also been used to address environmental challenges, such as developing biodegradable plastics and using microorganisms to clean up contaminated sites.
Biotechnology is a rapidly evolving field with significant potential to address pressing global challenges and improve the quality of life for people around the world; however, despite its numerous benefits, it also poses ethical and societal challenges, such as questions around genetic modification and intellectual property rights. As a result, there is ongoing debate and regulation surroundin
Document 1:::
Genetically modified agriculture includes:
Genetically modified crops
Genetically modified livestock
Genetic engineering
Genetically modified organisms
Document 2:::
Biological engineering or bioengineering is the application of principles of biology and the tools of engineering to create usable, tangible, economically viable products. Biological engineering employs knowledge and expertise from a number of pure and applied sciences, such as mass and heat transfer, kinetics, biocatalysts, biomechanics, bioinformatics, separation and purification processes, bioreactor design, surface science, fluid mechanics, thermodynamics, and polymer science. It is used in the design of medical devices, diagnostic equipment, biocompatible materials, renewable energy, ecological engineering, agricultural engineering, process engineering and catalysis, and other areas that improve the living standards of societies.
Examples of bioengineering research include bacteria engineered to produce chemicals, new medical imaging technology, portable and rapid disease diagnostic devices, prosthetics, biopharmaceuticals, and tissue-engineered organs. Bioengineering overlaps substantially with biotechnology and the biomedical sciences in a way analogous to how various other forms of engineering and technology relate to various other sciences (such as aerospace engineering and other space technology to kinetics and astrophysics).
In general, biological engineers attempt to either mimic biological systems to create products, or to modify and control biological systems. Working with doctors, clinicians, and researchers, bioengineers use traditional engineering principles and techniques to address biological processes, including ways to replace, augment, sustain, or predict chemical and mechanical processes.
History
Biological engineering is a science-based discipline founded upon the biological sciences in the same way that chemical engineering, electrical engineering, and mechanical engineering can be based upon chemistry, electricity and magnetism, and classical mechanics, respectively.
Before WWII, biological engineering had begun being recognized as a
Document 3:::
Biotechnology is a technology based on biology, especially when used in agriculture, food science, and medicine.
Of the many different definitions available, the one formulated by the UN Convention on Biological Diversity is one of the broadest:
"Biotechnology means any technological application that uses biological systems, living organisms, or derivatives thereof, to make or modify products or processes for specific use." (Article 2. Use of Terms)
More about Biotechnology...
This page provides an alphabetical list of articles and other pages (including categories, lists, etc.) about biotechnology.
A
Agrobacterium -- Affymetrix -- Alcoholic beverages -- :Category:Alcoholic beverages -- Amgen -- Antibiotic -- Artificial selection
B
Biochemical engineering -- Biochip -- Biodiesel -- Bioengineering -- Biofuel -- Biogas -- Biogen Idec -- Bioindicator -- Bioinformatics -- :Category:Bioinformatics -- Bioleaching -- Biological agent -- Biological warfare -- Bioluminescence -- Biomimetics -- Bionanotechnology -- Bionics --Biopharmacology -- Biophotonics -- Bioreactor -- Bioremediation -- Biostimulation -- Biosynthesis -- Biotechnology -- :Category:Biotechnology -- :Category:Biotechnology companies -- :Category:Biotechnology products -- Bt corn
C
Cancer immunotherapy -- Cell therapy -- Chimera (genetics) -- Chinese hamster -- Chinese Hamster Ovary cell -- Chiron Corp. -- Cloning -- Compost -- Composting -- Convention on Biological Diversity -- Chromatography
D
Directive on the patentability of biotechnological inventions -- DNA microarray -- Dwarfing
E
Enzymes -- Electroporation -- Environmental biotechnology -- Eugenics
F
Fermentation -- :Category:Fermented foods
G
Gene knockout -- Gene therapy -- Genentech -- Genetic engineering -- Genetically modified crops --Genetically modified food -- Genetically modified food controversies -- Genetically modified organisms -- Genetics -- Genomics -- Genzyme -- Global Knowledge Center on Crop Biotechnology - Glycomic
Document 4:::
Genetics (from Ancient Greek genetikos, “genitive”, and that from genesis, “origin”), a discipline of biology, is the science of heredity and variation in living organisms.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the use of technology to change the genetic makeup of living things for human purposes?
A. biological utility
B. genetic engineering
C. biological engineering
D. genetic employing
Answer:
|
|
sciq-9747
|
multiple_choice
|
All cells need what for energy?
|
[
"light",
"insulin",
"oxygen",
"glucose"
] |
D
|
Relavent Documents:
Document 0:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, a double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 1:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specified function and carry out various tasks within the cell such as replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and mobility within the cell.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i
Document 2:::
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States.
Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is suspended with the message:
"Please check back with us in 2017".
External links
MicrobeLibrary
Microbiology
Document 3:::
This lecture, named in memory of Keith R. Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year.
Lecturers
Source: ASCB
See also
List of biology awards
Document 4:::
Bioenergetics is a field in biochemistry and cell biology that concerns energy flow through living systems. This is an active area of biological research that includes the study of the transformation of energy in living organisms and the study of thousands of different cellular processes such as cellular respiration and the many other metabolic and enzymatic processes that lead to production and utilization of energy in forms such as adenosine triphosphate (ATP) molecules. That is, the goal of bioenergetics is to describe how living organisms acquire and transform energy in order to perform biological work. The study of metabolic pathways is thus essential to bioenergetics.
Overview
Bioenergetics is the part of biochemistry concerned with the energy involved in making and breaking of chemical bonds in the molecules found in biological organisms. It can also be defined as the study of energy relationships and energy transformations and transductions in living organisms. The ability to harness energy from a variety of metabolic pathways is a property of all living organisms. Growth, development, anabolism and catabolism are some of the central processes in the study of biological organisms, because the role of energy is fundamental to such biological processes. Life is dependent on energy transformations; living organisms survive because of exchange of energy between living tissues/ cells and the outside environment. Some organisms, such as autotrophs, can acquire energy from sunlight (through photosynthesis) without needing to consume nutrients and break them down. Other organisms, like heterotrophs, must intake nutrients from food to be able to sustain energy by breaking down chemical bonds in nutrients during metabolic processes such as glycolysis and the citric acid cycle. Importantly, as a direct consequence of the First Law of Thermodynamics, autotrophs and heterotrophs participate in a universal metabolic network—by eating autotrophs (plants), heterotrophs ha
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
All cells need what for energy?
A. light
B. insulin
C. oxygen
D. glucose
Answer:
|
|
scienceQA-5936
|
multiple_choice
|
Select the animal.
|
[
"Coconut trees have large, thin leaves.",
"Basil has green leaves.",
"Yaks eat plants.",
"Orange trees can grow fruit."
] |
C
|
An orange tree is a plant. It can grow fruit.
Orange trees grow in sunny, warm places. They can be damaged by cold weather.
Basil is a plant. It has green leaves.
Basil leaves are used in cooking.
A yak is an animal. It eats plants.
Yaks live in cold places. Their long hair helps keep them warm.
A coconut tree is a plant. It has large, thin leaves.
Coconut trees grow in warm, rainy places.
|
Relavent Documents:
Document 0:::
What a Plant Knows is a popular science book by Daniel Chamovitz, originally published in 2012, discussing the sensory system of plants. A revised edition was published in 2017.
Release details / Editions / Publication
Hardcover edition, 2012
Paperback version, 2013
Revised edition, 2017
What a Plant Knows has been translated and published in a number of languages.
Document 1:::
Arboreal locomotion is the locomotion of animals in trees. In habitats in which trees are present, animals have evolved to move in them. Some animals may scale trees only occasionally, but others are exclusively arboreal. The habitats pose numerous mechanical challenges to animals moving through them and lead to a variety of anatomical, behavioral and ecological consequences as well as variations throughout different species. Furthermore, many of these same principles may be applied to climbing without trees, such as on rock piles or mountains.
Some animals are exclusively arboreal in habitat, such as the tree snail.
Biomechanics
Arboreal habitats pose numerous mechanical challenges to animals moving in them, which have been solved in diverse ways. These challenges include moving on narrow branches, moving up and down inclines, balancing, crossing gaps, and dealing with obstructions.
Diameter
Moving along narrow surfaces, such as a branch of a tree, can create special difficulties for animals who are not adapted to deal with balancing on small diameter substrates. During locomotion on the ground, the location of the center of mass may swing from side to side. But during arboreal locomotion, this would result in the center of mass moving beyond the edge of the branch, resulting in a tendency to topple over and fall. Not only do some arboreal animals have to be able to move on branches of varying diameter, but they also have to eat on these branches, resulting in the need for the ability to balance while using their hands to feed themselves. This resulted in various types of grasping such as pedal grasping in order to clamp themselves onto small branches for better balance.
Incline
Branches are frequently oriented at an angle to gravity in arboreal habitats, including being vertical, which poses special problems. As an animal moves up an inclined branch, it must fight the force of gravity to raise its body, making the movement more difficult. To get past thi
Document 2:::
In botany, a virtual herbarium is a herbarium in a digitized form. That is, it concerns a collection of digital images of preserved plants or plant parts. Virtual herbaria often are established to improve availability of specimens to a wider audience. However, there are digital herbaria that are not suitable for internet access because of the high resolution of scans and resulting large file sizes (several hundred megabytes per file). Additional information about each specimen, such as the location, the collector, and the botanical name are attached to every specimen. Frequently, further details such as related species and growth requirements are mentioned.
Specimen imaging
The standard hardware used for herbarium specimen imaging is the "HerbScan" scanner. It is an inverted flat-bed scanner which raises the specimen up to the scanning surface. This technology was developed because it is standard practice to never turn a herbarium specimen upside-down. Alternatively, some herbaria employ a flat-bed book scanner or a copy stand to achieve the same effect.
A small color chart and a ruler must be included on a herbarium sheet when it is imaged. The JSTOR Plant Science requires that the ruler bears the herbarium name and logo, and that a ColorChecker chart is used for any specimens to be contributed to the Global Plants Initiative (GPI).
Uses
Virtual herbaria are established in part to increase the longevity of specimens. Major herbaria participate in international loan programs, where a researcher can request specimens to be shipped in for study. This shipping contributes to the wear and tear of specimens. If, however, digital images are available, images of the specimens can be sent electronically. These images may be a sufficient substitute for the specimens themselves, or alternatively, the researcher can use the images to "preview" the specimens, to which ones should be sent out for further study. This process cuts down on the shipping, and thus the wear and
Document 3:::
Chard or Swiss chard (; Beta vulgaris subsp. vulgaris, Cicla Group and Flavescens Group) is a green leafy vegetable. In the cultivars of the Flavescens Group, the leaf stalks are large and often prepared separately from the leaf blade; the Cicla Group is the leafy spinach beet. The leaf blade can be green or reddish; the leaf stalks are usually white, yellow or red.
Chard, like other green leafy vegetables, has highly nutritious leaves. Chard has been used in cooking for centuries, but because it is the same species as beetroot, the common names that cooks and cultures have used for chard may be confusing; it has many common names, such as silver beet, perpetual spinach, beet spinach, seakale beet, or leaf beet.
Classification
Chard was first described in 1753 by Carl Linnaeus as Beta vulgaris var. cicla. Its taxonomic rank has changed many times: it has been treated as a subspecies, a convariety, and a variety of Beta vulgaris. (Among the numerous synonyms for it are Beta vulgaris subsp. cicla (L.) W.D.J. Koch (Cicla Group), B. vulgaris subsp. cicla (L.) W.D.J. Koch var. cicla L., B. vulgaris var. cycla (L.) Ulrich, B. vulgaris subsp. vulgaris (Leaf Beet Group), B. vulgaris subsp. vulgaris (Spinach Beet Group), B. vulgaris subsp. cicla (L.) W.D.J. Koch (Flavescens Group), B. vulgaris subsp. cicla (L.) W.D.J. Koch var. flavescens (Lam.) DC., B. vulgaris L. subsp. vulgaris (Leaf Beet Group), B. vulgaris subsp. vulgaris (Swiss Chard Group)). The accepted name for all beet cultivars, like chard, sugar beet and beetroot, is Beta vulgaris subsp. vulgaris. They are cultivated descendants of the sea beet, Beta vulgaris subsp. maritima. Chard belongs to the chenopods, which are now mostly included in the family Amaranthaceae (sensu lato).
Document 4:::
Tolerance is the ability of plants to mitigate the negative fitness effects caused by herbivory. It is one of the general plant defense strategies against herbivores, the other being resistance, which is the ability of plants to prevent damage (Strauss and Agrawal 1999). Plant defense strategies play important roles in the survival of plants as they are fed upon by many different types of herbivores, especially insects, which may impose negative fitness effects (Strauss and Zangerl 2002). Damage can occur in almost any part of the plants, including the roots, stems, leaves, flowers and seeds (Strauss and Zergerl 2002). In response to herbivory, plants have evolved a wide variety of defense mechanisms and although relatively less studied than resistance strategies, tolerance traits play a major role in plant defense (Strauss and Zergerl 2002, Rosenthal and Kotanen 1995).
Traits that confer tolerance are controlled genetically and therefore are heritable traits under selection (Strauss and Agrawal 1999). Many factors intrinsic to the plants, such as growth rate, storage capacity, photosynthetic rates and nutrient allocation and uptake, can affect the extent to which plants can tolerate damage (Rosenthal and Kotanen 1994). Extrinsic factors such as soil nutrition, carbon dioxide levels, light levels, water availability and competition also have an effect on tolerance (Rosenthal and Kotanen 1994).
History of the study of plant tolerance
Studies of tolerance to herbivory has historically been the focus of agricultural scientists (Painter 1958; Bardner and Fletcher 1974). Tolerance was actually initially classified as a form of resistance (Painter 1958). Agricultural studies on tolerance, however, are mainly concerned with the compensatory effect on the plants' yield and not its fitness, since it is of economical interest to reduce crop losses due to herbivory by pests (Trumble 1993; Bardner and Fletcher 1974). One surprising discovery made about plant tolerance was th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Select the animal.
A. Coconut trees have large, thin leaves.
B. Basil has green leaves.
C. Yaks eat plants.
D. Orange trees can grow fruit.
Answer:
|
sciq-10498
|
multiple_choice
|
Kinetic energy of moving particles of matter, measured by their temperatures are known as:
|
[
"thermal energy",
"solar energy",
"visible energy",
"atmospheric energy"
] |
A
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
A. increases
B. decreases
C. stays the same
D. Impossible to tell/need more information
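As a rough numerical illustration of the example question above (not part of the original article), the sketch below applies the reversible adiabatic relation T * V^(gamma - 1) = constant for an ideal gas; the gamma value and starting state are assumed for demonstration only.

```python
# Illustrative sketch: a reversible adiabatic expansion of an ideal gas obeys
# T * V**(gamma - 1) = constant, so doubling the volume must lower the temperature.
# gamma = 1.4 (diatomic gas) and the starting conditions are assumed values.

def adiabatic_final_temperature(t1_kelvin, v1, v2, gamma=1.4):
    """Final temperature after a reversible adiabatic volume change from v1 to v2."""
    return t1_kelvin * (v1 / v2) ** (gamma - 1)

print(f"{adiabatic_final_temperature(300.0, v1=1.0, v2=2.0):.1f} K")  # ~227.4 K, i.e. it decreases
```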
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
Document 2:::
Applied physics is the application of physics to solve scientific or engineering problems. It is usually considered a bridge or a connection between physics and engineering.
"Applied" is distinguished from "pure" by a subtle combination of factors, such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. Applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of scientific principles in practical devices and systems and with the application of physics in other areas of science and high technology.
Examples of research and development areas
Accelerator physics
Acoustics
Atmospheric physics
Biophysics
Brain–computer interfacing
Chemistry
Chemical physics
Differentiable programming
Artificial intelligence
Scientific computing
Engineering physics
Chemical engineering
Electrical engineering
Electronics
Sensors
Transistors
Materials science and engineering
Metamaterials
Nanotechnology
Semiconductors
Thin films
Mechanical engineering
Aerospace engineering
Astrodynamics
Electromagnetic propulsion
Fluid mechanics
Military engineering
Lidar
Radar
Sonar
Stealth technology
Nuclear engineering
Fission reactors
Fusion reactors
Optical engineering
Photonics
Cavity optomechanics
Lasers
Photonic crystals
Geophysics
Materials physics
Medical physics
Health physics
Radiation dosimetry
Medical imaging
Magnetic resonance imaging
Radiation therapy
Microscopy
Scanning probe microscopy
Atomic force microscopy
Scanning tunneling microscopy
Scanning electron microscopy
Transmission electron microscopy
Nuclear physics
Fission
Fusion
Optical physics
Nonlinear optics
Quantum optics
Plasma physics
Quantum technology
Quantum computing
Quantum cryptography
Renewable energy
Space physics
Spectroscopy
See also
Applied science
Applied mathematics
Engineering
Engineering Physics
High Technology
Document 3:::
Energy transformation, also known as energy conversion, is the process of changing energy from one form to another. In physics, energy is a quantity that provides the capacity to perform work or moving (e.g. lifting an object) or provides heat. In addition to being converted, according to the law of conservation of energy, energy is transferable to a different location or object, but it cannot be created or destroyed.
The energy in many of its forms may be used in natural processes, or to provide some service to society such as heating, refrigeration, lighting or performing mechanical work to operate machines. For example, to heat a home, the furnace burns fuel, whose chemical potential energy is converted into thermal energy, which is then transferred to the home's air to raise its temperature.
Limitations in the conversion of thermal energy
Conversions to thermal energy from other forms of energy may occur with 100% efficiency. Conversion among non-thermal forms of energy may occur with fairly high efficiency, though there is always some energy dissipated thermally due to friction and similar processes. Sometimes the efficiency is close to 100%, such as when potential energy is converted to kinetic energy as an object falls in a vacuum. This also applies to the opposite case; for example, an object in an elliptical orbit around another body converts its kinetic energy (speed) into gravitational potential energy (distance from the other object) as it moves away from its parent body. When it reaches the furthest point, it will reverse the process, accelerating and converting potential energy into kinetic. Since space is a near-vacuum, this process has close to 100% efficiency.
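As a small worked illustration of the near-lossless potential-to-kinetic conversion described above (the height, names, and values below are assumptions, not taken from the text), a drop from rest in vacuum gives m*g*h = 0.5*m*v**2, so the impact speed depends only on the fall height:

```python
# Minimal sketch: gravitational potential energy converted to kinetic energy for an
# object falling from rest in vacuum, where m*g*h = 0.5*m*v**2 (no drag losses).
import math

def impact_speed(height_m, g=9.81):
    """Speed after falling from rest through height_m metres in vacuum."""
    return math.sqrt(2 * g * height_m)

print(f"{impact_speed(10.0):.2f} m/s")  # ~14.0 m/s after a 10 m fall
```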
Thermal energy is unique because in most cases it cannot be converted to other forms of energy. Only a difference in the density of thermal/heat energy (temperature) can be used to perform work, and the efficiency of this conversion will be (much) less than 100%. This is because t
Document 4:::
Specific kinetic energy is the kinetic energy of an object per unit of mass.
It is defined as e_k = v^2 / 2, where e_k is the specific kinetic energy and v is the velocity. It has units of J/kg, which is equivalent to m^2/s^2.
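A minimal sketch of the definition above (the function name is illustrative, not from the source):

```python
# Specific kinetic energy: kinetic energy per unit mass, e_k = v**2 / 2, in J/kg.

def specific_kinetic_energy(velocity_m_s):
    """Kinetic energy per unit mass for a given speed in m/s."""
    return 0.5 * velocity_m_s ** 2

print(specific_kinetic_energy(10.0))  # 50.0 J/kg at a speed of 10 m/s
```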
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Kinetic energy of moving particles of matter, measured by their temperatures are known as:
A. thermal energy
B. solar energy
C. visible energy
D. atmospheric energy
Answer:
|
|
sciq-1561
|
multiple_choice
|
What contain organelles common to other cells, such as a nucleus and mitochondria, and also have more specialized structures, including dendrites and axons?
|
[
"neurons",
"blood cells",
"muscle cells",
"follicles"
] |
A
|
Relavent Documents:
Document 0:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
Document 1:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, double stranded macromolecule that carries the hereditary information of the cell and found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 2:::
In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from same type cells to act together in a function. Tissues of different types combine to form an organ which has a specific function. The intestinal wall for example is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system.
An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs.
The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body.
Animals
Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam
Document 3:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
Document 4:::
Vertebrates
Tendon cells, or tenocytes, are elongated fibroblast type cells. The cytoplasm is stretched between the collagen fibres of the tendon. They have a central cell nucleus with a prominent nucleolus. Tendon cells have a well-developed rough endoplasmic reticulum and they are responsible for synthesis and turnover of tendon fibres and ground substance.
Invertebrates
Tendon cells form a connecting epithelial layer between the muscle and shell in molluscs. In gastropods, for example, the retractor muscles connect to the shell via tendon cells. Muscle cells are attached to the collagenous myo-tendon space via hemidesmosomes. The myo-tendon space is then attached to the base of the tendon cells via basal hemidesmosomes, while apical hemidesmosomes, which sit atop microvilli, attach the tendon cells to a thin layer of collagen. This is in turn attached to the shell via organic fibres which insert into the shell. Molluscan tendon cells appear columnar and contain a large basal cell nucleus. The cytoplasm is filled with granular endoplasmic reticulum and sparse golgi. Dense bundles of microfilaments run the length of the cell connecting the basal to the apical hemidesmosomes.
See also
List of human cell types derived from the germ layers
List of distinct cell types in the adult human body
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What contain organelles common to other cells, such as a nucleus and mitochondria, and also have more specialized structures, including dendrites and axons?
A. neurons
B. blood cells
C. muscle cells
D. follicles
Answer:
|
|
sciq-3703
|
multiple_choice
|
What effect is caused by air moving over the earth's surface as it spins?
|
[
"dopple effect",
"pruett effect",
"mazinho effect",
"coriolis effect"
] |
D
|
Relavent Documents:
Document 0:::
Atmospheric circulation of a planet is largely specific to the planet in question and the study of atmospheric circulation of exoplanets is a nascent field as direct observations of exoplanet atmospheres are still quite sparse. However, by considering the fundamental principles of fluid dynamics and imposing various limiting assumptions, a theoretical understanding of atmospheric motions can be developed. This theoretical framework can also be applied to planets within the Solar System and compared against direct observations of these planets, which have been studied more extensively than exoplanets, to validate the theory and understand its limitations as well.
The theoretical framework first considers the Navier–Stokes equations, the governing equations of fluid motion. Then, limiting assumptions are imposed to produce simplified models of fluid motion specific to large scale motion atmospheric dynamics. These equations can then be studied for various conditions (i.e. fast vs. slow planetary rotation rate, stably stratified vs. unstably stratified atmosphere) to see how a planet's characteristics would impact its atmospheric circulation. For example, a planet may fall into one of two regimes based on its rotation rate: geostrophic balance or cyclostrophic balance.
Atmospheric motions
Coriolis force
When considering atmospheric circulation we tend to take the planetary body as the frame of reference. In fact, this is a non-inertial frame of reference which has acceleration due to the planet's rotation about its axis. Coriolis force is the force that acts on objects moving within the planetary frame of reference, as a result of the planet's rotation. Mathematically, the acceleration due to Coriolis force can be written as:
a_C = -2 Ω × u
where
u is the flow velocity
Ω is the planet's angular velocity vector
This force acts perpendicular to the flow and velocity and the planet's angular velocity vector, and comes into play when considering the atmospheric motion of a rotat
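A hedged numerical sketch of the Coriolis term above, evaluated in a local east-north-up frame for Earth-like rotation; the 45 degree latitude and the 10 m/s eastward flow are assumed sample values chosen only for illustration.

```python
# Coriolis acceleration a_C = -2 * Omega x u, in a local east-north-up frame.
# Earth's rotation rate and the sample flow below are illustrative assumptions.
import numpy as np

OMEGA_EARTH = 7.2921e-5  # rad/s

def coriolis_acceleration(flow_velocity_enu, latitude_deg, omega=OMEGA_EARTH):
    """Return -2 * Omega x u for a flow vector given as [east, north, up] in m/s."""
    lat = np.radians(latitude_deg)
    omega_vec = omega * np.array([0.0, np.cos(lat), np.sin(lat)])
    return -2.0 * np.cross(omega_vec, flow_velocity_enu)

u = np.array([10.0, 0.0, 0.0])  # 10 m/s eastward flow
print(coriolis_acceleration(u, latitude_deg=45.0))  # southward deflection plus an upward (Eotvos) term
```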
Document 1:::
Spindrift (more rarely spoondrift) is the spray blown from cresting waves during a gale. This spray, which "drifts" in the direction of the gale, is one of the characteristics of a wind speed of 8 Beaufort and higher at sea. In Greek and Roman mythology, Leucothea was the goddess of spindrift.
Terminology
Spindrift is derived from the Scots language, but its further etymology is uncertain. Although the Oxford English Dictionary suggests it is a variant of spoondrift based on the way that word was pronounced in southwest Scotland, from spoon or spoom ("to sail briskly with the wind astern, with or without sails hoisted") and drift ("a mass of matter driven or forced onward together in a body, etc., especially by wind or water"), this is doubted by the because spoondrift is attested later than spindrift and it seems unlikely that the Scots spelling would have superseded the English one, and because the early use of the word in the form spenedrift by James Melville (1556–1614) is unlikely to have derived from spoondrift. In any case, spindrift was popularized in England through its use in the novels of the Scottish-born author William Black (1841–1898).
In the 1940s U.S. Navy, spindrift and spoondrift appear to have been used for different phenomena, as in the following record by the captain of the : "Visibility – which had been fair on the surface after moonrise – was now exceedingly poor due to spoondrift. Would that it were only the windblown froth of spindrift rather than the wind-driven cloudburst of water lashing the periscope exit eyepiece."
Spindrift or spoondrift is also used to refer to fine sand or snow that is blown off the ground by the wind.
Document 2:::
In fluid dynamics, a secondary circulation or secondary flow is a weak circulation that plays a key maintenance role in sustaining a stronger primary circulation that contains most of the kinetic energy and momentum of a flow. For example, a tropical cyclone's primary winds are tangential (horizontally swirling), but its evolution and maintenance against friction involves an in-up-out secondary circulation flow that is also important to its clouds and rain. On a planetary scale, Earth's winds are mostly east–west or zonal, but that flow is maintained against friction by the Coriolis force acting on a small north–south or meridional secondary circulation.
See also
Hough function
Primitive equations
Secondary flow
Document 3:::
Polar motion of the Earth is the motion of the Earth's rotational axis relative to its crust. This is measured with respect to a reference frame in which the solid Earth is fixed (a so-called Earth-centered, Earth-fixed or ECEF reference frame). This variation is a few meters on the surface of the Earth.
Analysis
Polar motion is defined relative to a conventionally defined reference axis, the CIO (Conventional International Origin), being the pole's average location over the year 1900. It consists of three major components: a free oscillation called Chandler wobble with a period of about 435 days, an annual oscillation, and an irregular drift in the direction of the 80th meridian west, which has lately been less extremely west.
Causes
The slow drift, about 20 m since 1900, is partly due to motions in the Earth's core and mantle, and partly to the redistribution of water mass as the Greenland ice sheet melts, and to isostatic rebound, i.e. the slow rise of land that was formerly burdened with ice sheets or glaciers. The drift is roughly along the 80th meridian west. Since about 2000, the pole has found a less extreme drift, which is roughly along the central meridian. This less dramatically westward drift of motion is attributed to the global scale mass transport between the oceans and the continents.
Major earthquakes cause abrupt polar motion by altering the volume distribution of the Earth's solid mass. These shifts are quite small in magnitude relative to the long-term core/mantle and isostatic rebound components of polar motion.
Principle
In the absence of external torques, the vector of the angular momentum M of a rotating system remains constant and is directed toward a fixed point in space. If the earth were perfectly symmetrical and rigid, M would remain aligned with its axis of symmetry, which would also be its axis of rotation. In the case of the Earth, it is almost identical with its axis of rotation, with the discrepancy due to shifts of mass on the
Document 4:::
Cyclonic rotation, or cyclonic circulation, is atmospheric motion in the same direction as a planet's rotation, as opposed to anticyclonic rotation. In the case of Earth's rotation, the Coriolis effect causes cyclonic rotation to be in a counterclockwise direction in the Northern Hemisphere and clockwise in the Southern Hemisphere. A closed area of winds rotating cyclonically is known as a cyclone.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What effect is caused by air moving over the earth's surface as it spins?
A. dopple effect
B. pruett effect
C. mazinho effect
D. coriolis effect
Answer:
|
|
sciq-10843
|
multiple_choice
|
What man-made structures orbit all of the inner planets as well as jupiter and saturn?
|
[
"moons",
"comets",
"space shuttles",
"satellites"
] |
D
|
Relavent Documents:
Document 0:::
This article is a list of notable unsolved problems in astronomy. Some of these problems are theoretical, meaning that existing theories may be incapable of explaining certain observed phenomena or experimental results. Others are experimental, meaning that experiments necessary to test proposed theory or investigate a phenomenon in greater detail have not yet been performed. Some pertain to unique events or occurrences that have not repeated themselves and whose causes remain unclear.
Planetary astronomy
Our solar system
Orbiting bodies and rotation:
Are there any non-dwarf planets beyond Neptune?
Why do extreme trans-Neptunian objects have elongated orbits?
Rotation rate of Saturn:
Why does the magnetosphere of Saturn rotate at a rate close to that at which the planet's clouds rotate?
What is the rotation rate of Saturn's deep interior?
Satellite geomorphology:
What is the origin of the chain of high mountains that closely follows the equator of Saturn's moon, Iapetus?
Are the mountains the remnant of hot and fast-rotating young Iapetus?
Are the mountains the result of material (either from the rings of Saturn or its own ring) that over time collected upon the surface?
Extra-solar
How common are Solar System-like planetary systems? Some observed planetary systems contain Super-Earths and Hot Jupiters that orbit very close to their stars. Systems with Jupiter-like planets in Jupiter-like orbits appear to be rare. There are several possibilities why Jupiter-like orbits are rare, including that data is lacking or the grand tack hypothesis.
Stellar astronomy and astrophysics
Solar cycle:
How does the Sun generate its periodically reversing large-scale magnetic field?
How do other Sol-like stars generate their magnetic fields, and what are the similarities and differences between stellar activity cycles and that of the Sun?
What caused the Maunder Minimum and other grand minima, and how does the solar cycle recover from a minimum state?
Coronal heat
Document 1:::
The Sweden Solar System is the world's largest permanent scale model of the Solar System. The Sun is represented by the Avicii Arena in Stockholm, the second-largest hemispherical building in the world. The inner planets can also be found in Stockholm but the outer planets are situated northward in other cities along the Baltic Sea. The system was started by Nils Brenning, professor at the Royal Institute of Technology in Stockholm, and Gösta Gahm, professor at the Stockholm University. The model represents the Solar System on the scale of 1:20 million.
The system
The bodies represented in this model include the Sun, the planets (and some of their moons), dwarf planets and many types of small bodies (comets, asteroids, trans-Neptunians, etc.), as well as some abstract concepts (like the Termination Shock zone). Because of the existence of many small bodies in the real Solar System, the model can always be further increased.
The Sun is represented by the Avicii Arena (Globen), Stockholm, which is the second-largest hemispherical building in the world, in diameter. To respect the scale, the globe represents the Sun including its corona.
Inner planets
Mercury ( in diameter) is placed at Stockholm City Museum, from the Globe. The small metallic sphere was built by the artist Peter Varhelyi.
Venus ( in diameter) is placed at Vetenskapens Hus at KTH (Royal Institute of Technology), from the Globe. The previous model, made by the United States artist Daniel Oberti, was inaugurated on 8 June 2004, during a Venus transit and placed at KTH. It fell and shattered around 11 June 2011. Due to construction work at the location of the previous model of Venus it was removed and as of October 2012 cannot be seen. The current model now at Vetenskapens Hus was previously located at the Observatory Museum in Stockholm (now closed).
Earth ( in diameter) is located at the Swedish Museum of Natural History (Cosmonova), from the Globe. Satellite images of the Earth are exhibited
Document 2:::
The moons of Saturn are numerous and diverse, ranging from tiny moonlets only tens of meters across to the enormous Titan, which is larger than the planet Mercury. There are 146 moons with confirmed orbits. This number does not include the many thousands of moonlets embedded within Saturn's dense rings, nor hundreds of possible kilometer-sized distant moons that were seen through telescopes but not recaptured. Seven Saturnian moons are large enough to have collapsed into a relaxed, ellipsoidal shape, though only one or two of those, Titan and possibly Rhea, are currently in hydrostatic equilibrium. Three moons are particularly notable. Titan is the second-largest moon in the Solar System (after Jupiter's Ganymede), with a nitrogen-rich Earth-like atmosphere and a landscape featuring river networks and hydrocarbon lakes. Enceladus emits jets of ice from its south-polar region and is covered in a deep layer of snow. Iapetus has contrasting black and white hemispheres as well as an extensive ridge of equatorial mountains among the tallest in the solar system.
Of the known moons, 24 are regular satellites; they have prograde orbits not greatly inclined to Saturn's equatorial plane. They include the seven major satellites, four small moons that exist in a trojan orbit with larger moons, two mutually co-orbital moons, and two moons that act as shepherds of Saturn's narrow F Ring. Two other known regular satellites orbit within gaps in Saturn's rings. The relatively large Hyperion is locked in an orbital resonance with Titan. The remaining regular moons orbit near the outer edge of the dense A Ring, within the diffuse G Ring, and between the major moons Mimas and Enceladus. The regular satellites are traditionally named after Titans and Titanesses or other figures associated with the mythological Saturn.
The remaining 122, with mean diameters ranging from , are irregular satellites, whose orbits are much farther from Saturn, have high inclinations, and are mixed between
Document 3:::
This is a list of most likely gravitationally rounded objects of the Solar System, which are objects that have a rounded, ellipsoidal shape due to their own gravity (but are not necessarily in hydrostatic equilibrium). Apart from the Sun itself, these objects qualify as planets according to common geophysical definitions of that term. The sizes of these objects range over three orders of magnitude in radius, from planetary-mass objects like dwarf planets and some moons to the planets and the Sun. This list does not include small Solar System bodies, but it does include a sample of possible planetary-mass objects whose shapes have yet to be determined. The Sun's orbital characteristics are listed in relation to the Galactic Center, while all other objects are listed in order of their distance from the Sun.
Star
The Sun is a G-type main-sequence star. It contains almost 99.9% of all the mass in the Solar System.
Planets
In 2006, the International Astronomical Union (IAU) defined a planet as a body in orbit around the Sun that was large enough to have achieved hydrostatic equilibrium and to have "cleared the neighbourhood around its orbit". The practical meaning of "cleared the neighborhood" is that a planet is comparatively massive enough for its gravitation to control the orbits of all objects in its vicinity. In practice, the term "hydrostatic equilibrium" is interpreted loosely. Mercury is round but not actually in hydrostatic equilibrium, but it is universally regarded as a planet nonetheless.
According to the IAU's explicit count, there are eight planets in the Solar System; four terrestrial planets (Mercury, Venus, Earth, and Mars) and four giant planets, which can be divided further into two gas giants (Jupiter and Saturn) and two ice giants (Uranus and Neptune). When excluding the Sun, the four giant planets account for more than 99% of the mass of the Solar System.
Dwarf planets
Dwarf planets are bodies orbiting the Sun that are massive and warm eno
Document 4:::
The interstellar space opera epic Star Wars uses science and technology in its settings and storylines. The series has showcased many technological concepts, both in the movies and in the expanded universe of novels, comics and other forms of media. The Star Wars movies' primary objective is to build upon drama, philosophy, political science and less on scientific knowledge. Many of the on-screen technologies created or borrowed for the Star Wars universe were used mainly as plot devices.
The iconic status that Star Wars has gained in popular culture and science fiction allows it to be used as an accessible introduction to real scientific concepts. Many of the features or technologies used in the Star Wars universe are not yet considered possible. Despite this, their concepts are still probable.
Tatooine's twin stars
In the past, scientists thought that planets would be unlikely to form around binary stars. However, recent simulations indicate that planets are just as likely to form around binary star systems as single-star systems. Of the 3457 exoplanets currently known, 146 actually orbit binary star systems (and 39 orbit multiple star systems with three or more stars). Specifically, they orbit what are known as "wide" binary star systems where the two stars are fairly far apart (several AU). Tatooine appears to be of the other type — a "close" binary, where the stars are very close, and the planets orbit their common center of mass.
The first observationally confirmed binary — Kepler-16b — is a close binary. Exoplanet researchers' simulations indicate that planets form frequently around close binaries, though gravitational effects from the dual star system tend to make them very difficult to find with current Doppler and transit methods of planetary searches. In studies looking for dusty disks—where planet formation is likely—around binary stars, such disks were found in wide or narrow binaries, or those whose stars are more than 50 or less than 3 AU apart, r
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What man-made structures orbit all of the inner planets as well as jupiter and saturn?
A. moons
B. comets
C. space shuttles
D. satellites
Answer:
|
|
sciq-11246
|
multiple_choice
|
What bodily defense can be acquired in an active or passive way, and can be natural or artificial?
|
[
"membrane",
"skin",
"nerves",
"immunity"
] |
D
|
Relavent Documents:
Document 0:::
The innate, or nonspecific, immune system is one of the two main immunity strategies (the other being the adaptive immune system) in vertebrates. The innate immune system is an alternate defense strategy and is the dominant immune system response found in plants, fungi, insects, and primitive multicellular organisms (see Beyond vertebrates).
The major functions of the innate immune system are to:
recruit immune cells to infection sites by producing chemical factors, including chemical mediators called cytokines
activate the complement cascade to identify bacteria, activate cells, and promote clearance of antibody complexes or dead cells
identify and remove foreign substances present in organs, tissues, blood and lymph, by specialized white blood cells
activate the adaptive immune system through antigen presentation
act as a physical and chemical barrier to infectious agents; via physical measures such as skin and chemical measures such as clotting factors in blood, which are released following a contusion or other injury that breaks through the first-line physical barrier (not to be confused with a second-line physical or chemical barrier, such as the blood–brain barrier, which protects the nervous system from pathogens that have already gained access to the host).
Anatomical barriers
Anatomical barriers include physical, chemical and biological barriers. The epithelial surfaces form a physical barrier that is impermeable to most infectious agents, acting as the first line of defense against invading organisms. Desquamation (shedding) of skin epithelium also helps remove bacteria and other infectious agents that have adhered to the epithelial surface. Lack of blood vessels, the inability of the epidermis to retain moisture, and the presence of sebaceous glands in the dermis, produces an environment unsuitable for the survival of microbes. In the gastrointestinal and respiratory tract, movement due to peristalsis or cilia, respectively, helps remove infectious
Document 1:::
The following outline is provided as an overview of and topical guide to immunology:
Immunology – study of all aspects of the immune system in all organisms. It deals with the physiological functioning of the immune system in states of both health and disease; malfunctions of the immune system in immunological disorders (autoimmune diseases, hypersensitivities, immune deficiency, transplant rejection); the physical, chemical and physiological characteristics of the components of the immune system in vitro, in situ, and in vivo.
Essence of immunology
Immunology
Branch of Biomedical science
Immune system
Immunity
Branches of immunology:
1. General Immunology
2. Basic Immunology
3. Advanced Immunology
4. Medical Immunology
5. Pharmaceutical Immunology
6. Clinical Immunology
7. Environmental Immunology
8. Cellular and Molecular Immunology
9. Food and Agricultural Immunology
Classical immunology
Clinical immunology
Computational immunology
Diagnostic immunology
Evolutionary immunology
Systems immunology
Immunomics
Immunoproteomics
Immunophysics
Immunochemistry
Ecoimmunology
Immunopathology
Nutritional immunology
Psychoneuroimmunology
Reproductive immunology
Circadian immunology
Immunotoxicology
Palaeoimmunology
Tissue-based immunology
Testicular immunology - Testes
Immunodermatology - Skin
Intravascular immunology - Blood
Osteoimmunology - Bone
Mucosal immunology - Mucosal surfaces
Respiratory tract antimicrobial defense system - Respiratory tract
Neuroimmunology - Neuroimmune system in the Central nervous system
Ocularimmunology - Ocular immune system in the Eye
Cancer immunology/Immunooncology - Tumors
History of immunology
History of immunology
Timeline of immunology
General immunological concepts
Immunity:
Immunity against:
Pathogens
Pathogenic bacteria
Viruses
Fungi
Protozoa
Parasites
Tumors
Allergens
Self-proteins
Autoimmunity
Alloimmunity
Cross-reactivity
Tolerance
Central tolerance
Peripheral tolerance
Document 2:::
A defense wound or self-defense wound is an injury received by the victim of an attack while trying to defend against the assailant(s). Defensive wounds are often found on the hands and forearms if a victim raised them to protect the head and face or to fend off an assault, but may also be present on the feet and legs if a victim attempted defense while lying down and kicking out at an assailant.
The appearance and nature of the wound varies with the type of weapon used and the location of the injury, and may present as a laceration, abrasion, contusion or bone fracture. Where a victim has time to raise hands or arms before being shot by an assailant, the injury may also present as a gunshot wound. Severe laceration of the palmar surface of the hand or partial amputation of fingers may result from the victim grasping the blade of a weapon during an attack. In forensic pathology the presence of defense wounds is highly indicative of homicide and also proves that the victim was, at least initially, conscious and able to offer some resistance during the attack.
Defense wounds may be classified as active or passive. A victim of a knife attack, for example, would receive active defense wounds from grasping at the knife's blade, and passive defense wounds on the back of the hand if it was raised up to protect the face.
Document 3:::
Trained immunity is a long-term functional modification of cells in the innate immune system which leads to an altered response to a second unrelated challenge. For example, the BCG vaccine leads to a reduction in childhood mortality caused by unrelated infectious agents. The term "innate immune memory" is sometimes used as a synonym for the term trained immunity which was first coined by Mihai Netea in 2011. The term "trained immunity" is relatively new – immunological memory has previously been considered only as a part of adaptive immunity – and refers only to changes in innate immune memory of vertebrates. This type of immunity is thought to be largely mediated by epigenetic modifications. The changes to the innate immune response may last up to several months, in contrast to the classical immunological memory (which may last up to a lifetime), and is usually unspecific because there is no production of specific antibodies/receptors. Trained immunity has been suggested to possess a transgenerational effect, for example the children of mothers who had also received vaccination against BCG had a lower mortality rate than children of unvaccinated mothers. The BRACE trial is currently assessing if BCG vaccination can reduce the impact of COVID-19 in healthcare workers. Other vaccines are also thought to induce immune training such as the DTPw vaccine.
Immune cells subject to training
Trained immunity is thought to be largely mediated by functional reprogramming of myeloid cells. One of the first described adaptive changes in macrophages were associated with lipopolysaccharide tolerance, which resulted in the silencing of inflammatory genes. Similarly, Candida albicans and fungal β-glucan trigger changes in monocyte histone methylation, this functional reprogramming eventually provides protection against reinfection. Also, a non-specific manner of protection in training with different microbial ligands was showed, for example treatment with fungal β-glucan induced
Document 4:::
Injury is physiological damage to the living tissue of any organism, whether in humans, in other animals, or in plants. Injuries can be caused in many ways, such as mechanically with penetration by sharp objects such as teeth or with blunt objects, by heat or cold, or by venoms and biotoxins. Injury prompts an inflammatory response in many taxa of animals; this prompts wound healing. In both plants and animals, substances are often released to help to occlude the wound, limiting loss of fluids and the entry of pathogens such as bacteria. Many organisms secrete antimicrobial chemicals which limit wound infection; in addition, animals have a variety of immune responses for the same purpose. Both plants and animals have regrowth mechanisms which may result in complete or partial healing over the injury.
Taxonomic range
Animals
Injury in animals is sometimes defined as mechanical damage to anatomical structure, but it has a wider connotation of physical damage with any cause, including drowning, burns, and poisoning. Such damage may result from attempted predation, territorial fights, falls, and abiotic factors.
Injury prompts an inflammatory response in animals of many different phyla; this prompts coagulation of the blood or body fluid, followed by wound healing, which may be rapid, as in the cnidaria. Arthropods are able to repair injuries to the cuticle that forms their exoskeleton to some extent.
Animals in several phyla, including annelids, arthropods, cnidaria, molluscs, nematodes, and vertebrates are able to produce antimicrobial peptides to fight off infection following an injury.
Humans
Injury in humans has been studied extensively for its importance in medicine. Much of medical practice including emergency medicine and pain management is dedicated to the treatment of injuries. The World Health Organization has developed a classification of injuries in humans by categories including mechanism, objects/substances producing injury, place of occurrence,
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What bodily defense can be acquired in an active or passive way, and can be natural or artificial?
A. membrane
B. skin
C. nerves
D. immunity
Answer:
|
|
sciq-6524
|
multiple_choice
|
Dew point is the temperature at which what occurs?
|
[
"combustion",
"fermentation",
"condensation",
"precipitation"
] |
C
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
A. increases
B. decreases
C. stays the same
D. Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The dew point of a given body of air is the temperature to which it must be cooled to become saturated with water vapor. This temperature depends on the pressure and water content of the air. When the air is cooled below the dew point, its moisture capacity is reduced and airborne water vapor will condense to form liquid water known as dew. When this occurs through the air's contact with a colder surface, dew will form on that surface.
The dew point is affected by the air's humidity. The more moisture the air contains, the higher its dew point.
When the temperature is below the freezing point of water, the dew point is called the frost point, as frost is formed via deposition rather than condensation.
In liquids, the analog to the dew point is the cloud point.
Humidity
If all the other factors influencing humidity remain constant, at ground level the relative humidity rises as the temperature falls; this is because less vapor is needed to saturate the air. In normal conditions, the dew point temperature will not be greater than the air temperature, since relative humidity typically does not exceed 100%.
In technical terms, the dew point is the temperature at which the water vapor in a sample of air at constant barometric pressure condenses into liquid water at the same rate at which it evaporates. At temperatures below the dew point, the rate of condensation will be greater than that of evaporation, forming more liquid water. The condensed water is called dew when it forms on a solid surface, or frost if it freezes. In the air, the condensed water is called either fog or a cloud, depending on its altitude when it forms. If the temperature is below the dew point, and no dew or fog forms, the vapor is called supersaturated. This can happen if there are not enough particles in the air to act as condensation nuclei.
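As a practical companion to the definition above, the sketch below estimates the dew point from air temperature and relative humidity using the Magnus approximation; this formula and its constants are a common engineering approximation and are not stated in the excerpt itself.

```python
# Approximate dew point via the Magnus formula (an assumed approximation for
# illustration; typical accuracy is a few tenths of a degree in ordinary conditions).
import math

def dew_point_celsius(temp_c, relative_humidity_pct, b=17.62, c=243.12):
    """Estimate the dew point in degrees Celsius from air temperature and RH (%)."""
    gamma = math.log(relative_humidity_pct / 100.0) + (b * temp_c) / (c + temp_c)
    return (c * gamma) / (b - gamma)

print(f"{dew_point_celsius(25.0, 60.0):.1f} degC")  # roughly 16.7 degC
```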
The dew point depends on how much water vapor the air contains. If the air is very dry and has few water molecules, the dew point is low and surface
Document 2:::
The hydrocarbon dew point is the temperature (at a given pressure) at which the hydrocarbon components of any hydrocarbon-rich gas mixture, such as natural gas, will start to condense out of the gaseous phase. It is often also referred to as the HDP or the HCDP. The maximum temperature at which such condensation takes place is called the cricondentherm. The hydrocarbon dew point is a function of the gas composition as well as the pressure.
The hydrocarbon dew point is universally used in the natural gas industry as an important quality parameter, stipulated in contractual specifications and enforced throughout the natural gas supply chain, from producers through processing, transmission and distribution companies to final end users.
The hydrocarbon dew point of a gas is a different concept from the water dew point, the latter being the temperature (at a given pressure) at which water vapor present in a gas mixture will condense out of the gas.
Relation to the term GPM
In the United States, the hydrocarbon dew point of processed, pipelined natural gas is related to and characterized by the term GPM which is the gallons of liquefiable hydrocarbons contained in of natural gas at a stated temperature and pressure. When the liquefiable hydrocarbons are characterized as being hexane or higher molecular weight components, they are reported as GPM (C6+).
However, the quality of raw produced natural gas is also often characterized by the term GPM meaning the gallons of liquefiable hydrocarbons contained in of the raw natural gas. In such cases, when the liquefiable hydrocarbons in the raw natural gas are characterized as being ethane or higher molecular weight components, they are reported as GPM (C2+). Similarly, when characterized as being propane or higher molecular weight components, they are reported as GPM (C3+).
Care must be taken not to confuse the two different definitions of the term GPM.
Although GPM is an additional parameter of some value, most pipeli
Document 3:::
At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm.
For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of sorption process the isotherms cannot be determined explicitly by calculation, but must be recorded experimentally for each product.
The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system.
BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption and finally leading to capillary condensation, explaining these materials high moisture capacity at high relative humidity.
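To make the shape of such isotherms concrete, the sketch below evaluates the standard closed-form BET isotherm; the monolayer capacity v_m and the BET constant c used here are illustrative placeholders rather than values fitted to any real material.

```python
def bet_loading(x: float, c: float, v_m: float = 1.0) -> float:
    """Amount adsorbed under the BET model at relative pressure x = p/p0.

    v_m is the monolayer capacity and c the BET constant; both are
    illustrative here, not fitted parameters.
    """
    return v_m * c * x / ((1.0 - x) * (1.0 + (c - 1.0) * x))

# With a large c the isotherm rises steeply at low x (monolayer formation),
# flattens, then climbs again as x -> 1, mimicking multilayer adsorption
# and the high moisture capacity seen at high relative humidity.
for x in (0.05, 0.2, 0.5, 0.9):
    print(x, round(bet_loading(x, c=100.0), 2))
```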
Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the
Document 4:::
Critical point
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Dew point is the temperature at which what occurs?
A. combustion
B. fermentation
C. condensation
D. precipitation
Answer:
|
|
sciq-9596
|
multiple_choice
|
What molecule is made of two hydrogen atoms and one oxygen atom?
|
[
"lipids",
"water",
"carbohydrate",
"Oxygen"
] |
B
|
Relavent Documents:
Document 0:::
In chemistry, the carbon-hydrogen bond ( bond) is a chemical bond between carbon and hydrogen atoms that can be found in many organic compounds. This bond is a covalent, single bond, meaning that carbon shares its outer valence electrons with up to four hydrogens. This completes both of their outer shells, making them stable.
Carbon–hydrogen bonds have a bond length of about 1.09 Å (1.09 × 10−10 m) and a bond energy of about 413 kJ/mol (see table below). Using Pauling's scale—C (2.55) and H (2.2)—the electronegativity difference between these two atoms is 0.35. Because of this small difference in electronegativities, the bond is generally regarded as being non-polar. In structural formulas of molecules, the hydrogen atoms are often omitted. Compound classes consisting solely of bonds and bonds are alkanes, alkenes, alkynes, and aromatic hydrocarbons. Collectively they are known as hydrocarbons.
In October 2016, astronomers reported that the very basic chemical ingredients of life—the carbon-hydrogen molecule (CH, or methylidyne radical), the carbon-hydrogen positive ion () and the carbon ion ()—are the result, in large part, of ultraviolet light from stars, rather than in other ways, such as the result of turbulent events related to supernovae and young stars, as thought earlier.
Bond length
The length of the carbon-hydrogen bond varies slightly with the hybridisation of the carbon atom. A bond between a hydrogen atom and an sp2 hybridised carbon atom is about 0.6% shorter than between hydrogen and sp3 hybridised carbon. A bond between hydrogen and sp hybridised carbon is shorter still, about 3% shorter than sp3 C-H. This trend is illustrated by the molecular geometry of ethane, ethylene and acetylene.
Reactions
The C−H bond in general is very strong, so it is relatively unreactive. In several compound classes, collectively called carbon acids, the C−H bond can be sufficiently acidic for proton removal. Unactivated C−H bonds are found in alkanes and are no
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
This is a list of topics in molecular biology. See also index of biochemistry articles.
Document 3:::
Atomicity is the total number of atoms present in a molecule. For example, each molecule of oxygen (O2) is composed of two oxygen atoms. Therefore, the atomicity of oxygen is 2.
In older contexts, atomicity is sometimes equivalent to valency. Some authors also use the term to refer to the maximum number of valencies observed for an element.
Classifications
Based on atomicity, molecules can be classified as:
Monoatomic (composed of one atom). Examples include He (helium), Ne (neon), Ar (argon), and Kr (krypton). All noble gases are monoatomic.
Diatomic (composed of two atoms). Examples include H2 (hydrogen), N2 (nitrogen), O2 (oxygen), F2 (fluorine), and Cl2 (chlorine). Halogens are usually diatomic.
Triatomic (composed of three atoms). Examples include O3 (ozone).
Polyatomic (composed of three or more atoms). Examples include S8.
Atomicity may vary in different allotropes of the same element.
The exact atomicity of metals, as well as some other elements such as carbon, cannot be determined because they consist of a large and indefinite number of atoms bonded together. They are typically designated as having an atomicity of 1.
The atomicity of a homonuclear molecule can be derived by dividing the molecular weight by the atomic weight. For example, the molecular weight of oxygen is 31.999, while its atomic weight is 15.999; therefore, its atomicity is 2 (31.999/15.999 ≈ 2).
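A minimal sketch of that division, using rounded standard atomic and molecular weights, is:

```python
def atomicity(molecular_weight: float, atomic_weight: float) -> int:
    """Atomicity of a homonuclear molecule: molecular weight / atomic weight, rounded."""
    return round(molecular_weight / atomic_weight)

print(atomicity(31.999, 15.999))   # oxygen: 2, so O2 is diatomic
print(atomicity(256.52, 32.065))   # sulfur: 8, so S8 is polyatomic
```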
Examples
The most common values of atomicity for the first 30 elements in the periodic table are as follows:
Document 4:::
In chemistry, a dihydrogen bond is a kind of hydrogen bond, an interaction between a metal hydride bond and an OH or NH group or other proton donor. With a van der Waals radius of 1.2 Å, hydrogen atoms do not usually approach other hydrogen atoms closer than 2.4 Å. Close approaches near 1.8 Å, are, however, characteristic of dihydrogen bonding.
Boron hydrides
An early example of this phenomenon is credited to Brown and Heseltine. They observed intense absorptions in the IR bands at 3300 and 3210 cm−1 for a solution of (CH3)2NHBH3. The higher energy band is assigned to a normal N−H vibration whereas the lower energy band is assigned to the same bond, which is interacting with the B−H. Upon dilution of the solution, the 3300 cm−1 band increased in intensity and the 3210 cm−1 band decreased, indicative of intermolecular association.
Interest in dihydrogen bonding was reignited upon the crystallographic characterization of the molecule H3NBH3. In this molecule, like the one studied by Brown and Heseltine, the hydrogen atoms on nitrogen have a partial positive charge, denoted Hδ+, and the hydrogen atoms on boron have a partial negative charge, often denoted Hδ−. In other words, the amine is a protic acid and the borane end is hydridic. The resulting B−H...H−N attractions stabilize the molecule as a solid. In contrast, the related substance ethane, H3CCH3, is a gas with a boiling point 285 °C lower. Because two hydrogen centers are involved, the interaction is termed a dihydrogen bond. Formation of a dihydrogen bond is assumed to precede formation of H2 from the reaction of a hydride and a protic acid. A very short dihydrogen bond is observed in NaBH4·2H2O with H−H contacts of 1.79, 1.86, and 1.94 Å.
Coordination chemistry
Protonation of transition metal hydride complexes is generally thought to occur via dihydrogen bonding. This kind of H−H interaction is distinct from the H−H bonding interaction in transition metal complexes having dihydrogen bound to a meta
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What molecule is made of two hydrogen atoms and one oxygen atom?
A. lipids
B. water
C. carbohydrate
D. Oxygen
Answer:
|
|
sciq-7571
|
multiple_choice
|
What is happening to the rate of the expansion of the universe?
|
[
"it is unknown",
"it is increasing",
"it is decreasing",
"it is stable"
] |
B
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 3:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 4:::
Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers.
There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM.
Terminology
History
Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is happening to the rate of the expansion of the universe?
A. it is unknown
B. it is increasing
C. it is decreasing
D. it is stable
Answer:
|
|
sciq-443
|
multiple_choice
|
What is the name of the second most electronegative element?
|
[
"carbon",
"oxygen",
"Hydrogen",
"nitrogen"
] |
B
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Secondary electrons are electrons generated as ionization products. They are called 'secondary' because they are generated by other radiation (the primary radiation). This radiation can be in the form of ions, electrons, or photons with sufficiently high energy, i.e. exceeding the ionization potential. Photoelectrons can be considered an example of secondary electrons where the primary radiation are photons; in some discussions photoelectrons with higher energy (>50 eV) are still considered "primary" while the electrons freed by the photoelectrons are "secondary".
Applications
Secondary electrons are also the main means of viewing images in the scanning electron microscope (SEM). The range of secondary electrons depends on the energy. Plotting the inelastic mean free path as a function of energy often shows characteristics of the "universal curve" familiar to electron spectroscopists and surface analysts. This distance is on the order of a few nanometers in metals and tens of nanometers in insulators. This small distance allows such fine resolution to be achieved in the SEM.
For SiO2, for a primary electron energy of 100 eV, the secondary electron range is up to 20 nm from the point of incidence.
See also
Delta ray
Everhart-Thornley detector
Document 2:::
In chemistry and physics, the iron group refers to elements that are in some way related to iron; mostly in period (row) 4 of the periodic table. The term has different meanings in different contexts.
In chemistry, the term is largely obsolete, but it often means iron, cobalt, and nickel, also called the iron triad; or, sometimes, other elements that resemble iron in some chemical aspects.
In astrophysics and nuclear physics, the term is still quite common, and it typically means those three plus chromium and manganese—five elements that are exceptionally abundant, both on Earth and elsewhere in the universe, compared to their neighbors in the periodic table. Titanium and vanadium are also produced in Type Ia supernovae.
General chemistry
In chemistry, "iron group" used to refer to iron and the next two elements in the periodic table, namely cobalt and nickel. These three comprised the "iron triad". They are the top elements of groups 8, 9, and 10 of the periodic table; or the top row of "group VIII" in the old (pre-1990) IUPAC system, or of "group VIIIB" in the CAS system. These three metals (and the three of the platinum group, immediately below them) were set aside from the other elements because they have obvious similarities in their chemistry, but are not obviously related to any of the other groups. The iron group and its alloys exhibit ferromagnetism.
The similarities in chemistry were noted as one of Döbereiner's triads and by Adolph Strecker in 1859. Indeed, Newlands' "octaves" (1865) were harshly criticized for separating iron from cobalt and nickel. Mendeleev stressed that groups of "chemically analogous elements" could have similar atomic weights as well as atomic weights which increase by equal increments, both in his original 1869 paper and his 1889 Faraday Lecture.
Analytical chemistry
In the traditional methods of qualitative inorganic analysis, the iron group consists of those cations which
have soluble chlorides; and
are not precipitated
Document 3:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 4:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the name of the second most electronegative element?
A. carbon
B. oxygen
C. Hydrogen
D. nitrogen
Answer:
|
|
sciq-6497
|
multiple_choice
|
When members of the same species compete for the same resources, it is called what?
|
[
"interspecies competition",
"natural selection",
"intraspecific competition",
"extinction"
] |
C
|
Relavent Documents:
Document 0:::
Interspecific competition, in ecology, is a form of competition in which individuals of different species compete for the same resources in an ecosystem (e.g. food or living space). This can be contrasted with mutualism, a type of symbiosis. Competition between members of the same species is called intraspecific competition.
If a tree species in a dense forest grows taller than surrounding tree species, it is able to absorb more of the incoming sunlight. However, less sunlight is then available for the trees that are shaded by the taller tree, thus interspecific competition. Leopards and lions can also be in interspecific competition, since both species feed on the same prey, and can be negatively impacted by the presence of the other because they will have less food.
Competition is only one of many interacting biotic and abiotic factors that affect community structure. Moreover, competition is not always a straightforward, direct, interaction. Interspecific competition may occur when individuals of two separate species share a limiting resource in the same area. If the resource cannot support both populations, then lowered fecundity, growth, or survival may result in at least one species. Interspecific competition has the potential to alter populations, communities and the evolution of interacting species. On an individual organism level, competition can occur as interference or exploitative competition.
Types
All of the types described here can also apply to intraspecific competition, that is, competition among individuals within a species. Also, any specific example of interspecific competition can be described in terms of both a mechanism (e.g., resource or interference) and an outcome (symmetric or asymmetric).
Based on mechanism
Exploitative competition, also referred to as resource competition, is a form of competition in which one species consumes and either reduces or more efficiently uses a shared limiting resource and therefore depletes the availab
Document 1:::
Competition is an interaction between organisms or species in which both require a resource that is in limited supply (such as food, water, or territory). Competition lowers the fitness of both organisms involved since the presence of one of the organisms always reduces the amount of the resource available to the other.
In the study of community ecology, competition within and between members of a species is an important biological interaction. Competition is one of many interacting biotic and abiotic factors that affect community structure, species diversity, and population dynamics (shifts in a population over time).
There are three major mechanisms of competition: interference, exploitation, and apparent competition (in order from most direct to least direct). Interference and exploitation competition can be classed as "real" forms of competition, while apparent competition is not, as organisms do not share a resource, but instead share a predator. Competition among members of the same species is known as intraspecific competition, while competition between individuals of different species is known as interspecific competition.
According to the competitive exclusion principle, species less suited to compete for resources must either adapt or die out, although competitive exclusion is rarely found in natural ecosystems. According to evolutionary theory, competition within and between species for resources is important in natural selection. More recently, however, researchers have suggested that evolutionary biodiversity for vertebrates has been driven not by competition between organisms, but by these animals adapting to colonize empty livable space; this is termed the 'Room to Roam' hypothesis.
Interference competition
During interference competition, also called contest competition, organisms interact directly by fighting for scarce resources. For example, large aphids defend feeding sites on cottonwood leaves by ejecting smaller aphids from better sites.
Document 2:::
Any action or influence that species have on each other is considered a biological interaction. These interactions between species can be considered in several ways. One such way is to depict interactions in the form of a network, which identifies the members and the patterns that connect them. Species interactions are considered primarily in terms of trophic interactions, which depict which species feed on others.
Currently, ecological networks that integrate non-trophic interactions are being built. The type of interactions they can contain can be classified into six categories: mutualism, commensalism, neutralism, amensalism, antagonism, and competition.
Observing and estimating the fitness costs and benefits of species interactions can be very problematic. The way interactions are interpreted can profoundly affect the ensuing conclusions.
Interaction characteristics
Characterization of interactions can be made according to various measures, or any combination of them.
Prevalence
Prevalence identifies the proportion of the population affected by a given interaction, and thus quantifies whether it is relatively rare or common. Generally, only common interactions are considered.
Negative/ Positive
Whether the interaction is beneficial or harmful to the species involved determines the sign of the interaction, and what type of interaction it is classified as. To establish whether they are harmful or beneficial, careful observational and/or experimental studies can be conducted, in an attempt to establish the cost/benefit balance experienced by the members.
Strength
The sign of an interaction does not capture the impact on fitness of that interaction. One example of this is of antagonism, in which predators may have a much stronger impact on their prey species (death), than parasites (reduction in fitness). Similarly, positive interactions can produce anything from a negligible change in fitness to a life or death impact.
Relationship in space and time
The rel
Document 3:::
In ecological theory, the Hutchinson's ratio is the ratio of the size differences between similar species when they are living together as compared to when they are isolated. It is named after G. Evelyn Hutchinson who concluded that various key attributes in species varied according to the ratio of 1:1.1 to 1:1.4. The mean ratio 1.3 can be interpreted as the amount of separation necessary to obtain coexistence of species at the same trophic level.
The variation in trophic structures of sympatric congeneric species is presumed to lead to niche differentiation, and allowing coexistence of multiple similar species in the same habitat by the partitioning of food resources. Hutchinson concluded that this size ratio could be used as an indicator of the kind of difference necessary to permit two species to co-occur in different niches but at the same level of the food web. The rule's legitimacy has been questioned, as other categories of objects also exhibit size ratios of roughly 1.3.
Studies done on interspecific competition and niche changes in Tits (Parus spp.) show that when there are multiple species in the same community there is an expected change in foraging when they are of similar size (size ratio 1-1.2). There was no change found among the less similar species. The authors took this as strong evidence of niche differentiation driven by interspecific competition, and as a good argument for Hutchinson's rule.
The simplest and perhaps the most effective way to differentiate the ecological niches of coexisting species is their morphological differentiation (in particular, size differentiation).
Hutchinson showed that the average body size ratio in species of the same genus that belong to the same community and use the same resource is about 1.3 (from 1.1 to 1.4) and the respective body weight ratio is 2. This empirical pattern tells us that this rule does not apply to all organisms and ecological situations. And, therefore, it would be of particular
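As a quick numerical check (assuming, as a simplification not stated in the excerpt, that body weight scales with the cube of linear size), a weight ratio of 2 corresponds to a linear size ratio of about 1.26, which sits inside Hutchinson's 1.1-1.4 range:

```python
# Isometric-scaling check: weight ratio 2 -> linear size ratio 2 ** (1/3).
size_ratio = 2 ** (1.0 / 3.0)
print(round(size_ratio, 2))   # ~1.26, within the 1.1-1.4 range quoted above
```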
Document 4:::
Ecology: From Individuals to Ecosystems is a 2006 higher education textbook on general ecology written by Michael Begon, Colin R. Townsend and John L. Harper. Published by Blackwell Publishing, it is now in its fourth edition. The first three editions were published by Blackwell Science under the title Ecology: Individuals, Populations and Communities. Since it first became available it has had a positive reception, and has long been one of the leading textbooks on ecology.
Background and history
The book is written by Michael Begon of the University of Liverpool's School of Biosciences, Colin Townsend, from the Department of Zoology of New Zealand's University of Otago, and the University of Exeter's John L. Harper. The first edition was published in 1986. This was followed in 1990 with a second edition. The third edition became available in 1996. The most recent edition appeared in 2006 under the new subtitle From Individuals to Ecosystems.
One of the book's authors, John L. Harper, is now deceased. The fourth edition cover is an image of a mural on a Wellington street created by Christopher Meech and a group of urban artists to generate thought about the topic of environmental degradation. It reads "we did not inherit the earth from our ancestors, we borrowed it from our children."
Contents
Part 1. ORGANISMS
1. Organisms in their environments: the evolutionary backdrop
2. Conditions
3. Resources
4. Life, death and life histories
5. Intraspecific competition
6. Dispersal, dormancy and metapopulations
7. Ecological applications at the level of organisms and single-species populations
Part 2. SPECIES INTERACTIONS
8. Interspecific competition
9. The nature of predation
10. The population dynamics of predation
11. Decomposers and detritivores
12. Parasitism and disease
13. Symbiosis and mutualism
14. Abundance
15. Ecological applications at the level of population interactions
Part 3. COMMUNITIES AND ECOSYSTEMS
16. The nature of the community
17.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When members of the same species compete for the same resources, it is called what?
A. interspecies competition
B. natural selection
C. intraspecific competition
D. extinction
Answer:
|
|
sciq-3484
|
multiple_choice
|
What is the term for the cell's membrane engulfing a particle from outside the cell?
|
[
"endocytosis",
"mitosis",
"metastasis",
"endometriosis"
] |
A
|
Relavent Documents:
Document 0:::
Ectoplasm (also exoplasm) (from the ancient Greek words ἐκτός (èktòs): outside and πλάσμα: plasma, literally meaning: that which has form) is the non-granulated outer part of a cell's cytoplasm, while endoplasm is its often granulated inner layer. It is clear, and protects as well as transports things within the cell. Moreover, large numbers of actin filaments frequently occur in the ectoplasm, which form an elastic support for the cell membrane.
It contains actin and myosin microfilaments.
See also
Cytoplasm
Endoplasm
Document 1:::
Endoplasm generally refers to the inner (often granulated), dense part of a cell's cytoplasm. This is opposed to the ectoplasm which is the outer (non-granulated) layer of the cytoplasm, which is typically watery and immediately adjacent to the plasma membrane. The nucleus is separated from the endoplasm by the nuclear envelope. The different makeups/viscosities of the endoplasm and ectoplasm contribute to the amoeba's locomotion through the formation of a pseudopod. However, other types of cells have cytoplasm divided into endo- and ectoplasm. The endoplasm, along with its granules, contains water, nucleic acids, amino acids, carbohydrates, inorganic ions, lipids, enzymes, and other molecular compounds. It is the site of most cellular processes as it houses the organelles that make up the endomembrane system, as well as those that stand alone. The endoplasm is necessary for most metabolic activities, including cell division.
The endoplasm, like the cytoplasm, is far from static. It is in a constant state of flux through intracellular transport, as vesicles are shuttled between organelles and to/from the plasma membrane. Materials are regularly both degraded and synthesized within the endoplasm based on the needs of the cell and/or organism. Some components of the cytoskeleton run throughout the endoplasm though most are concentrated in the ectoplasm - towards the cells edges, closer to the plasma membrane. The endoplasm's granules are suspended in cytosol.
Granules
The term granule refers to a small particle within the endoplasm, typically the secretory vesicles. The granule is the defining characteristic of the endoplasm, as they are typically not present within the ectoplasm. These offshoots of the endomembrane system are enclosed by a phospholipid bilayer and can fuse with other organelles as well as the plasma membrane. Their membrane is only semipermeable and allows them to house substances that could be harmful to the cell if they were allowed to flow fre
Document 2:::
Trans-endocytosis is the biological process where material created in one cell undergoes endocytosis (enters) into another cell. If the material is large enough, this can be observed using an electron microscope. Trans-endocytosis from neurons to glia has been observed using time-lapse microscopy.
Trans-endocytosis also applies to molecules. For example, this process is involved when a part of the protein Notch is cleaved off and undergoes endocytosis into its neighboring cell. Without Notch trans-endocytosis, there would be too many neurons in a developing embryo. Trans-endocytosis is also involved in cell movement when the protein ephrin is bound by its receptor from a neighboring cell.
Document 3:::
-Cytosis is a suffix that either refers to certain aspects of cells, i.e. a cellular process or phenomenon, or sometimes refers to the predominance of a certain type of cell. It essentially means "of the cell". Sometimes it may be shortened to -osis (necrosis, apoptosis) and may be related to some of the processes ending with -esis (e.g. diapedesis, or emperipolesis, cytokinesis) or similar suffixes.
There are three main types of cytosis: endocytosis (into the cell), exocytosis (out of the cell), and transcytosis (through the cell, in and out).
Etymology and pronunciation
The word cytosis uses combining forms of cyto- and -osis, reflecting a cellular process. The term was coined by Novikoff in 1961.
Processes related to subcellular entry or exit
Endocytosis
Endocytosis is when a cell absorbs a molecule, such as a protein, from outside the cell by engulfing it with the cell membrane. It is used by most cells, because many critical substances are large polar molecules that cannot pass through the cell membrane. The two major types of endocytosis are pinocytosis and phagocytosis.
Pinocytosis
Pinocytosis, also known as cell drinking, is the absorption of small aqueous particles along with the membrane receptors that recognize them. It is an example of fluid phase endocytosis and is usually a continuous process within the cell. The particles are absorbed through the use of clathrin-coated pits. These clathrin-coated pits are short lived and serve only to form a vesicle for transfer of particles to the lysosome. The clathrin-coated pit invaginates into the cytosol and forms a clathrin-coated vesicle. The clathrin proteins will then dissociate. What is left is known as an early endosome. The early endosome merges with a late endosome. This is the vesicle that allows the particles that were endocytosed to be transported into the lysosome. Here there are hydrolytic enzymes that will degrade the contents of the late endosome. Sometimes, rather than being degraded, the receptors t
Document 4:::
Extracellular space refers to the part of a multicellular organism outside the cells, usually taken to be outside the plasma membranes, and occupied by fluid. This is distinguished from intracellular space, which is inside the cells.
The composition of the extracellular space includes metabolites, ions, proteins, and many other substances that might affect cellular function. For example, neurotransmitters "jump" from cell to cell to facilitate the transmission of an electric current in the nervous system. Hormones also act by travelling the extracellular space towards cell receptors.
In cell biology, molecular biology and related fields, the word extracellular (or sometimes extracellular space) means "outside the cell". This space is usually taken to be outside the plasma membranes, and occupied by fluid (see extracellular matrix). The term is used in contrast to intracellular (inside the cell).
According to the Gene Ontology, the extracellular space is a cellular component defined as: "That part of a multicellular organism outside the cells proper, usually taken to be outside the plasma membranes, and occupied by fluid. For multicellular organisms, the extracellular space refers to everything outside a cell, but still within the organism (excluding the extracellular matrix). Gene products from a multi-cellular organism that are secreted from a cell into the interstitial fluid or blood can therefore be annotated to this term".
The composition of the extracellular space includes metabolites, ions, various proteins and non-protein substances (e.g. DNA, RNA, lipids, microbial products etc.), and particles such as extracellular vesicles that might affect cellular function. For example, hormones, growth factors, cytokines and chemokines act by travelling the extracellular space towards biochemical receptors on cells. Other proteins that are active outside the cell are various enzymes, including digestive enzymes (Trypsin, Pepsin), extracellular proteinases (Matrix me
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for the cell's membrane engulfing a particle from outside the cell?
A. endocytosis
B. mitosis
C. metastasis
D. endometriosis
Answer:
|
|
ai2_arc-738
|
multiple_choice
|
Students are performing an investigation to determine the types of bacteria that grow inside their school. Which activity should the students avoid while performing this investigation?
|
[
"wearing gloves while handling the samples",
"cleaning all materials they have finished using",
"bringing food and drinks into the laboratory",
"washing hands before leaving the laboratory"
] |
C
|
Relavent Documents:
Document 0:::
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States.
Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is suspended with the message:
"Please check back with us in 2017".
External links
MicrobeLibrary
Microbiology
Document 1:::
One use of the concept of biocontainment is related to laboratory biosafety and pertains to microbiology laboratories in which the physical containment of pathogenic organisms or agents (bacteria, viruses, and toxins) is required, usually by isolation in environmentally and biologically secure cabinets or rooms, to prevent accidental infection of workers or release into the surrounding community during scientific research.
Another use of the term relates to facilities for the study of agricultural pathogens, where it is used similarly to the term "biosafety", relating to safety practices and procedures used to prevent unintended infection of plants or animals or the release of high-consequence pathogenic agents into the environment (air, soil, or water).
Terminology
The World Health Organization's 2006 publication, Biorisk management: Laboratory biosecurity guidance, defines laboratory biosafety as "the containment principles, technologies and practices that are implemented to prevent the unintentional exposure to pathogens and toxins, or their accidental release". It defines biorisk management as "the analysis of ways and development of strategies to minimize the likelihood of the occurrence of biorisks".
The term "biocontainment" is related to laboratory biosafety. Merriam-Webster's online dictionary reports the first use of the term in 1966, defined as "the containment of extremely pathogenic organisms (such as viruses) usually by isolation in secure facilities to prevent their accidental release especially during research".
The term laboratory biosafety refers to the measures taken "to reduce the risk of accidental release of or exposure to infectious disease agents", whereas laboratory biosecurity is usually taken to mean "a set of systems and practices employed in legitimate bioscience facilities to reduce the risk that dangerous biological agents will be stolen and used maliciously".
Containment types
Laboratory context
Primary containment is the first
Document 2:::
A biosafety cabinet (BSC)—also called a biological safety cabinet or microbiological safety cabinet—is an enclosed, ventilated laboratory workspace for safely working with materials contaminated with (or potentially contaminated with) pathogens requiring a defined biosafety level. Several different types of BSC exist, differentiated by the degree of biocontainment they provide. BSCs first became commercially available in 1950.
Purposes
The primary purpose of a BSC is to serve as a means to protect the laboratory worker and the surrounding environment from pathogens. All exhaust air is HEPA-filtered as it exits the biosafety cabinet, removing harmful bacteria and viruses. This is in contrast to a laminar flow clean bench, which blows unfiltered exhaust air towards the user and is not safe for work with pathogenic agents. Neither are most BSCs safe for use as fume hoods. Likewise, a fume hood fails to provide the environmental protection that HEPA filtration in a BSC would provide. However, most classes of BSCs have a secondary purpose to maintain the sterility of materials inside (the "product").
Classes
The U.S. Centers for Disease Control and Prevention (CDC) classifies BSCs into three classes. These classes and the types of BSCs within them are distinguished in two ways: the level of personnel and environmental protection provided and the level of product protection provided.
Class I
Class I cabinets provide personnel and environmental protection but no product protection. In fact, the inward flow of air can contribute to contamination of samples. Inward airflow is maintained at a minimum velocity of 75 ft/min (0.38 m/s). These BSCs are commonly used to enclose specific equipment (e.g. centrifuges) or procedures (e.g. aerating cultures) that potentially generate aerosols. BSCs of this class are either ducted (connected to the building exhaust system) or unducted (recirculating filtered exhaust back into the laboratory).
Class II
Class II cabinets provide bot
Document 3:::
Aseptic sampling is the process of aseptically withdrawing materials used in biopharmaceutical processes for analysis so as not to contaminate or alter the sample or the source of the sample. Aseptic samples are drawn throughout the entire biopharmaceutical process (cell culture/fermentation, buffer & media prep, purification, final fill and finish). Analysis of the sample includes sterility, cell count/cell viability, metabolites, gases, osmolality and more.
Aseptic sampling techniques
Biopharmaceutical drug manufacturers widely use aseptic sampling devices to enhance aseptic technique. The latest innovations of sampling devices harmonize with emerging trends in disposability, enhance operating efficiencies and improve operator safety.
Turn-key aseptic sampling devices
Turn-key Aseptic Sampling Devices are ready-to-use sampling devices that require little or no equipment preparation by the users. Turn-key devices help managers reduce labor costs, estimated to represent 75% to 80% of the cost of running a biotech facility.
Turn-key aseptic sampling devices include:
A means to connect the device to the bioprocess equipment
A mechanism to aseptically access the materials held in the biopress equipment
A means to aseptically transfer the sample out of the bioprocess equipment
A vessel or container to aseptically collect the sample
A mechanism to aseptically disconnect the collection vessel
To protect the integrity of the sample and to ensure it is truly representative of the time the sample is taken, the sampling pathway should be fully contained and independent of other sampling pathways.
Cannula(needle) based aseptic sampling devices
In a cannula-based aseptic sampling system, a needle penetrates an elastomeric septum. The septum is in direct contact with the liquid so that the liquid flows out of the equipment through the needle. Iterations of this technique are used in medical device industries but don't usually include equipment combining the needle an
Document 4:::
A microbiologist (from Greek ) is a scientist who studies microscopic life forms and processes. This includes study of the growth, interactions and characteristics of microscopic organisms such as bacteria, algae, fungi, and some types of parasites and their vectors. Most microbiologists work in offices and/or research facilities, both in private biotechnology companies and in academia. Most microbiologists specialize in a given topic within microbiology such as bacteriology, parasitology, virology, or immunology.
Duties
Microbiologists generally work in some way to increase scientific knowledge or to utilise that knowledge in a way that improves outcomes in medicine or some industry. For many microbiologists, this work includes planning and conducting experimental research projects in some kind of laboratory setting. Others may have a more administrative role, supervising scientists and evaluating their results. Microbiologists working in the medical field, such as clinical microbiologists, may see patients or patient samples and do various tests to detect disease-causing organisms.
For microbiologists working in academia, duties include performing research in an academic laboratory, writing grant proposals to fund research, as well as some amount of teaching and designing courses. Microbiologists in industry roles may have similar duties, except that research is performed in industrial labs in order to develop or improve commercial products and processes. Industry jobs may also include some degree of sales and marketing work, as well as regulatory compliance duties. Microbiologists working in government may have a variety of duties, including laboratory research, writing and advising, developing and reviewing regulatory processes, and overseeing grants offered to outside institutions. Some microbiologists work in the field of patent law, either with national patent offices or private law practices. Their duties include research and navigation of intellectual proper
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Students are performing an investigation to determine the types of bacteria that grow inside their school. Which activity should the students avoid while performing this investigation?
A. wearing gloves while handling the samples
B. cleaning all materials they have finished using
C. bringing food and drinks into the laboratory
D. washing hands before leaving the laboratory
Answer:
|
|
sciq-4560
|
multiple_choice
|
The structure of the Boeing 787 has been described as essentially one giant macromolecule, where everything is fastened through cross-linked chemical bonds reinforced with this?
|
[
"carbon fiber",
"nitrogen fiber",
"non-covalent interaction",
"metal-metal bonds"
] |
A
|
Relavent Documents:
Document 0:::
A chemical bonding model is a theoretical model used to explain atomic bonding structure, molecular geometry, properties, and reactivity of physical matter. This can refer to:
VSEPR theory, a model of molecular geometry.
Valence bond theory, which describes molecular electronic structure with localized bonds and lone pairs.
Molecular orbital theory, which describes molecular electronic structure with delocalized molecular orbitals.
Crystal field theory, an electrostatic model for transition metal complexes.
Ligand field theory, the application of molecular orbital theory to transition metal complexes.
Chemical bonding
Document 1:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 2:::
In molecular biology, a scissile bond is a covalent chemical bond that can be broken by an enzyme. Examples would be the cleaved bond in the self-cleaving hammerhead ribozyme or the peptide bond of a substrate cleaved by a peptidase.
Document 3:::
A carbon–carbon bond is a covalent bond between two carbon atoms. The most common form is the single bond: a bond composed of two electrons, one from each of the two atoms. The carbon–carbon single bond is a sigma bond and is formed between one hybridized orbital from each of the carbon atoms. In ethane, the orbitals are sp3-hybridized orbitals, but single bonds formed between carbon atoms with other hybridizations do occur (e.g. sp2 to sp2). In fact, the carbon atoms in the single bond need not be of the same hybridization. Carbon atoms can also form double bonds in compounds called alkenes or triple bonds in compounds called alkynes. A double bond is formed with an sp2-hybridized orbital and a p-orbital that is not involved in the hybridization. A triple bond is formed with an sp-hybridized orbital and two p-orbitals from each atom. The use of the p-orbitals forms a pi bond.
Chains and branching
Carbon is one of the few elements that can form long chains of its own atoms, a property called catenation. This coupled with the strength of the carbon–carbon bond gives rise to an enormous number of molecular forms, many of which are important structural elements of life, so carbon compounds have their own field of study: organic chemistry.
Branching is also common in C−C skeletons. Carbon atoms in a molecule are categorized by the number of carbon neighbors they have:
A primary carbon has one carbon neighbor.
A secondary carbon has two carbon neighbors.
A tertiary carbon has three carbon neighbors.
A quaternary carbon has four carbon neighbors.
In "structurally complex organic molecules", it is the three-dimensional orientation of the carbon–carbon bonds at quaternary loci which dictates the shape of the molecule. Further, quaternary loci are found in many biologically active small molecules, such as cortisone and morphine.
Synthesis
Carbon–carbon bond-forming reactions are organic reactions in which a new carbon–carbon bond is formed. They are important in th
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The structure of the Boeing 787 has been described as essentially one giant macromolecule, where everything is fastened through cross-linked chemical bonds reinforced with this?
A. carbon fiber
B. nitrogen fiber
C. non-covalent interaction
D. metal-metal bonds
Answer:
|
|
sciq-7597
|
multiple_choice
|
What do atoms form by sharing valence electrons?
|
[
"phenotype bonds",
"ionic bonds",
"covalent bonds",
"neutron bonds"
] |
C
|
Relavent Documents:
Document 0:::
Steudel R 2020, Chemistry of the Non-metals: Syntheses - Structures - Bonding - Applications, in collaboration with D Scheschkewitz, Berlin, Walter de Gruyter, . ▲
An updated translation of the 5th German edition of 2013, incorporating the literature up to Spring 2019. Twenty-three nonmetals, including B, Si, Ge, As, Se, Te, and At but not Sb (nor Po). The nonmetals are identified on the basis of their electrical conductivity at absolute zero putatively being close to zero, rather than finite as in the case of metals. That does not work for As however, which has the electronic structure of a semimetal (like Sb).
Halka M & Nordstrom B 2010, "Nonmetals", Facts on File, New York,
A reading level 9+ book covering H, C, N, O, P, S, Se. Complementary books by the same authors examine (a) the post-transition metals (Al, Ga, In, Tl, Sn, Pb and Bi) and metalloids (B, Si, Ge, As, Sb, Te and Po); and (b) the halogens and noble gases.
Woolins JD 1988, Non-Metal Rings, Cages and Clusters, John Wiley & Sons, Chichester, .
A more advanced text that covers H; B; C, Si, Ge; N, P, As, Sb; O, S, Se and Te.
Steudel R 1977, Chemistry of the Non-metals: With an Introduction to Atomic Structure and Chemical Bonding, English edition by FC Nachod & JJ Zuckerman, Berlin, Walter de Gruyter, . ▲
Twenty-four nonmetals, including B, Si, Ge, As, Se, Te, Po and At.
Powell P & Timms PL 1974, The Chemistry of the Non-metals, Chapman & Hall, London, . ▲
Twenty-two nonmetals including B, Si, Ge, As and Te. Tin and antimony are shown as being intermediate between metals and nonmetals; they are later shown as either metals or nonmetals. Astatine is counted as a metal.
Document 1:::
An intramolecular force (or primary forces) is any force that binds together the atoms making up a molecule or compound, not to be confused with intermolecular forces, which are the forces present between molecules. The subtle difference in the name comes from the Latin roots of English with inter meaning between or among and intra meaning inside. Chemical bonds are considered to be intramolecular forces which are often stronger than intermolecular forces present between non-bonding atoms or molecules.
Types
The classical model identifies three main types of chemical bonds — ionic, covalent, and metallic — distinguished by the degree of charge separation between participating atoms. The characteristics of the bond formed can be predicted by the properties of constituent atoms, namely electronegativity. They differ in the magnitude of their bond enthalpies, a measure of bond strength, and thus affect the physical and chemical properties of compounds in different ways. The percent ionic character is directly proportional to the difference in electronegativity of the bonded atoms.
Ionic bond
An ionic bond can be approximated as complete transfer of one or more valence electrons of atoms participating in bond formation, resulting in a positive ion and a negative ion bound together by electrostatic forces. Electrons in an ionic bond tend to be mostly found around one of the two constituent atoms due to the large electronegativity difference between the two atoms, generally more than 1.9, (greater difference in electronegativity results in a stronger bond); this is often described as one atom giving electrons to the other. This type of bond is generally formed between a metal and nonmetal, such as sodium and chlorine in NaCl. Sodium would give an electron to chlorine, forming a positively charged sodium ion and a negatively charged chloride ion.
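As a rough illustration of the electronegativity criterion described above, the short Python sketch below classifies a bond from the electronegativity difference of its two atoms. The 1.9 cutoff for ionic character comes from the text; the 0.4 cutoff separating nonpolar from polar covalent bonds is a commonly used rule of thumb and, like the example electronegativity values, is an assumption made here for illustration.
# Rough bond-type classifier based on electronegativity difference (Pauling scale).
# The ionic cutoff (1.9) follows the text above; the nonpolar/polar cutoff (0.4)
# and the element values below are illustrative assumptions.
PAULING = {"Na": 0.93, "Cl": 3.16, "H": 2.20, "O": 3.44, "C": 2.55}

def bond_type(elem_a: str, elem_b: str) -> str:
    delta = abs(PAULING[elem_a] - PAULING[elem_b])
    if delta > 1.9:
        return f"ionic (difference = {delta:.2f})"
    elif delta > 0.4:
        return f"polar covalent (difference = {delta:.2f})"
    return f"nonpolar covalent (difference = {delta:.2f})"

print(bond_type("Na", "Cl"))  # ionic, as in NaCl
print(bond_type("H", "O"))    # polar covalent
print(bond_type("C", "H"))    # nonpolar covalent (borderline)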
Covalent bond
In a true covalent bond, the electrons are shared evenly between the two atoms of the bond; there is little or no charge separa
Document 2:::
A carbon–carbon bond is a covalent bond between two carbon atoms. The most common form is the single bond: a bond composed of two electrons, one from each of the two atoms. The carbon–carbon single bond is a sigma bond and is formed between one hybridized orbital from each of the carbon atoms. In ethane, the orbitals are sp3-hybridized orbitals, but single bonds formed between carbon atoms with other hybridizations do occur (e.g. sp2 to sp2). In fact, the carbon atoms in the single bond need not be of the same hybridization. Carbon atoms can also form double bonds in compounds called alkenes or triple bonds in compounds called alkynes. A double bond is formed with an sp2-hybridized orbital and a p-orbital that is not involved in the hybridization. A triple bond is formed with an sp-hybridized orbital and two p-orbitals from each atom. The use of the p-orbitals forms a pi bond.
Chains and branching
Carbon is one of the few elements that can form long chains of its own atoms, a property called catenation. This coupled with the strength of the carbon–carbon bond gives rise to an enormous number of molecular forms, many of which are important structural elements of life, so carbon compounds have their own field of study: organic chemistry.
Branching is also common in C−C skeletons. Carbon atoms in a molecule are categorized by the number of carbon neighbors they have:
A primary carbon has one carbon neighbor.
A secondary carbon has two carbon neighbors.
A tertiary carbon has three carbon neighbors.
A quaternary carbon has four carbon neighbors.
In "structurally complex organic molecules", it is the three-dimensional orientation of the carbon–carbon bonds at quaternary loci which dictates the shape of the molecule. Further, quaternary loci are found in many biologically active small molecules, such as cortisone and morphine.
Synthesis
Carbon–carbon bond-forming reactions are organic reactions in which a new carbon–carbon bond is formed. They are important in th
Document 3:::
Stannide ions,
Some examples of stannide Zintl ions are listed below. Some of them contain 2-centre 2-electron bonds (2c-2e), others are "electron deficient" and bonding sometimes can be described using polyhedral skeletal electron pair theory (Wade's rules) where the number of valence electrons contributed by each tin atom is considered to be 2 (the s electrons do not contribute). There are some examples of silicide and plumbide ions with similar structures, for example tetrahedral , the chain anion (Si2−)n, and .
Sn4− found for example in Mg2Sn.
, tetrahedral with 2c-2e bonds e.g. in CsSn.
, tetrahedral closo-cluster with 10 electrons (2n + 2).
(Sn2−)n zig-zag chain polymeric anion with 2c-2e bonds found for example in BaSn.
closo-
Document 4:::
The cubical atom was an early atomic model in which electrons were positioned at the eight corners of a cube in a non-polar atom or molecule. This theory was developed in 1902 by Gilbert N. Lewis and published in 1916 in the article "The Atom and the Molecule" and used to account for the phenomenon of valency.
Lewis' theory was based on Abegg's rule. It was further developed in 1919 by Irving Langmuir as the cubical octet atom. The figure below shows structural representations for elements of the second row of the periodic table.
Although the cubical model of the atom was soon abandoned in favor of the quantum mechanical model based on the Schrödinger equation, and is therefore now principally of historical interest, it represented an important step towards the understanding of the chemical bond. The 1916 article by Lewis also introduced the concept of the electron pair in the covalent bond, the octet rule, and the now-called Lewis structure.
Bonding in the cubical atom model
Single covalent bonds are formed when two atoms share an edge, as in structure C below. This results in the sharing of two electrons. Ionic bonds are formed by the transfer of an electron from one cube to another without sharing an edge (structure A). An intermediate state where only one corner is shared (structure B) was also postulated by Lewis.
Double bonds are formed by sharing a face between two cubic atoms. This results in sharing four electrons:
Triple bonds could not be accounted for by the cubical atom model, because there is no way of having two cubes share three parallel edges. Lewis suggested that the electron pairs in atomic bonds have a special attraction, which result in a tetrahedral structure, as in the figure below (the new location of the electrons is represented by the dotted circles in the middle of the thick edges). This allows the formation of a single bond by sharing a corner, a double bond by sharing an edge, and a triple bond by sharing a face. It also accounts
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do atoms form by sharing valence electrons?
A. phenotype bonds
B. ionic bonds
C. covalent bonds
D. neutron bonds
Answer:
|
|
sciq-5733
|
multiple_choice
|
Sucrose does not undergo reactions that are typical of aldehydes and ketones, therefore it is a nonreducing what?
|
[
"juice",
"wheat",
"sugar",
"salt"
] |
C
|
Relavent Documents:
Document 0:::
A reducing sugar is any sugar that is capable of acting as a reducing agent. In an alkaline solution, a reducing sugar forms some aldehyde or ketone, which allows it to act as a reducing agent, for example in Benedict's reagent. In such a reaction, the sugar becomes a carboxylic acid.
All monosaccharides are reducing sugars, along with some disaccharides, some oligosaccharides, and some polysaccharides. The monosaccharides can be divided into two groups: the aldoses, which have an aldehyde group, and the ketoses, which have a ketone group. Ketoses must first tautomerize to aldoses before they can act as reducing sugars. The common dietary monosaccharides galactose, glucose and fructose are all reducing sugars.
Disaccharides are formed from two monosaccharides and can be classified as either reducing or nonreducing. Nonreducing disaccharides like sucrose and trehalose have glycosidic bonds between their anomeric carbons and thus cannot convert to an open-chain form with an aldehyde group; they are stuck in the cyclic form. Reducing disaccharides like lactose and maltose have only one of their two anomeric carbons involved in the glycosidic bond, while the other is free and can convert to an open-chain form with an aldehyde group.
The aldehyde functional group allows the sugar to act as a reducing agent, for example, in the Tollens' test or Benedict's test. The cyclic hemiacetal forms of aldoses can open to reveal an aldehyde, and certain ketoses can undergo tautomerization to become aldoses. However, acetals, including those found in polysaccharide linkages, cannot easily become free aldehydes.
Reducing sugars react with amino acids in the Maillard reaction, a series of reactions that occurs while cooking food at high temperatures and that is important in determining the flavor of food. Also, the levels of reducing sugars in wine, juice, and sugarcane are indicative of the quality of these food products.
Terminology
Oxidation-reduction
A reducing sugar is on
Document 1:::
Sucrose octapropionate is a chemical compound with formula or , an eight-fold ester of sucrose and propionic acid. Its molecule can be described as that of sucrose with its eight hydroxyl groups – replaced by propionate groups –. It is a crystalline colorless solid. It is also called sucrose octapropanoate or octapropionyl sucrose.
History
The preparation of sucrose octapropionate was first described in 1933 by Gerald J. Cox and others.
Preparation
The compound can be prepared by the reaction of sucrose with propionic anhydride in the melt state or at room temperature, over several days, in anhydrous pyridine.
Properties
Sucrose octapropionate is only slightly soluble in water (less than 0.1 g/L) but is soluble in many common organic solvents such as isopropanol and ethanol, from which it can be crystallized by evaporation of the solvent.
The crystalline form melts at 45.4–45.5 °C into a viscous liquid (47.8 poises at 48.9 °C), that becomes a clear glassy solid on cooling, but easily recrystallizes.
The density of the glassy form is 1.185 kg/L (at 20 °C). It is an optically active compound with [α]20D +53°.
The compound can be vacuum distilled at 280–290 °C and 0.05 to 0.07 torr.
Applications
Distillation of fully esterified propionates has been proposed as a method for the separation and identification of sugars.
While the crystallinity of the pure compound prevents its use as a plasticizer it was found that incompletely esterified variants (with 1 to 2 remaining hydroxyls per molecule) will not crystallize, and therefore can be considered for that application.
See also
Sucrose octaacetate
Document 2:::
Sucroglycerides are substances used in the manufacture of food. They are known in the E number scheme as E474.
Synopsis
Sucroglycerides have been known at least since 1963.
Sucroglycerides are obtained through a reaction between sucrose and an edible oil or fat, and consist of a mixture of mono- and di-esters of sucrose and fatty acids, and mono- and diglycerides. They are immiscible with water, so some solvents may be necessary to produce them. These are limited to dimethyl formamide, cyclohexane, isobutanol, isopropanol and ethyl acetate.
Sucroglycerides are employed as an emulsifier, stabiliser and thickener, and may be used in dairy based drinks, such as chocolate milk, eggnog, drinking yoghurt, beverage whiteners, or in dairy based desserts such as ice cream, yoghurt, sorbets, fruit based desserts, cocoa mixes, chewing gum, rice pudding or tapioca pudding. Processed meat, egg based desserts like custard, soups and broths, sauces also may be treated with sucroglycerides.
Goops and supplements for "weight reduction", infants or youth, "sport" or "electrolyte" drinks, and particulated drinks like cider, fruit wine, mead or spirituous drinks may also be treated with sucroglycerides.
Document 3:::
Sucrase is a digestive enzyme that catalyzes the hydrolysis of sucrose to its subunits fructose and glucose. One form, sucrase-isomaltase, is secreted in the small intestine on the brush border. The sucrase enzyme invertase, which occurs more commonly in plants, also hydrolyzes sucrose but by a different mechanism.
Types
is isomaltase
is invertase
is sucrose alpha-glucosidase
Physiology
Sucrose intolerance (also known as congenital sucrase-isomaltase deficiency (CSID), genetic sucrase-isomaltase deficiency (GSID), or sucrase-isomaltase deficiency) occurs when sucrase is not being secreted in the small intestine. With sucrose intolerance, the result of consuming sucrose is excess gas production and often diarrhea and malabsorption. Lactose intolerance is a related disorder that reflects an individual's inability to hydrolyze the disaccharide lactose.
Sucrase is secreted by the tips of the villi of the epithelium in the small intestine. Its levels are reduced in response to villi-blunting events such as celiac sprue and the inflammation associated with the disorder. The levels increase in pregnancy, lactation, and diabetes as the villi hypertrophy.
Use in chemical analysis
Sucrose is a non-reducing sugar, so it will not test positive with Benedict's solution. To test for sucrose, the sample is treated with sucrase. The sucrose is hydrolysed into glucose and fructose, with glucose being a reducing sugar, which in turn tests positive with Benedict's solution.
In other species
Cedar waxwings (Bombycilla cedrorum) and American robins (Turdus migratorius) have evolved to lose this enzyme due to their insectivorous and frugivorous diets. This absence produces digestive difficulty if challenged with unusual amounts of the sugar.
Document 4:::
Added sugars or free sugars are sugar carbohydrates (caloric sweeteners) added to food and beverages at some point before their consumption. These include added carbohydrates (monosaccharides and disaccharides), and more broadly, sugars naturally present in honey, syrup, fruit juices and fruit juice concentrates. They can take multiple chemical forms, including sucrose (table sugar), glucose (dextrose), and fructose.
Medical consensus holds that added sugars contribute little nutritional value to food, leading to a colloquial description as "empty calories". Overconsumption of sugar is correlated with excessive calorie intake and increased risk of weight gain and various diseases.
Uses
United States
In the United States, added sugars may include sucrose or high-fructose corn syrup, both primarily composed of about half glucose and half fructose. Other types of added sugar ingredients include beet and cane sugars, malt syrup, maple syrup, pancake syrup, fructose sweetener, liquid fructose, fruit juice concentrate, honey, and molasses. The most common types of foods containing added sugars are sweetened beverages, including most soft drinks, and also desserts and sweet snacks, which represent 20% of daily calorie consumption, twice the recommendation of the World Health Organization (WHO). Based on a 2012 study on the use of caloric and noncaloric sweeteners in some 85,000 food and beverage products, 74% of the products contained added sugar.
Sweetened beverages
Sweetened beverages contain a syrup mixture of the monosaccharides glucose and fructose formed by hydrolytic saccharification of the disaccharide sucrose. The bioavailability of liquid carbohydrates is higher than in solid sugars, as characterized by sugar type and by the estimated rate of digestion. There is evidence for a positive and causal relationship between excessive intake of fruit juices and increased risk of some chronic metabolic diseases.
Guidelines
World Health Organization
In 2003, the
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Sucrose does not undergo reactions that are typical of aldehydes and ketones, therefore it is a nonreducing what?
A. juice
B. wheat
C. sugar
D. salt
Answer:
|
|
sciq-2001
|
multiple_choice
|
What term is used to describe the amount of space occupied by a sample of matter?
|
[
"mass",
"liquid",
"growth",
"volume"
] |
D
|
Relavent Documents:
Document 0:::
The surface-area-to-volume ratio or surface-to-volume ratio (denoted as SA:V, SA/V, or sa/vol) is the ratio between surface area and volume of an object or collection of objects.
SA:V is an important concept in science and engineering. It is used to explain the relation between structure and function in processes occurring through the surface and the volume. Good examples for such processes are processes governed by the heat equation, that is, diffusion and heat transfer by thermal conduction. SA:V is used to explain the diffusion of small molecules, like oxygen and carbon dioxide between air, blood and cells, water loss by animals, bacterial morphogenesis, organism's thermoregulation, design of artificial bone tissue, artificial lungs and many more biological and biotechnological structures. For more examples see Glazier.
The relation between SA:V and diffusion or heat conduction rate is explained from flux and surface perspective, focusing on the surface of a body as the place where diffusion, or heat conduction, takes place, i.e., the larger the SA:V there is more surface area per unit volume through which material can diffuse, therefore, the diffusion or heat conduction, will be faster. Similar explanation appears in the literature: "Small size implies a large ratio of surface area to volume, thereby helping to maximize the uptake of nutrients across the plasma membrane", and elsewhere.
For a given volume, the object with the smallest surface area (and therefore with the smallest SA:V) is a ball, a consequence of the isoperimetric inequality in 3 dimensions. By contrast, objects with acute-angled spikes will have very large surface area for a given volume.
For solid spheres
A solid sphere or ball is a three-dimensional object, being the solid figure bounded by a sphere. (In geometry, the term sphere properly refers only to the surface, so a sphere thus lacks volume in this context.)
For an ordinary three-dimensional ball, the SA:V can be calculated from the standard formulas for the surface area (4πr²) and volume ((4/3)πr³) of a sphere, giving SA:V = 3/r.
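A minimal Python sketch of this calculation, assuming only the standard sphere formulas, confirms that the ratio falls off as 1/r:
import math

def sphere_sa_to_v(radius: float) -> float:
    """Surface-area-to-volume ratio of a solid sphere of the given radius."""
    surface_area = 4 * math.pi * radius ** 2
    volume = (4 / 3) * math.pi * radius ** 3
    return surface_area / volume  # algebraically equal to 3 / radius

for r in (0.5, 1.0, 2.0, 10.0):
    print(f"r = {r:5.1f}  SA:V = {sphere_sa_to_v(r):.3f}  (3/r = {3 / r:.3f})")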
Document 1:::
In chemistry and related fields, the molar volume, symbol Vm, of a substance is the ratio of the volume occupied by a substance to the amount of substance, usually given at a given temperature and pressure. It is equal to the molar mass (M) divided by the mass density (ρ):
Vm = M / ρ
The molar volume has the SI unit of cubic metres per mole (m3/mol), although it is more typical to use the units cubic decimetres per mole (dm3/mol) for gases, and cubic centimetres per mole (cm3/mol) for liquids and solids.
Definition
The molar volume of a substance i is defined as its molar mass divided by its density ρi0:
Vm,i = Mi / ρi0
For an ideal mixture containing N components, the molar volume of the mixture is the weighted sum of the molar volumes of its individual components. For a real mixture the molar volume cannot be calculated without knowing the density:
There are many liquid–liquid mixtures, for instance mixing pure ethanol and pure water, which may experience contraction or expansion upon mixing. This effect is represented by the quantity excess volume of the mixture, an example of excess property.
Relation to specific volume
Molar volume is related to specific volume by the product with molar mass. This follows from above where the specific volume is the reciprocal of the density of a substance:
Ideal gases
For ideal gases, the molar volume is given by the ideal gas equation; this is a good approximation for many common gases at standard temperature and pressure.
The ideal gas equation can be rearranged to give an expression for the molar volume of an ideal gas:
Vm = V / n = RT / P
Hence, for a given temperature and pressure, the molar volume is the same for all ideal gases and is based on the gas constant: R = 8.314462618 J K−1 mol−1, or about 8.206 × 10−5 m3 atm K−1 mol−1.
The molar volume of an ideal gas at 100 kPa (1 bar) is
22.711 dm3/mol at 0 °C,
24.790 dm3/mol at 25 °C.
The molar volume of an ideal gas at 1 atmosphere of pressure is
22.414 dm3/mol at 0 °C,
24.466 dm3/mol at 25 °C.
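A short Python check, assuming the ideal gas relation Vm = RT/P quoted above, reproduces these values:
R = 8.314462618  # gas constant, J/(mol*K)

def molar_volume(temp_c: float, pressure_pa: float) -> float:
    """Ideal-gas molar volume in dm3/mol (litres per mole)."""
    temp_k = temp_c + 273.15
    vm_m3 = R * temp_k / pressure_pa  # m3/mol
    return vm_m3 * 1000.0             # dm3/mol

for label, p in (("100 kPa", 100_000.0), ("1 atm", 101_325.0)):
    for t in (0.0, 25.0):
        print(f"{label}, {t:>4.0f} °C: Vm = {molar_volume(t, p):.3f} dm3/mol")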
Crystalline solids
For crystalline solids, the molar volume can be measured by X-ray crystallography.
The unit cell
Document 2:::
Swelling index may refer to the following material parameters that quantify volume change:
Crucible swelling index, also known as free swelling index, in coal assay
Swelling capacity, the amount of a liquid that can be absorbed by a polymer
Shrink–swell capacity in soil mechanics
Unload-reload constant (κ) in critical state soil mechanics
Mechanics
Materials science
Document 3:::
Vapour density is the density of a vapour in relation to that of hydrogen. It may be defined as mass of a certain volume of a substance divided by mass of same volume of hydrogen.
vapour density = mass of n molecules of gas / mass of n molecules of hydrogen gas .
vapour density = molar mass of gas / molar mass of H2
vapour density = molar mass of gas / 2.016
vapour density = (1/2.016) × molar mass
(and thus: molar mass = ~2 × vapour density)
For example, the vapour density of a mixture of NO2 and N2O4 is 38.3. Vapour density is a dimensionless quantity.
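As a quick illustration of the definition, the Python sketch below converts molar masses to vapour densities and backs out the mixture composition implied by the quoted value of 38.3. The molar masses are standard, but treating the NO2/N2O4 mixture as ideal and inferring its composition this way is an assumption made here for illustration.
M_H2 = 2.016     # g/mol
M_NO2 = 46.006   # g/mol
M_N2O4 = 92.011  # g/mol

def vapour_density(molar_mass: float) -> float:
    """Vapour density relative to hydrogen."""
    return molar_mass / M_H2

print(f"NO2 : {vapour_density(M_NO2):.1f}")   # about 22.8
print(f"N2O4: {vapour_density(M_N2O4):.1f}")  # about 45.6

# Mean molar mass implied by a measured vapour density of 38.3,
# and the N2O4 mole fraction that would give it (ideal mixing assumed).
mean_m = 38.3 * M_H2
x_n2o4 = (mean_m - M_NO2) / (M_N2O4 - M_NO2)
print(f"mean M = {mean_m:.1f} g/mol, x(N2O4) = {x_n2o4:.2f}")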
Alternative definition
In many web sources, particularly in relation to safety considerations at commercial and industrial facilities in the U.S., vapour density is defined with respect to air, not hydrogen. Air is given a vapour density of one. For this use, air has a molecular weight of 28.97 atomic mass units, and all other gas and vapour molecular weights are divided by this number to derive their vapour density. For example, acetone has a vapour density of 2 in relation to air. That means acetone vapour is twice as heavy as air. This can be seen by dividing the molecular weight of acetone, 58.1, by that of air, 28.97, which gives approximately 2.
With this definition, the vapour density would indicate whether a gas is denser (greater than one) or less dense (less than one) than air. The density has implications for container storage and personnel safety—if a container can release a dense gas, its vapour could sink and, if flammable, collect until it is at a concentration sufficient for ignition. Even if not flammable, it could collect in the lower floor or level of a confined space and displace air, possibly presenting an asphyxiation hazard to individuals entering the lower part of that space.
See also
Relative density (also known as specific gravity)
Victor Meyer apparatus
Document 4:::
In physics and mechanics, mass distribution is the spatial distribution of mass within a solid body. In principle, it is relevant also for gases or liquids, but on Earth their mass distribution is almost homogeneous.
Astronomy
In astronomy mass distribution has decisive influence on the development e.g. of nebulae, stars and planets.
The mass distribution of a solid defines its center of gravity and influences its dynamical behaviour - e.g. the oscillations and eventual rotation.
Mathematical modelling
A mass distribution can be modeled as a measure. This allows point masses, line masses, surface masses, as well as masses given by a volume density function. Alternatively the latter can be generalized to a distribution. For example, a point mass is represented by a delta function defined in 3-dimensional space. A surface mass on a surface given by the equation may be represented by a density distribution , where is the mass per unit area.
The mathematical modelling can be done by potential theory, by numerical methods (e.g. a great number of mass points), or by theoretical equilibrium figures.
Geology
In geology the aspects of rock density are involved.
Rotating solids
Rotating solids are affected considerably by the mass distribution, either if they are homogeneous or inhomogeneous - see Torque, moment of inertia, wobble, imbalance and stability.
See also
Bouguer plate
Gravity
Mass function
Mass concentration (astronomy)
External links
Mass distribution of the Earth
Mechanics
Celestial mechanics
Geophysics
Mass
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What term is used to describe the amount of space occupied by a sample of matter?
A. mass
B. liquid
C. growth
D. volume
Answer:
|
|
sciq-1466
|
multiple_choice
|
What occurs where the water motion slows?
|
[
"erosion",
"diffusion",
"deposition",
"vapor"
] |
C
|
Relavent Documents:
Document 0:::
Shear velocity, also called friction velocity, is a form by which a shear stress may be re-written in units of velocity. It is useful as a method in fluid mechanics to compare true velocities, such as the velocity of a flow in a stream, to a velocity that relates shear between layers of flow.
Shear velocity is used to describe shear-related motion in moving fluids. It is used to describe:
Diffusion and dispersion of particles, tracers, and contaminants in fluid flows
The velocity profile near the boundary of a flow (see Law of the wall)
Transport of sediment in a channel
Shear velocity also helps in thinking about the rate of shear and dispersion in a flow. Shear velocity scales well to rates of dispersion and bedload sediment transport. A general rule is that the shear velocity is between 5% and 10% of the mean flow velocity.
For river base case, the shear velocity can be calculated by Manning's equation.
n is the Gauckler–Manning coefficient. Units for values of n are often left off; however, it is not dimensionless, having units of T/[L^(1/3)] (s/[ft^(1/3)]; s/[m^(1/3)]).
Rh is the hydraulic radius (L; ft, m);
the role of a is a dimension correction factor. Thus a = 1 m^(1/3)/s = 1.49 ft^(1/3)/s.
Instead of finding n and Rh for the specific river of interest, the range of possible values can be examined; for most rivers, u* is between 5% and 10% of the mean flow velocity ū.
For the general case,
u* = √(τ / ρ),
where τ is the shear stress in an arbitrary layer of fluid and ρ is the density of the fluid.
Typically, for sediment transport applications, the shear velocity is evaluated at the lower boundary of an open channel:
u* = √(τb / ρ),
where τb is the shear stress given at the boundary.
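A small Python sketch of this boundary calculation, using hypothetical values for the bed shear stress, water density, and mean flow velocity, also checks the 5% to 10% rule of thumb mentioned above:
import math

def shear_velocity(tau_b: float, rho: float) -> float:
    """Shear (friction) velocity u* = sqrt(tau_b / rho), in m/s."""
    return math.sqrt(tau_b / rho)

# Hypothetical example values for a small stream.
tau_b = 2.5    # boundary shear stress, Pa
rho = 1000.0   # water density, kg/m3
u_mean = 0.7   # mean flow velocity, m/s

u_star = shear_velocity(tau_b, rho)
print(f"u* = {u_star:.3f} m/s ({100 * u_star / u_mean:.1f}% of the mean velocity)")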
Shear velocity is linked to the Darcy friction factor by equating wall shear stress, giving:
u* = ū √(fD / 8),
where fD is the friction factor and ū is the mean flow velocity.
Shear velocity can also be defined in terms of the local velocity and shear stress fields (as opposed to whole-channel values, as given above).
Friction velocity in turbulence
The friction velocity is often used as a
Document 1:::
Rheometry () generically refers to the experimental techniques used to determine the rheological properties of materials, that is the qualitative and quantitative relationships between stresses and strains and their derivatives. The techniques used are experimental. Rheometry investigates materials in relatively simple flows like steady shear flow, small amplitude oscillatory shear, and extensional flow.
The choice of the adequate experimental technique depends on the rheological property which has to be determined. This can be the steady shear viscosity, the linear viscoelastic properties (complex viscosity respectively elastic modulus), the elongational properties, etc.
For all real materials, the measured property will be a function of the flow conditions during which it is being measured (shear rate, frequency, etc.) even if for some materials this dependence is vanishingly low under given conditions (see Newtonian fluids).
Rheometry is a specific concern for smart fluids such as electrorheological fluids and magnetorheological fluids, as it is the primary method to quantify the useful properties of these materials.
Rheometry is considered useful in the fields of quality control, process control, and industrial process modelling, among others. For some, the techniques, particularly the qualitative rheological trends, can yield the classification of materials based on the main interactions between different possible elementary components and how they qualitatively affect the rheological behavior of the materials. Novel applications of these concepts include measuring cell mechanics in thin layers, especially in drug screening contexts.
Of non-Newtonian fluids
The viscosity of a non-Newtonian fluid is defined by a power law:
η = η0 γ^(n−1),
where η is the viscosity after shear is applied, η0 is the initial viscosity, γ is the shear rate, and n is the power-law index; if
n < 1, the fluid is shear thinning,
n > 1, the fluid is shear thickening,
n = 1, the fluid is Newtonian.
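The Python sketch below evaluates this power-law model and applies the classification by exponent; the numerical parameters are illustrative assumptions only:
def power_law_viscosity(gamma_dot: float, eta0: float, n: float) -> float:
    """Apparent viscosity eta = eta0 * gamma_dot**(n - 1) for a power-law fluid."""
    return eta0 * gamma_dot ** (n - 1)

def classify(n: float) -> str:
    if n < 1:
        return "shear thinning"
    if n > 1:
        return "shear thickening"
    return "Newtonian"

# Illustrative parameters: eta0 in Pa*s, shear rate in 1/s.
for eta0, n in ((5.0, 0.5), (0.01, 1.5), (0.001, 1.0)):
    eta = power_law_viscosity(gamma_dot=100.0, eta0=eta0, n=n)
    print(f"n = {n:3.1f} ({classify(n):16s}) eta at 100 1/s = {eta:.4f} Pa*s")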
In rheometry, shear forces are applied t
Document 2:::
Dispersive mass transfer, in fluid dynamics, is the spreading of mass from highly concentrated areas to less concentrated areas. It is one form of mass transfer.
Dispersive mass flux is analogous to diffusion, and it can also be described using Fick's first law:
J = −E (dc/dx),
where c is mass concentration of the species being dispersed, E is the dispersion coefficient, and x is the position in the direction of the concentration gradient. Dispersion can be differentiated from diffusion in that it is caused by non-ideal flow patterns (i.e. deviations from plug flow) and is a macroscopic phenomenon, whereas diffusion is caused by random molecular motions (i.e. Brownian motion) and is a microscopic phenomenon. Dispersion is often more significant than diffusion in convection-diffusion problems. The dispersion coefficient is frequently modeled as the product of the fluid velocity, U, and some characteristic length scale, α:
E = αU
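A minimal numerical sketch of these two relations in Python, with purely illustrative values for the velocity, length scale, and concentration gradient:
# Dispersion coefficient modelled as E = alpha * U, and dispersive flux J = -E * dc/dx.
# All numbers below are illustrative assumptions, not measured values.
U = 0.2       # mean fluid velocity, m/s
alpha = 0.05  # characteristic length scale (dispersivity), m
dc_dx = -4.0  # concentration gradient, (kg/m3) per m

E = alpha * U   # dispersion coefficient, m2/s
J = -E * dc_dx  # dispersive mass flux, kg/(m2*s)
print(f"E = {E:.3f} m2/s, J = {J:.3f} kg/(m2*s)")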
Transport phenomena
Document 3:::
In soil science, Horton overland flow describes the tendency of water to flow horizontally across land surfaces when rainfall has exceeded infiltration capacity and depression storage capacity. It is named after Robert E. Horton, the engineer who made the first detailed studies of the phenomenon.
Paved surfaces such as asphalt, which are designed to be flat and impermeable, rapidly achieve Horton overland flow. It is shallow, sheetlike, and fast-moving, and hence capable of extensively eroding soil and bedrock.
Horton overland flow is most commonly encountered in urban construction sites and unpaved rural roads, where vegetation has been stripped away, exposing bare dirt. The process also poses a significant problem in areas with steep terrain, where water can build up great speed and where soil is less stable, and in farmlands, where soil is flat and loose.
See also
Horton's equation
Urban runoff
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What occurs where the water motion slows?
A. erosion
B. diffusion
C. deposition
D. vapor
Answer:
|
|
sciq-3052
|
multiple_choice
|
What is a key factor in the growth of populations?
|
[
"assimilation",
"immigration",
"legislation",
"gentrification"
] |
B
|
Relavent Documents:
Document 0:::
Diversity Explosion: How New Racial Demographics are Remaking America is a 2014 non-fiction book by William H. Frey.
A look into how racial and ethnic diversity and changing demographics are altering the United States, Diversity Explosion is published and distributed by the Brookings Institution Press.
Frey is a senior fellow at the Brookings Institution Metropolitan Policy Program.
Document 1:::
Observations Concerning the Increase of Mankind, Peopling of Countries, etc. is a short essay written in 1751 by American polymath Benjamin Franklin. It was circulated by Franklin in manuscript to his circle of friends, but in 1755 it was published as an addendum in a Boston pamphlet on another subject. It was reissued ten times during the next 15 years.
The essay examines population growth and its limits. Writing as, at the time, a loyal subject of the British Crown, Franklin argues that the British should increase their population and power by expanding across the Americas, taking the view that Europe is too crowded.
Content
Franklin projected an exponential growth (doubling every 25 years) in the population of the Thirteen Colonies, so that in a century "the greatest Number of Englishmen will be on this Side of the Water", thereby increasing the power of England. As Englishmen they would share language, manners, and religion with their countrymen in England, thus extending English civilization and English rule substantially.
Franklin viewed the land in America as underutilized and available for the expansion of farming. This enabled the population to establish households at an earlier age and support larger families than was possible in Europe. The limit to expansion, reached in Europe but not America, is reached when the "crowding and interfering with each other's means of subsistence", an idea that would inspire Malthus.
Historian Walter Isaacson writes that Franklin's theory was empirically based on the population data during his day. Franklin's reasoning was essentially correct in that America's population continued to double every twenty years, surpassing England's population in the 1850s, and continued until the frontier era ended in the early 1900s. According to the United States Census, from 1750 to 1900, the population of colonial and continental America overall doubled every twenty five years, correctly aligning with Franklin's prediction.
Protect
Document 2:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 3:::
Economic restructuring is used to indicate changes in the constituent parts of an economy in a very general sense. In the western world, it is usually used to refer to the phenomenon of urban areas shifting from a manufacturing to a service sector economic base. It has profound implications for productive capacities and competitiveness of cities and regions. This transformation has affected demographics including income distribution, employment, and social hierarchy; institutional arrangements including the growth of the corporate complex, specialized producer services, capital mobility, informal economy, nonstandard work, and public outlays; as well as geographic spacing including the rise of world cities, spatial mismatch, and metropolitan growth differentials.
Demographic impact
As cities experience a loss of manufacturing jobs and growth of services, sociologist Saskia Sassen affirms that a widening of the social hierarchy occurs where high-level, high-income, salaried professional jobs expands in the service industries alongside a greater incidence of low-wage, low-skilled jobs, usually filled by immigrants and minorities. A "missing middle" eventually develops in the wage structure. Several effects of this social polarization include the increasing concentration of poverty in large U.S. cities, the increasing concentration of black and Hispanic populations in large U.S. cities, and distinct social forms such as the underclass, informal economy, and entrepreneurial immigrant communities. In addition, the declining manufacturing sector leaves behind strained blue-collared workers who endure chronic unemployment, economic insecurity, and stagnation due to the global economy's capital flight. Wages and unionization rates for manufacturing jobs also decline. One other qualitative dimension involves the feminization of the job supply as more and more women enter the labor force usually in the service sector.
Both costs and benefits are associated with economic re
Document 4:::
The book An Essay on the Principle of Population was first published anonymously in 1798, but the author was soon identified as Thomas Robert Malthus. The book warned of future difficulties, on an interpretation of the population increasing in geometric progression (so as to double every 25 years) while food production increased in an arithmetic progression, which would leave a difference resulting in the want of food and famine, unless birth rates decreased.
While it was not the first book on population, Malthus's book fuelled debate about the size of the population in Britain and contributed to the passing of the Census Act 1800. This Act enabled the holding of a national census in England, Wales and Scotland, starting in 1801 and continuing every ten years to the present. The book's 6th edition (1826) was independently cited as a key influence by both Charles Darwin and Alfred Russel Wallace in developing the theory of natural selection.
A key portion of the book was dedicated to what is now known as the Malthusian Law of Population. The theory claims that growing population rates contribute to a rising supply of labour and inevitably lowers wages. In essence, Malthus feared that continued population growth lends itself to poverty.
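To make the contrast concrete, the short Python sketch below compares a population doubling every 25 years (geometric progression, as described above) with food production growing by a fixed increment over each 25-year period (arithmetic progression); the starting values and the size of the arithmetic increment are purely illustrative assumptions.
population = 1.0      # arbitrary starting units
food = 1.0            # food supply, same arbitrary units
food_increment = 1.0  # fixed gain per 25-year period (assumed)

print("years  population  food  food/population")
for period in range(0, 9):  # 0 to 200 years in 25-year steps
    years = period * 25
    print(f"{years:5d}  {population:10.1f}  {food:4.1f}  {food / population:5.2f}")
    population *= 2          # geometric growth: doubles each period
    food += food_increment   # arithmetic growth: fixed increment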
In 1803, Malthus published, under the same title, a heavily revised second edition of his work. His final version, the 6th edition, was published in 1826. In 1830, 32 years after the first edition, Malthus published a condensed version entitled A Summary View on the Principle of Population, which included responses to criticisms of the larger work.
Overview
Between 1798 and 1826 Malthus published six editions of his famous treatise, updating each edition to incorporate new material, to address criticism, and to convey changes in his own perspectives on the subject. He wrote the original text in reaction to the optimism of his father and his father's associates (notably Rousseau) regarding the future improvement of society. Malthu
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a key factor in the growth of populations?
A. assimilation
B. immigration
C. legislation
D. gentrification
Answer:
|
|
sciq-7622
|
multiple_choice
|
For the nervous system to function, neurons must be able to send and receive what?
|
[
"signals",
"pulses",
"information",
"proteins"
] |
A
|
Relavent Documents:
Document 0:::
The sensory nervous system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory neurons (including the sensory receptor cells), neural pathways, and parts of the brain involved in sensory perception and interoception. Commonly recognized sensory systems are those for vision, hearing, touch, taste, smell, balance and visceral sensation. Sense organs are transducers that convert data from the outer physical world to the realm of the mind where people interpret the information, creating their perception of the world around them.
The receptive field is the area of the body or environment to which a receptor organ and receptor cells respond. For instance, the part of the world an eye can see, is its receptive field; the light that each rod or cone can see, is its receptive field. Receptive fields have been identified for the visual system, auditory system and somatosensory system.
Stimulus
Organisms need information to solve at least three kinds of problems: (a) to maintain an appropriate environment, i.e., homeostasis; (b) to time activities (e.g., seasonal changes in behavior) or synchronize activities with those of conspecifics; and (c) to locate and respond to resources or threats (e.g., by moving towards resources or evading or attacking threats). Organisms also need to transmit information in order to influence another's behavior: to identify themselves, warn conspecifics of danger, coordinate activities, or deceive.
Sensory systems code for four aspects of a stimulus; type (modality), intensity, location, and duration. Arrival time of a sound pulse and phase differences of continuous sound are used for sound localization. Certain receptors are sensitive to certain types of stimuli (for example, different mechanoreceptors respond best to different kinds of touch stimuli, like sharp or blunt objects). Receptors send impulses in certain patterns to send information about the intensity of a stimul
Document 1:::
There are yet unsolved problems in neuroscience, although some of these problems have evidence supporting a hypothesized solution, and the field is rapidly evolving. One major problem is even enumerating what would belong on a list such as this. However, these problems include:
Consciousness
Consciousness:
How can consciousness be defined?
What is the neural basis of subjective experience, cognition, wakefulness, alertness, arousal, and attention?
Quantum mind: Do quantum mechanical phenomena, such as entanglement and superposition, play an important part in the brain's function, and can they explain critical aspects of consciousness?
Is there a "hard problem of consciousness"?
If so, how is it solved?
What, if any, is the function of consciousness?
What is the nature and mechanism behind near-death experiences?
How can death be defined? Can consciousness exist after death?
If consciousness is generated by brain activity, then how do some patients with physically deteriorated brains suddenly gain a brief moment of restored consciousness prior to death, a phenomenon known as terminal lucidity?
Problem of representation: How exactly does the mind function (or how does the brain interpret and represent information about the world)?
Bayesian mind: Does the mind make sense of the world by constantly trying to make predictions according to the rules of Bayesian probability?
Computational theory of mind: Is the mind a symbol manipulation system, operating on a model of computation, similar to a computer?
Connectionism: Can the mind be explained by mathematical models known as artificial neural networks?
Embodied cognition: Is the cognition of an organism affected by the organism's entire body (rather than just simply its brain), including its interactions with the environment?
Extended mind thesis: Does the mind not only exist in the brain, but also functions in the outside world by using physical objects as mental processes? Or just as prosthetic limbs can becom
Document 2:::
A brain–brain interface is a direct communication pathway between the brain of one animal and the brain of another animal.
Brain to brain interfaces have been used to help rats collaborate with each other. When a second rat was unable to choose the correct lever, the first rat noticed (not getting a second reward), and produced a round of task-related neuron firing that made the second rat more likely to choose the correct lever.
In 2013, Rajesh Rao was able to use electrical brain recordings and a form of magnetic stimulation to send a brain signal to Andrea Stocco on the other side of the University of Washington campus. In 2015, researchers linked up multiple brains, of both monkeys and rats, to form an "organic computer".
It is hypothesized that by using brain-to-brain interfaces (BTBIs) a biological computer, or brain-net, could be constructed using animal brains as its computational units. Initial exploratory work demonstrated collaboration between rats in distant cages linked by signals from cortical microelectrode arrays implanted in their brains. The rats were rewarded when actions were performed by the "decoding rat" which conformed to incoming signals and when signals were transmitted by the "encoding rat" which resulted in the desired action. In the initial experiment the rewarded action was pushing a lever in the remote location corresponding to the position of a lever near a lighted LED at the home location. About a month was required for the rats to acclimate themselves to incoming "brainwaves."
Lastly, it is important to stress that the topology of BTBI does not need to be restricted to one encoder and one decoder subjects. Instead, we have already proposed that, in theory, channel accuracy can be increased if instead of a dyad a whole grid of multiple reciprocally interconnected brains are employed. Such a computing structure could define the first example of an organic computer capable of solving heuristic problems that would be deemed non-comp
Document 3:::
Sensory neuroscience is a subfield of neuroscience which explores the anatomy and physiology of neurons that are part of sensory systems such as vision, hearing, and olfaction. Neurons in sensory regions of the brain respond to stimuli by firing one or more nerve impulses (action potentials) following stimulus presentation. How is information about the outside world encoded by the rate, timing, and pattern of action potentials? This so-called neural code is currently poorly understood and sensory neuroscience plays an important role in the attempt to decipher it. Looking at early sensory processing is advantageous since brain regions that are "higher up" (e.g. those involved in memory or emotion) contain neurons which encode more abstract representations. However, the hope is that there are unifying principles which govern how the brain encodes and processes information. Studying sensory systems is an important stepping stone in our understanding of brain function in general.
Typical experiments
A typical experiment in sensory neuroscience involves the presentation of a series of relevant stimuli to an experimental subject while the subject's brain is being monitored. This monitoring can be accomplished by noninvasive means such as functional magnetic resonance imaging (fMRI) or electroencephalography (EEG), or by more invasive means such as electrophysiology, the use of electrodes to record the electrical activity of single neurons or groups of neurons. fMRI measures changes in blood flow that are related to the level of neural activity; it provides relatively low spatial and temporal resolution, but does provide data from the whole brain. In contrast,
electrophysiology provides very high temporal resolution (the shapes of single spikes can be resolved) and data can be obtained from single cells. This is important since computations are performed within the dendrites of individual neurons.
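As a rough, hypothetical illustration of the rate-based part of the neural code discussed above, the Python sketch below bins spike times recorded around repeated stimulus presentations into a peri-stimulus time histogram (PSTH); the data, time window, and bin size are all invented for the example.

```python
import numpy as np

def psth(spike_times, stimulus_onsets, window=(-0.1, 0.4), bin_size=0.01):
    """Peri-stimulus time histogram: mean firing rate (spikes/s) in bins
    aligned to stimulus onset. All times are in seconds."""
    edges = np.arange(window[0], window[1] + bin_size, bin_size)
    counts = np.zeros(len(edges) - 1)
    for onset in stimulus_onsets:
        # Align spike times to this stimulus presentation.
        counts += np.histogram(spike_times - onset, bins=edges)[0]
    # Average over trials and convert counts per bin to spikes per second.
    rate = counts / (len(stimulus_onsets) * bin_size)
    return edges[:-1], rate

# Invented data: 20 trials of a neuron that fires mostly 50-150 ms after onset.
rng = np.random.default_rng(0)
onsets = np.arange(20) * 2.0
spikes = np.sort(np.concatenate(
    [onset + rng.uniform(0.05, 0.15, size=rng.poisson(8)) for onset in onsets]))
bin_starts, rate = psth(spikes, onsets)
print(rate.max())  # peak trial-averaged firing rate, in spikes/s
```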
Single neuron experiments
In most of the central nervous system, neurons communicate ex
Document 4:::
The following outline is provided as an overview of and topical guide to neuroscience:
Neuroscience is the scientific study of the structure and function of the nervous system. It encompasses the branch of biology that deals with the anatomy, biochemistry, molecular biology, and physiology of neurons and neural circuits. It also encompasses cognition and human behavior. Neuroscience includes multiple concepts that each relate to learning abilities and memory functions. Additionally, the brain transmits signals that produce conscious and unconscious behaviors, responses that may be verbal or non-verbal, which allows people to communicate with one another.
Branches of neuroscience
Neurophysiology
Neurophysiology is the study of the function (as opposed to structure) of the nervous system.
Brain mapping
Electrophysiology
Extracellular recording
Intracellular recording
Brain stimulation
Electroencephalography
Intermittent rhythmic delta activity
:Category: Neurophysiology
:Category: Neuroendocrinology
:Neuroendocrinology
Neuroanatomy
Neuroanatomy is the study of the anatomy of nervous tissue and neural structures of the nervous system.
Immunostaining
:Category: Neuroanatomy
Neuropharmacology
Neuropharmacology is the study of how drugs affect cellular function in the nervous system.
Drug
Psychoactive drug
Anaesthetic
Narcotic
Behavioral neuroscience
Behavioral neuroscience, also known as biological psychology, biopsychology, or psychobiology, is the application of the principles of biology to the study of mental processes and behavior in human and non-human animals.
Neuroethology
Developmental neuroscience
Developmental neuroscience aims to describe the cellular basis of brain development and to address the underlying mechanisms. The field draws on both neuroscience and developmental biology to provide insight into the cellular and molecular mechanisms by which complex nervous systems develop.
Aging and memory
Cognitive neuroscience
Cognitive ne
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
For the nervous system to function, neurons must be able to send and receive what?
A. signals
B. pulses
C. information
D. proteins
Answer:
|
|
sciq-7275
|
multiple_choice
|
What is the second stage of cellular respiration?
|
[
"marr cycle",
"krebs cycle",
"duocycle",
"beatnik cycle"
] |
B
|
Relevant Documents:
Document 0:::
Cellular respiration is the process by which biological fuels are oxidized in the presence of an inorganic electron acceptor, such as oxygen, to drive the bulk production of adenosine triphosphate (ATP), which contains energy. Cellular respiration may be described as a set of metabolic reactions and processes that take place in the cells of organisms to convert chemical energy from nutrients into ATP, and then release waste products.
Cellular respiration is a vital process that happens in the cells of living organisms, including humans, plants, and animals. It's how cells produce energy to power all the activities necessary for life.
The reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, producing large amounts of energy (ATP). Respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. The overall reaction occurs in a series of biochemical steps, some of which are redox reactions. Although cellular respiration is technically a combustion reaction, it is an unusual one because of the slow, controlled release of energy from the series of reactions.
Nutrients that are commonly used by animal and plant cells in respiration include sugar, amino acids and fatty acids, and the most common oxidizing agent is molecular oxygen (O2). The chemical energy stored in ATP (the bond of its third phosphate group to the rest of the molecule can be broken allowing more stable products to form, thereby releasing energy for use by the cell) can then be used to drive processes requiring energy, including biosynthesis, locomotion or transportation of molecules across cell membranes.
Aerobic respiration
Aerobic respiration requires oxygen (O2) in order to create ATP. Although carbohydrates, fats and proteins are consumed as reactants, aerobic respiration is the preferred method of pyruvate production in glycolysis, and requires that pyruvate be transported to the mitochondria in order to be fully oxidized by the c
Document 1:::
Cellular waste products are formed as a by-product of cellular respiration, a series of processes and reactions that generate energy for the cell, in the form of ATP. One example of cellular respiration creating cellular waste products are aerobic respiration and anaerobic respiration.
Each pathway generates different waste products.
Aerobic respiration
When in the presence of oxygen, cells use aerobic respiration to obtain energy from glucose molecules.
Simplified Theoretical Reaction: C6H12O6 (aq) + 6O2 (g) → 6CO2 (g) + 6H2O (l) + ~ 30ATP
Cells undergoing aerobic respiration produce 6 molecules of carbon dioxide, 6 molecules of water, and up to 30 molecules of ATP (adenosine triphosphate), which is directly used to produce energy, from each molecule of glucose in the presence of surplus oxygen.
In aerobic respiration, oxygen serves as the recipient of electrons from the electron transport chain. Aerobic respiration is thus very efficient because oxygen is a strong oxidant.
Aerobic respiration proceeds in a series of steps, which also increases efficiency - since glucose is broken down gradually and ATP is produced as needed, less energy is wasted as heat. This strategy results in the waste products H2O and CO2 being formed in different amounts at different phases of respiration. CO2 is formed in Pyruvate decarboxylation, H2O is formed in oxidative phosphorylation, and both are formed in the citric acid cycle.
The simple nature of the final products also indicates the efficiency of this method of respiration. All of the energy stored in the carbon-carbon bonds of glucose is released, leaving CO2 and H2O. Although there is energy stored in the bonds of these molecules, this energy is not easily accessible by the cell. All usable energy is efficiently extracted.
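As a small illustrative calculation (not from the source), the Python sketch below applies the simplified reaction quoted above to estimate how many moles of ATP and CO2 an aerobically respiring cell could obtain from a given mass of glucose; the roughly 30 ATP per glucose figure is the approximate yield stated in the passage.

```python
# Illustrative arithmetic based on the simplified reaction quoted above:
#   C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O + ~30 ATP
GLUCOSE_MOLAR_MASS = 180.16   # g/mol
ATP_PER_GLUCOSE = 30          # approximate aerobic yield, as stated above
CO2_PER_GLUCOSE = 6

def aerobic_yield(glucose_grams: float) -> dict:
    """Return approximate moles of glucose, ATP and CO2 for a glucose mass."""
    moles_glucose = glucose_grams / GLUCOSE_MOLAR_MASS
    return {
        "mol_glucose": round(moles_glucose, 3),
        "mol_ATP": round(moles_glucose * ATP_PER_GLUCOSE, 2),
        "mol_CO2": round(moles_glucose * CO2_PER_GLUCOSE, 2),
    }

print(aerobic_yield(18.0))  # ~0.1 mol glucose -> ~3 mol ATP and ~0.6 mol CO2
```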
Anaerobic respiration
Anaerobic respiration is done by aerobic organisms when there is not sufficient oxygen in a cell to undergo aerobic respiration as well as by cells called anaerobes that
Document 2:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered the exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required the exam score as part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 3:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as reflected in the exam's grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover )
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, )
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson )
See also
Glossary of biology
A.P Bio (TV Show)
Document 4:::
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States.
Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is currently suspended with the message:
"Please check back with us in 2017".
External links
MicrobeLibrary
Microbiology
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the second stage of cellular respiration?
A. marr cycle
B. krebs cycle
C. duocycle
D. beatnik cycle
Answer:
|
|
sciq-7744
|
multiple_choice
|
In which way do vertebrates reproduce?
|
[
"biologically",
"sexually",
"anally",
"asexually"
] |
B
|
Relevant Documents:
Document 0:::
The "Vicar of Bray" hypothesis (or Fisher-Muller Model) attempts to explain why sexual reproduction might have advantages over asexual reproduction. Reproduction is the process by which organisms give rise to offspring. Asexual reproduction involves a single parent and results in offspring that are genetically identical to each other and to the parent.
In contrast to asexual reproduction, sexual reproduction involves two parents. Both the parents produce gametes through meiosis, a special type of cell division that reduces the chromosome number by half. During an early stage of meiosis, before the chromosomes are separated in the two daughter cells, the chromosomes undergo genetic recombination. This allows them to exchange some of their genetic information. Therefore, the gametes from a single organism are all genetically different from each other. The process in which the two gametes from the two parents unite is called fertilization. Half of the genetic information from both parents is combined. This results in offspring that are genetically different from each other and from the parents.
In short, sexual reproduction allows a continuous rearrangement of genes. Therefore, the offspring of a population of sexually reproducing individuals will show a more varied selection of phenotypes. Due to faster attainment of favorable genetic combinations, sexually reproducing populations evolve more rapidly in response to environmental changes. Under the Vicar of Bray hypothesis, sex benefits a population as a whole, but not individuals within it, making it a case of group selection.
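The advantage described above can be illustrated with a toy Monte Carlo simulation (a hedged sketch, not a quantitative population-genetics model): two beneficial mutations must end up in a single individual, and free recombination lets them be combined from different lineages rather than having to arise sequentially in one lineage. All parameter values below are invented.

```python
import random

def generations_to_combine(pop_size=200, mu=0.002, s=0.05, sexual=True, seed=1):
    """Generations until one individual carries both beneficial alleles A and B."""
    rng = random.Random(seed)
    pop = [[False, False] for _ in range(pop_size)]   # [has A, has B]
    for generation in range(1, 100_000):
        # Selection: fitness multiplied by (1 + s) per beneficial allele carried.
        weights = [(1 + s) ** (a + b) for a, b in pop]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        if sexual:
            # Free recombination: locus A from one parent, locus B from another.
            others = rng.choices(pop, weights=weights, k=pop_size)
            offspring = [[p[0], q[1]] for p, q in zip(parents, others)]
        else:
            offspring = [list(p) for p in parents]    # clonal reproduction
        # Mutation: each locus gains the beneficial allele with probability mu.
        for individual in offspring:
            for locus in range(2):
                if rng.random() < mu:
                    individual[locus] = True
        pop = offspring
        if any(a and b for a, b in pop):
            return generation
    return None

print("with recombination:   ", generations_to_combine(sexual=True))
print("without recombination:", generations_to_combine(sexual=False))
```

In repeated runs with different seeds, the recombining population usually combines the two mutations sooner, which is the intuition the hypothesis formalizes.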
Disadvantage of sexual reproduction
Sexual reproduction often takes a lot of effort. Finding a mate can sometimes be an expensive, risky and time consuming process. Courtship, copulation and taking care of the new born offspring may also take up a lot of time and energy. From this point of view, asexual reproduction may seem a lot easier and more efficient. But another important thing to co
Document 1:::
Sexual characteristics are physical traits of an organism (typically of a sexually dimorphic organism) which are indicative of or resultant from biological sexual factors. These include both primary sex characteristics, such as gonads, and secondary sex characteristics.
Humans
In humans, sex organs or primary sexual characteristics, which are those a person is born with, can be distinguished from secondary sex characteristics, which develop later in life, usually during puberty. The development of both is controlled by sex hormones produced by the body after the initial fetal stage where the presence or absence of the Y-chromosome and/or the SRY gene determine development.
Male primary sex characteristics are the penis, the scrotum and the ability to ejaculate when matured. Female primary sex characteristics are the vagina, uterus, fallopian tubes, clitoris, cervix, and the ability to give birth and menstruate when matured.
Hormones that express sexual differentiation in humans include:
estrogens
progesterone
androgens such as testosterone
The following table lists the typical sexual characteristics in humans (even though some of these can also appear in other animals as well):
Other organisms
In invertebrates and plants, hermaphrodites (which have both male and female reproductive organs either at the same time or during their life cycle) are common, and in many cases, the norm.
In other varieties of multicellular life (e.g. the fungi division, Basidiomycota) sexual characteristics can be much more complex, and may involve many more than two sexes. For details on the sexual characteristics of fungi, see: Hypha and Plasmogamy.
Secondary sex characteristics in non-human animals include manes of male lions, long tail feathers of male peafowl, the tusks of male narwhals, enlarged proboscises in male elephant seals and proboscis monkeys, the bright facial and rump coloration of male mandrills, and horns in many goats and antelopes.
See also
Mammalian gesta
Document 2:::
Reproductive biology includes both sexual and asexual reproduction.
Reproductive biology includes a wide number of fields:
Reproductive systems
Endocrinology
Sexual development (Puberty)
Sexual maturity
Reproduction
Fertility
Human reproductive biology
Endocrinology
Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands.
Reproductive systems
Internal and external organs are included in the reproductive system. There are two reproductive systems including the male and female, which contain different organs from one another. These systems work together in order to produce offspring.
Female reproductive system
The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth.
These structures include:
Ovaries
Oviducts
Uterus
Vagina
Mammary Glands
Estrogen is one of the sexual reproductive hormones that aid in the sexual reproductive system of the female.
Male reproductive system
The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia.
Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system. However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract.
Animal Reproductive Biology
Animal reproduction oc
Document 3:::
Vertebrate zoology is the biological discipline that consists of the study of Vertebrate animals, i.e., animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. Many natural history museums have departments named Vertebrate Zoology. In some cases whole museums bear this name, e.g. the Museum of Vertebrate Zoology at the University of California, Berkeley.
Subdivisions
This subdivision of zoology has many further subdivisions, including:
Ichthyology - the study of fishes.
Mammalogy - the study of mammals.
Chiropterology - the study of bats.
Primatology - the study of primates.
Ornithology - the study of birds.
Herpetology - the study of reptiles.
Batrachology - the study of amphibians.
These divisions are sometimes further divided into more specific specialties.
Document 4:::
An equivalence group is a set of unspecified cells that have the same developmental potential or ability to adopt various fates. Our current understanding suggests that equivalence groups are limited to cells of the same ancestry, also known as sibling cells. Often, cells of an equivalence group adopt different fates from one another.
Equivalence groups assume various potential fates in two general, non-mutually exclusive ways. One mechanism, induction, occurs when a signal originating from outside of the equivalence group specifies a subset of the naïve cells. Another mode, known as lateral inhibition, arises when a signal within an equivalence group causes one cell to adopt a dominant fate while others in the group are inhibited from doing so. In many examples of equivalence groups, both induction and lateral inhibition are used to define patterns of distinct cell types.
Cells of an equivalence group that do not receive a signal adopt a default fate. Alternatively, cells that receive a signal take on different fates. At a certain point, the fates of cells within an equivalence group become irreversibly determined, thus they lose their multipotent potential. The following provides examples of equivalence groups studied in nematodes and ascidians.
Vulva Precursor Cell Equivalence Group
Introduction
A classic example of an equivalence group is the vulva precursor cells (VPCs) of nematodes. In Caenorhabditis elegans, self-fertilized eggs exit the body through the vulva. This organ develops from a subset of cells of an equivalence group consisting of six VPCs, P3.p-P8.p, which lie ventrally along the anterior-posterior axis. In this example a single overlying somatic cell, the anchor cell, induces nearby VPCs to take on vulva fates 1° (P6.p) and 2° (P5.p and P7.p). VPCs that are not induced form the 3° lineage (P3.p, P4.p and P8.p), which make epidermal cells that fuse to a large syncytial epidermis.
The six VPCs form an equivalence group beca
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In which way do vertebrates reproduce?
A. biologically
B. sexually
C. anally
D. asexually
Answer:
|
|
sciq-4463
|
multiple_choice
|
Many adolescents experience frequent mood swings. Name one of the causes for this.
|
[
"psychological changes",
"surging hormones",
"growing hormones",
"maturing nervous system"
] |
B
|
Relevant Documents:
Document 0:::
Scientific studies have found that different brain areas show altered activity in humans with major depressive disorder (MDD), and this has encouraged advocates of various theories that seek to identify a biochemical origin of the disease, as opposed to theories that emphasize psychological or situational causes. Factors spanning these causative groups include nutritional deficiencies in magnesium, vitamin D, and tryptophan with situational origin but biological impact. Several theories concerning the biologically based cause of depression have been suggested over the years, including theories revolving around monoamine neurotransmitters, neuroplasticity, neurogenesis, inflammation and the circadian rhythm. Physical illnesses, including hypothyroidism and mitochondrial disease, can also trigger depressive symptoms.
Neural circuits implicated in depression include those involved in the generation and regulation of emotion, as well as in reward. Abnormalities are commonly found in the lateral prefrontal cortex whose putative function is generally considered to involve regulation of emotion. Regions involved in the generation of emotion and reward such as the amygdala, anterior cingulate cortex (ACC), orbitofrontal cortex (OFC), and striatum are frequently implicated as well. These regions are innervated by a monoaminergic nuclei, and tentative evidence suggests a potential role for abnormal monoaminergic activity.
Genetic factors
Difficulty of gene studies
Historically, candidate gene studies have been a major focus of study. However, because the sheer number of genes reduces the likelihood of choosing a correct candidate gene, Type I errors (false positives) are highly likely. Candidate gene studies frequently possess a number of flaws, including frequent genotyping errors and being statistically underpowered. These effects are compounded by the usual assessment of genes without regard for gene-gene interactions. These limitations are reflected in the fact that no candid
Document 1:::
Affect regulation and "affect regulation theory" are important concepts in psychiatry and psychology and are closely related to emotion regulation. However, the latter is a reflection of an individual's mood status rather than their affect. Affect regulation is the actual performance one can demonstrate in a difficult situation regardless of what their mood or emotions are. It is tightly related to the quality of executive and cognitive functions, and that is what distinguishes this concept from emotion regulation. One can have low emotional control but a high level of control over his or her affect, and therefore demonstrate normal interpersonal functioning as a result of intact cognition.
See also
Affect (psychology)
Affect theory
Affective
Affective spectrum
Emotional self-regulation
Document 2:::
Lövheim Cube of Emotion is a theoretical model for the relationship between the monoamine neurotransmitters serotonin, dopamine and noradrenaline and emotions. The model was presented in 2012 by Swedish researcher Hugo Lövheim.
Lövheim classifies emotions according to Silvan Tomkins, and orders the basic emotions in a three-dimensional coordinate system where the level of the monoamine neurotransmitters form orthogonal axes. The model is regarded as a dimensional model of emotion.
The main concepts of the hypothesis are that the monoamine neurotransmitters are orthogonal in essence, and the proposed one-to-one relationship between the monoamine neurotransmitters and emotions.
Document 3:::
Functional Ensemble of Temperament (FET) is a neurochemical model suggesting specific functional roles of main neurotransmitter systems in the regulation of behaviour.
Earlier theories
Medications can adjust the release of brain neurotransmitters in cases of depression, anxiety disorder, schizophrenia and other mental disorders because an imbalance within neurotransmitter systems can emerge as consistent characteristics in behaviour compromising people's lives. All people have a weaker form of such an imbalance in at least one of these neurotransmitter systems, and that is part of what makes each of us distinct from one another. The impact of this weak imbalance in neurochemistry can be seen in the consistent features of behaviour in healthy people (temperament). In this sense temperament (as neurochemically based individual differences) and mental illness represent varying degrees along the same continuum of neurotransmitter imbalance in neurophysiological systems of behavioural regulation.
In fact, multiple temperament traits (such as Impulsivity, sensation seeking, neuroticism, endurance, plasticity, sociability or extraversion) have been linked to brain neurotransmitters and hormone systems.
By the end of the 20th century, it became clear that the human brain operates with more than a dozen neurotransmitters and a large number of neuropeptides and hormones. The relationships between these different chemical systems are complex as some of them suppress and some of them induce each other's release during neuronal exchanges. This complexity of relationships devalues the old approach of assigning "inhibitory vs. excitatory" roles to neurotransmitters: the same neurotransmitters can be either inhibitory or excitatory depending on what system they interact with. It became clear that an impressive diversity of neurotransmitters and their receptors is necessary to meet a wide range of behavioural situations, but the links between temperament traits and specific neurotransmitters are still a
Document 4:::
The Society for Behavioral Neuroendocrinology is an interdisciplinary scientific organization dedicated to the study of hormonal processes and neuroendocrine systems that regulate behavior.
Publications
SBN publishes the scientific journal Hormones and Behavior.
External links
Neuroscience organizations
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Many adolescents experience frequent mood swings. Name one of the causes for this.
A. psychological changes
B. surging hormones
C. growing hormones
D. maturing nervous system
Answer:
|
|
sciq-5137
|
multiple_choice
|
Because sound waves must move through a medium, there are no sound waves in a what?
|
[
"vacuum",
"liquid",
"solid",
"gas"
] |
A
|
Relevant Documents:
Document 0:::
In physics, sound is a vibration that propagates as an acoustic wave through a transmission medium such as a gas, liquid or solid.
In human physiology and psychology, sound is the reception of such waves and their perception by the brain. Only acoustic waves that have frequencies lying between about 20 Hz and 20 kHz, the audio frequency range, elicit an auditory percept in humans. In air at atmospheric pressure, these represent sound waves with wavelengths of about 17 metres (at 20 Hz) to 1.7 centimetres (at 20 kHz). Sound waves above 20 kHz are known as ultrasound and are not audible to humans. Sound waves below 20 Hz are known as infrasound. Different animal species have varying hearing ranges.
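As a quick check of the wavelength figures above (an illustrative sketch; the 343 m/s default assumes air at about 20 °C), wavelength is simply the propagation speed divided by the frequency:

```python
def wavelength(frequency_hz: float, speed_m_s: float = 343.0) -> float:
    """Wavelength = speed / frequency; 343 m/s assumes air at about 20 degrees C."""
    return speed_m_s / frequency_hz

print(wavelength(20))      # ~17 m at the low end of human hearing (20 Hz)
print(wavelength(20_000))  # ~0.017 m, i.e. about 1.7 cm, at 20 kHz
```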
Acoustics
Acoustics is the interdisciplinary science that deals with the study of mechanical waves in gasses, liquids, and solids including vibration, sound, ultrasound, and infrasound. A scientist who works in the field of acoustics is an acoustician, while someone working in the field of acoustical engineering may be called an acoustical engineer. An audio engineer, on the other hand, is concerned with the recording, manipulation, mixing, and reproduction of sound.
Applications of acoustics are found in almost all aspects of modern society, subdisciplines include aeroacoustics, audio signal processing, architectural acoustics, bioacoustics, electro-acoustics, environmental noise, musical acoustics, noise control, psychoacoustics, speech, ultrasound, underwater acoustics, and vibration.
Definition
Sound is defined as "(a) Oscillation in pressure, stress, particle displacement, particle velocity, etc., propagated in a medium with internal forces (e.g., elastic or viscous), or the superposition of such propagated oscillation. (b) Auditory sensation evoked by the oscillation described in (a)." Sound can be viewed as a wave motion in air or other elastic media. In this case, sound is a stimulus. Sound can also be viewed as an excitation of the hearing mechanism that results in the perception of sound. In this case, sound
Document 1:::
In physics, a mechanical wave is a wave that is an oscillation of matter, and therefore transfers energy through a medium. While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial equilibrium position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves. Some of the most common examples of mechanical waves are water waves, sound waves, and seismic waves.
Like all waves, mechanical waves transport energy. This energy propagates in the same direction as the wave. A wave requires an initial energy input; once this initial energy is added, the wave travels through the medium until all its energy is transferred. In contrast, electromagnetic waves require no medium, but can still travel through one.
Transverse wave
A transverse wave is the form of a wave in which particles of medium vibrate about their mean position perpendicular to the direction of the motion of the wave.
To see an example, move one end of a Slinky (whose other end is fixed) left and right, perpendicular to its length, rather than pushing and pulling it to-and-fro along its length. Light also has the properties of a transverse wave, although it is an electromagnetic wave.
Longitudinal wave
Longitudinal waves cause the medium to vibrate parallel to the direction of the wave. A longitudinal wave consists of alternating compressions and rarefactions: in a rarefaction the particles of the medium are spread farthest apart, while in a compression they are pressed closest together. The speed of a longitudinal wave is increased in a medium with a higher index of refraction, due to the closer proximity of the atoms in the medium that is being compressed. Sound is a longitudinal wave.
Surface waves
This type of wave travels along the surface or interface between two media. An example of a surface wave would be waves in a pool, or in an ocean
Document 2:::
Acoustic waves are a type of energy propagation through a medium by means of adiabatic loading and unloading. Important quantities for describing acoustic waves are acoustic pressure, particle velocity, particle displacement and acoustic intensity. Acoustic waves travel with a characteristic acoustic velocity that depends on the medium they're passing through. Some examples of acoustic waves are audible sound from a speaker (waves traveling through air at the speed of sound), seismic waves (ground vibrations traveling through the earth), or ultrasound used for medical imaging (waves traveling through the body).
Wave properties
An acoustic wave is a mechanical wave that transmits energy through the movements of atoms and molecules. Acoustic waves travel through liquids in a longitudinal manner (the movement of particles is parallel to the direction of propagation of the wave), in contrast to electromagnetic waves, which are transverse (the disturbance is at a right angle to the direction of propagation). In solids, however, acoustic waves travel in both longitudinal and transverse manners, owing to the presence of shear moduli in that state of matter.
Acoustic wave equation
The acoustic wave equation describes the propagation of sound waves. The acoustic wave equation for sound pressure in one dimension is given by

\frac{\partial^2 p}{\partial x^2} - \frac{1}{c^2} \frac{\partial^2 p}{\partial t^2} = 0

where
p is sound pressure in Pa
x is position in the direction of propagation of the wave, in m
c is speed of sound in m/s
t is time in s
The wave equation for particle velocity has the same shape and is given by

\frac{\partial^2 u}{\partial x^2} - \frac{1}{c^2} \frac{\partial^2 u}{\partial t^2} = 0

where
u is particle velocity in m/s
For lossy media, more intricate models need to be applied in order to take into account frequency-dependent attenuation and phase speed. Such models include acoustic wave equations that incorporate fractional derivative terms, see also the acoustic attenuation article.
D'Alembert gave the general solution for the lossless wave equation. For sound pressure, a solution would be

p = p_0 \sin(\omega t \mp kx)

where
ω is angu
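As an illustration of the lossless one-dimensional wave equation reconstructed above, the sketch below advances a Gaussian pressure pulse with a simple explicit finite-difference (leapfrog) scheme; the grid, time step, and initial pulse are all invented for the example.

```python
import numpy as np

# Explicit finite-difference sketch of the lossless 1-D wave equation
#   d^2p/dx^2 - (1/c^2) d^2p/dt^2 = 0, with p = 0 held at both ends.
c = 343.0                       # speed of sound in m/s (air, roughly 20 degrees C)
L, nx = 1.0, 201                # domain length (m) and number of grid points
dx = L / (nx - 1)
dt = 0.5 * dx / c               # time step chosen to satisfy the CFL condition
courant2 = (c * dt / dx) ** 2

x = np.linspace(0.0, L, nx)
p_prev = np.exp(-((x - 0.5) / 0.05) ** 2)   # initial Gaussian pressure pulse
p = p_prev.copy()                           # start from rest (zero initial velocity)

for _ in range(400):
    p_next = np.zeros_like(p)
    # Leapfrog update: p^{n+1} = 2 p^n - p^{n-1} + C^2 * (second difference in x)
    p_next[1:-1] = (2 * p[1:-1] - p_prev[1:-1]
                    + courant2 * (p[2:] - 2 * p[1:-1] + p[:-2]))
    p_prev, p = p, p_next

print(f"peak pressure after 400 steps: {p.max():.3f}")
```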
Document 3:::
This is a list of wave topics.
0–9
21 cm line
A
Abbe prism
Absorption spectroscopy
Absorption spectrum
Absorption wavemeter
Acoustic wave
Acoustic wave equation
Acoustics
Acousto-optic effect
Acousto-optic modulator
Acousto-optics
Airy disc
Airy wave theory
Alfvén wave
Alpha waves
Amphidromic point
Amplitude
Amplitude modulation
Animal echolocation
Antarctic Circumpolar Wave
Antiphase
Aquamarine Power
Arrayed waveguide grating
Artificial wave
Atmospheric diffraction
Atmospheric wave
Atmospheric waveguide
Atom laser
Atomic clock
Atomic mirror
Audience wave
Autowave
Averaged Lagrangian
B
Babinet's principle
Backward wave oscillator
Bandwidth-limited pulse
beat
Berry phase
Bessel beam
Beta wave
Black hole
Blazar
Bloch's theorem
Blueshift
Boussinesq approximation (water waves)
Bow wave
Bragg diffraction
Bragg's law
Breaking wave
Bremsstrahlung, Electromagnetic radiation
Brillouin scattering
Bullet bow shockwave
Burgers' equation
Business cycle
C
Capillary wave
Carrier wave
Cherenkov radiation
Chirp
Ernst Chladni
Circular polarization
Clapotis
Closed waveguide
Cnoidal wave
Coherence (physics)
Coherence length
Coherence time
Cold wave
Collimated light
Collimator
Compton effect
Comparison of analog and digital recording
Computation of radiowave attenuation in the atmosphere
Continuous phase modulation
Continuous wave
Convective heat transfer
Coriolis frequency
Coronal mass ejection
Cosmic microwave background radiation
Coulomb wave function
Cutoff frequency
Cutoff wavelength
Cymatics
D
Damped wave
Decollimation
Delta wave
Dielectric waveguide
Diffraction
Direction finding
Dispersion (optics)
Dispersion (water waves)
Dispersion relation
Dominant wavelength
Doppler effect
Doppler radar
Douglas Sea Scale
Draupner wave
Droplet-shaped wave
Duhamel's principle
E
E-skip
Earthquake
Echo (phenomenon)
Echo sounding
Echolocation (animal)
Echolocation (human)
Eddy (fluid dynamics)
Edge wave
Eikonal equation
Ekman layer
Ekman spiral
Ekman transport
El Niño–Southern Oscillation
El
Document 4:::
The speed of sound is the distance travelled per unit of time by a sound wave as it propagates through an elastic medium. At 20 °C (68 °F), the speed of sound in air is about 343 m/s (1,235 km/h; 767 mph), or one kilometre in about 2.9 s or one mile in about 4.7 s. It depends strongly on temperature as well as the medium through which a sound wave is propagating. At 0 °C (32 °F), the speed of sound in air is about 331 m/s (1,192 km/h; 740 mph). More simply, the speed of sound is how fast vibrations travel.
The speed of sound in an ideal gas depends only on its temperature and composition. The speed has a weak dependence on frequency and pressure in ordinary air, deviating slightly from ideal behavior.
In colloquial speech, speed of sound refers to the speed of sound waves in air. However, the speed of sound varies from substance to substance: typically, sound travels most slowly in gases, faster in liquids, and fastest in solids. For example, while sound travels at 343 m/s in air, it travels at about 1,480 m/s in water (almost 4.3 times as fast) and at about 5,120 m/s in iron (almost 15 times as fast). In an exceptionally stiff material such as diamond, sound travels at about 12,000 m/s, about 35 times its speed in air and about the fastest it can travel under normal conditions.
In theory, the speed of sound is actually the speed of vibrations.
Sound waves in solids are composed of compression waves (just as in gases and liquids) and a different type of sound wave called a shear wave, which occurs only in solids. Shear waves in solids usually travel at different speeds than compression waves, as exhibited in seismology. The speed of compression waves in solids is determined by the medium's compressibility, shear modulus, and density. The speed of shear waves is determined only by the solid material's shear modulus and density.
In fluid dynamics, the speed of sound in a fluid medium (gas or liquid) is used as a relative measure for the speed of an object moving through the medium. The ratio of the speed of an object to the speed of sound (in the same medium) is called the object's Mach number. Objects moving at speeds g
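As an illustrative sketch of the temperature dependence and the Mach number mentioned above (using the common ideal-gas approximation for dry air; values are approximate):

```python
import math

def speed_of_sound_air(celsius: float) -> float:
    """Approximate speed of sound in dry air using the ideal-gas relation."""
    return 331.3 * math.sqrt(1.0 + celsius / 273.15)

def mach_number(object_speed_m_s: float, celsius: float = 20.0) -> float:
    """Ratio of an object's speed to the local speed of sound."""
    return object_speed_m_s / speed_of_sound_air(celsius)

print(round(speed_of_sound_air(0.0), 1))    # ~331.3 m/s at 0 degrees C
print(round(speed_of_sound_air(20.0), 1))   # ~343 m/s at 20 degrees C
print(round(mach_number(680.0, 20.0), 2))   # ~Mach 2 for 680 m/s at 20 degrees C
```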
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Because sound waves must move through a medium, there are no sound waves in a what?
A. vacuum
B. liquid
C. solid
D. gas
Answer:
|
|
sciq-9663
|
multiple_choice
|
What is the science of classifying the many organisms on earth called?
|
[
"methodology",
"general classification",
"terminology",
"taxonomy"
] |
D
|
Relevant Documents:
Document 0:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad
Document 1:::
The Mathematics Subject Classification (MSC) is an alphanumerical classification scheme that has collaboratively been produced by staff of, and based on the coverage of, the two major mathematical reviewing databases, Mathematical Reviews and Zentralblatt MATH. The MSC is used by many mathematics journals, which ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The current version is MSC2020.
Structure
The MSC is a hierarchical scheme, with three levels of structure. A classification can be two, three or five digits long, depending on how many levels of the classification scheme are used.
The first level is represented by a two-digit number, the second by a letter, and the third by another two-digit number. For example:
53 is the classification for differential geometry
53A is the classification for classical differential geometry
53A45 is the classification for vector and tensor analysis
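As a small, hypothetical helper (not part of the MSC itself), the sketch below splits a classification code into the three levels described above; the pattern is simplified and only covers the two-digit, letter, two-digit form plus the hyphen and "xx" placeholders.

```python
import re

# Simplified pattern: two digits, an optional letter (or '-'), then an
# optional pair of digits or the placeholder 'xx'.
MSC_PATTERN = re.compile(r"^(\d{2})([A-Z-]?)(\d{2}|xx)?$", re.IGNORECASE)

def parse_msc(code: str) -> dict:
    match = MSC_PATTERN.match(code.strip())
    if not match:
        raise ValueError(f"not a well-formed MSC code: {code!r}")
    top, second, third = match.groups()
    return {"top": top, "second": second or None, "third": third or None}

print(parse_msc("53"))     # {'top': '53', 'second': None, 'third': None}
print(parse_msc("53A"))    # {'top': '53', 'second': 'A', 'third': None}
print(parse_msc("53A45"))  # {'top': '53', 'second': 'A', 'third': '45'}
```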
First level
At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. In addition to the typical areas of mathematical research, there are top-level categories for "History and Biography", "Mathematics Education", and for the overlap with different sciences. Physics (i.e. mathematical physics) is particularly well represented in the classification scheme with a number of different categories including:
Fluid mechanics
Quantum mechanics
Geophysics
Optics and electromagnetic theory
All valid MSC classification codes must have at least the first-level identifier.
Second level
The second-level codes are a single letter from the Latin alphabet. These represent specific areas covered by the first-level discipline. The second-level codes vary from discipline to discipline.
For example, for differential geometry, the top-level code is 53, and the second-level codes are:
A for classical differential geometry
B for local differential geometry
C for glo
Document 2:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
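A minimal sketch of the union-closure idea (a hypothetical example; in knowledge space theory the family of feasible states is usually required to contain the empty set and the whole domain and to be closed under union):

```python
from itertools import combinations

def is_knowledge_space(domain, states):
    """Check the usual axioms: the empty set and the full domain are states,
    and the family of states is closed under union."""
    if frozenset() not in states or frozenset(domain) not in states:
        return False
    return all((s | t) in states for s, t in combinations(states, 2))

# Toy domain of three skills where skill 'b' has 'a' as a prerequisite,
# so any state containing 'b' must also contain 'a'.
domain = frozenset("abc")
states = {frozenset(), frozenset("a"), frozenset("c"), frozenset("ac"),
          frozenset("ab"), frozenset("abc")}
print(is_knowledge_space(domain, states))  # True
```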
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the science of classifying the many organisms on earth called?
A. methodology
B. general classification
C. terminology
D. taxonomy
Answer:
|
|
sciq-957
|
multiple_choice
|
Due to the difference in the distribution of charge, water is what type of molecule?
|
[
"uneven",
"crooked",
"polar",
"ionic"
] |
C
|
Relevant Documents:
Document 0:::
Water () is a polar inorganic compound that is at room temperature a tasteless and odorless liquid, which is nearly colorless apart from an inherent hint of blue. It is by far the most studied chemical compound and is described as the "universal solvent" and the "solvent of life". It is the most abundant substance on the surface of Earth and the only common substance to exist as a solid, liquid, and gas on Earth's surface. It is also the third most abundant molecule in the universe (behind molecular hydrogen and carbon monoxide).
Water molecules form hydrogen bonds with each other and are strongly polar. This polarity allows it to dissociate ions in salts and bond to other polar substances such as alcohols and acids, thus dissolving them. Its hydrogen bonding causes its many unique properties, such as having a solid form less dense than its liquid form, a relatively high boiling point of 100 °C for its molar mass, and a high heat capacity.
Water is amphoteric, meaning that it can exhibit properties of an acid or a base, depending on the pH of the solution that it is in; it readily produces both H+ and OH− ions. Related to its amphoteric character, it undergoes self-ionization. The product of the activities, or approximately, the concentrations of H+ and OH− is a constant, so their respective concentrations are inversely proportional to each other.
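A small illustrative calculation (standard relations at 25 °C, not taken from the excerpt above) showing why the H+ and OH− concentrations are inversely proportional:

```python
import math

KW_25C = 1.0e-14   # ion product of water at 25 degrees C, (mol/L)^2

def hydroxide_from_hydronium(h_plus_mol_per_l: float) -> float:
    """[OH-] = Kw / [H+]: the two concentrations are inversely proportional."""
    return KW_25C / h_plus_mol_per_l

def ph(h_plus_mol_per_l: float) -> float:
    return -math.log10(h_plus_mol_per_l)

h = 1.0e-7                           # neutral water at 25 degrees C
print(hydroxide_from_hydronium(h))   # 1e-07 mol/L
print(ph(h))                         # 7.0
```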
Physical properties
Water is the chemical substance with chemical formula ; one molecule of water has two hydrogen atoms covalently bonded to a single oxygen atom. Water is a tasteless, odorless liquid at ambient temperature and pressure. Liquid water has weak absorption bands at wavelengths of around 750 nm which cause it to appear to have a blue color. This can easily be observed in a water-filled bath or wash-basin whose lining is white. Large ice crystals, as in glaciers, also appear blue.
Under standard conditions, water is primarily a liquid, unlike other analogous hydrides of the oxygen family, which are generally gaseou
Document 1:::
A Piper diagram is a graphic procedure proposed by Arthur M. Piper in 1944 for presenting water chemistry data to help in understanding the sources of the dissolved constituent salts in water. This procedure is based on the premise that cations and anions in water are present in such amounts as to ensure the electroneutrality of the dissolved salts; in other words, the algebraic sum of the electric charges of cations and anions is zero.
A Piper diagram is a graphical representation of the chemistry of a water sample or samples.
The cations and anions are shown by separate ternary plots. The apexes of the cation plot are calcium, magnesium and sodium plus potassium cations. The apexes of the anion plot are sulfate, chloride and carbonate plus hydrogen carbonate anions. The two ternary plots are then projected onto a diamond. The diamond is a matrix transformation of a graph of the anions (sulfate + chloride/ total anions) and cations (sodium + potassium/total cations).
The required matrix transformation of the anion/cation graph is:
The Piper diagram is suitable for comparing the ionic composition of a set of water samples, but does not lend itself to spatial comparisons. For geographical applications, the Stiff diagram and Maucha diagram are more applicable, because they can be used as markers on a map. Colour coding of the background of the Piper diagram allows linking Piper Diagrams and maps
Water samples shown on the Piper diagram can be grouped in hydrochemical facies. The cation and anion triangles can be separated in regions based on the dominant cation(s) or anion(s) and their combination creates regions in the diamond shaped part of the diagram.
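A hedged sketch of the arithmetic that feeds the cation triangle (the helper name and sample values are hypothetical; the equivalent weights are the standard atomic weight divided by ionic charge): concentrations in mg/L are converted to milliequivalents per litre and then to the percentages plotted at the Ca, Mg and Na+K apexes.

```python
# Equivalent weight = atomic weight / ionic charge, in g per equivalent.
EQUIV_WEIGHT = {"Ca": 40.08 / 2, "Mg": 24.31 / 2, "Na": 22.99, "K": 39.10}

def cation_percentages(mg_per_litre: dict) -> dict:
    """Per cent of total cation milliequivalents for the Piper cation triangle."""
    meq = {ion: mg_per_litre[ion] / EQUIV_WEIGHT[ion] for ion in EQUIV_WEIGHT}
    total = sum(meq.values())
    return {
        "Ca": round(100 * meq["Ca"] / total, 1),
        "Mg": round(100 * meq["Mg"] / total, 1),
        "Na+K": round(100 * (meq["Na"] + meq["K"]) / total, 1),
    }

sample = {"Ca": 60.0, "Mg": 20.0, "Na": 30.0, "K": 5.0}   # mg/L, illustrative
print(cation_percentages(sample))   # roughly {'Ca': 49.3, 'Mg': 27.1, 'Na+K': 23.6}
```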
See also
Ternary diagram, just one triangle
QAPF diagram, a common application
Document 2:::
The oxhydroelectric effect consists in the generation of voltage and electric current in pure liquid water, without any electrolyte, upon exposure to electromagnetic radiation in the infrared range, after creating a physical (not chemical) asymmetry in liquid water e.g. thanks to a strongly hydrophile polymer, such as Nafion.
Since the publication of the first seminal research, other independent studies referring to this effect have been published in peer-reviewed, reputable scientific journals (with impact factors higher than the median in their respective fields).
The system can be described as a photovoltaic cell operating in the infrared electromagnetic range, based on liquid water instead of a semiconductor.
Theoretical model
The model proposed by Roberto Germano and his collaborators, who have first observed the effect is based on the known concept of the exclusion zone.
The first observations of a different behaviour of water molecules close to the walls of its container date back to late ‘60s and early ‘70s, when Drost-Hansen, upon reviewing many experimental articles, came to the conclusion that interfacial water shows structural difference with respect to the bulk liquid water.
In 2006 Gerald Pollack published a seminal work on the exclusion zone and those observations were subsequently reported by several other groups, in which a hydrophilic material creates a coherent water region at the boundary between its surface and the water.
Further elaborating on the work of Pollack, the model describes liquid water as a system made of two phases: a matrix of non-coherent water molecules hosting many “Coherence Domains” (CDs), about 0.1 um in size, found in the exclusion zone, but also in the bulk volume.
In this model the behaviour of the coherence domains is also considered as the cause for the formation of xerosydryle.
The two phases are characterized by different thermodynamic parameters, and are in a stable non-equilibrium state.
The coherent
Document 3:::
In chemistry, the Bates–Guggenheim Convention refers to a conventional method based on the Debye–Hückel theory to determine pH standard values.
Document 4:::
Ethanol precipitation is a method used to purify and/or concentrate RNA, DNA, and polysaccharides such as pectin and xyloglucan from aqueous solutions by adding ethanol as an antisolvent.
DNA precipitation
Theory
DNA is polar due to its highly charged phosphate backbone. Its polarity makes it water-soluble (water is polar) according to the principle "like dissolves like".
Because of the high polarity of water, illustrated by its high dielectric constant of 80.1 (at 20 °C), electrostatic forces between charged particles are considerably lower in aqueous solution than they are in a vacuum or in air.
This relation is reflected in Coulomb's law, which can be used to calculate the force F acting on two charges q1 and q2 separated by a distance r by using the dielectric constant εr (also called relative static permittivity) of the medium in the denominator of the equation (ε0 is an electric constant):

F = \frac{q_1 q_2}{4 \pi \varepsilon_0 \varepsilon_r r^2}
At an atomic level, the reduction in the force acting on a charge results from water molecules forming a hydration shell around it. This fact makes water a very good solvent for charged compounds like salts. Electric force which normally holds salt crystals together by way of ionic bonds is weakened in the presence of water allowing ions to separate from the crystal and spread through solution.
The same mechanism operates in the case of negatively charged phosphate groups on a DNA backbone: even though positive ions are present in solution, the relatively weak net electrostatic force prevents them from forming stable ionic bonds with phosphates and precipitating out of solution.
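As a rough numerical illustration of this screening (a sketch, not part of the original text; the charges and separation are arbitrary example values), the Coulomb force between two elementary charges 1 nm apart is about 80 times weaker in bulk water than in vacuum:

# Sketch: Coulomb force between two charges in vacuum (eps_r = 1) versus water (eps_r ~ 80.1).
import math

EPS0 = 8.854e-12   # electric constant, F/m
E = 1.602e-19      # elementary charge, C

def coulomb_force(q1, q2, d, eps_r=1.0):
    """Magnitude of the Coulomb force between charges q1 and q2 separated by d in a medium."""
    return q1 * q2 / (4 * math.pi * EPS0 * eps_r * d ** 2)

d = 1e-9  # 1 nm separation (illustrative)
print(coulomb_force(E, E, d))              # ~2.3e-10 N in vacuum
print(coulomb_force(E, E, d, eps_r=80.1))  # ~2.9e-12 N in water, i.e. ~80x weaker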
Ethanol is much less polar than water, with a dielectric constant of 24.3 (at 25 °C). This means that adding ethanol to the solution disrupts the screening of charges by water. If enough ethanol is added, the electrical attraction between phosphate groups and any positive ions present in solution becomes strong enough to form stable ionic bonds and cause DNA precipitation. This usually happens when ethanol compo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Due to the difference in the distribution of charge, water is what type of molecule?
A. uneven
B. crooked
C. polar
D. ionic
Answer:
|
|
sciq-5976
|
multiple_choice
|
Which proteins recognize and combine with harmful materials, including both toxic chemicals and invasive microorganisms?
|
[
"antibodies",
"collagen",
"enzymes",
"essential amino acids"
] |
A
|
Relavent Documents:
Document 0:::
Contamination is the presence of a constituent, impurity, or some other undesirable element that spoils, corrupts, infects, makes unfit, or makes inferior a material, physical body, natural environment, workplace, etc.
Types of contamination
Within the sciences, the word "contamination" can take on a variety of subtle differences in meaning, whether the contaminant is a solid or a liquid, as well as the variance of environment the contaminant is found to be in. A contaminant may even be more abstract, as in the case of an unwanted energy source that may interfere with a process. The following represent examples of different types of contamination based on these and other variances.
Chemical contamination
In chemistry, the term "contamination" usually describes a single constituent, but in specialized fields the term can also mean chemical mixtures, even up to the level of cellular materials. All chemicals contain some level of impurity. Contamination may be recognized or not and may become an issue if the impure chemical causes additional chemical reactions when mixed with other chemicals or mixtures. Chemical reactions resulting from the presence of an impurity may at times be beneficial, in which case the label "contaminant" may be replaced with "reactant" or "catalyst." (This may be true even in physical chemistry, where, for example, the introduction of an impurity in an intrinsic semiconductor positively increases conductivity.) If the additional reactions are detrimental, other terms are often applied, such as "toxin", "poison", or "pollutant", depending on the type of molecule involved. Chemical decontamination of a substance can be achieved through decomposition, neutralization, and physical processes, though a clear understanding of the underlying chemistry is required. Contamination of pharmaceutics and therapeutics is notoriously dangerous and creates both perceptual and technical challenges.
Environmental contamination
In environmental chemistry, the term
Document 1:::
Proteins are a class of biomolecules composed of amino acid chains.
Biochemistry
Antifreeze protein, class of polypeptides produced by certain fish, vertebrates, plants, fungi and bacteria
Conjugated protein, protein that functions in interaction with other chemical groups attached by covalent bonds
Denatured protein, protein which has lost its functional conformation
Matrix protein, structural protein linking the viral envelope with the virus core
Protein A, bacterial surface protein that binds antibodies
Protein A/G, recombinant protein that binds antibodies
Protein C, anticoagulant
Protein G, bacterial surface protein that binds antibodies
Protein L, bacterial surface protein that binds antibodies
Protein S, plasma glycoprotein
Protein Z, glycoprotein
Protein catabolism, the breakdown of proteins into amino acids and simple derivative compounds
Protein complex, group of two or more associated proteins
Protein electrophoresis, method of analysing a mixture of proteins by means of gel electrophoresis
Protein folding, process by which a protein assumes its characteristic functional shape or tertiary structure
Protein isoform, version of a protein with some small differences
Protein kinase, enzyme that modifies other proteins by chemically adding phosphate groups to them
Protein ligands, atoms, molecules, and ions which can bind to specific sites on proteins
Protein microarray, piece of glass on which different molecules of protein have been affixed at separate locations in an ordered manner
Protein phosphatase, enzyme that removes phosphate groups that have been attached to amino acid residues of proteins
Protein purification, series of processes intended to isolate a single type of protein from a complex mixture
Protein sequencing, protein method
Protein splicing, intramolecular reaction of a particular protein in which an internal protein segment is removed from a precursor protein
Protein structure, unique three-dimensional shape of amino
Document 2:::
The following is a partial list of the "D" codes for Medical Subject Headings (MeSH), as defined by the United States National Library of Medicine (NLM).
This list continues the information at List of MeSH codes (D12.644). Codes following these are found at List of MeSH codes (D13). For other MeSH codes, see List of MeSH codes.
The source for this content is the set of 2006 MeSH Trees from the NLM.
– proteins
– albumins
– c-reactive protein
– conalbumin
– lactalbumin
– ovalbumin
– avidin
– parvalbumins
– ricin
– serum albumin
– methemalbumin
– prealbumin
– serum albumin, bovine
– serum albumin, radio-iodinated
– technetium tc 99m aggregated albumin
– algal proteins
– amphibian proteins
– xenopus proteins
– amyloid
– amyloid beta-protein
– amyloid beta-protein precursor
– serum amyloid a protein
– serum amyloid p-component
– antifreeze proteins
– antifreeze proteins, type i
– antifreeze proteins, type ii
– antifreeze proteins, type iii
– antifreeze proteins, type iv
– apoproteins
– apoenzymes
– apolipoproteins
– apolipoprotein A
– apolipoprotein A1
– apolipoprotein A2
– apolipoprotein B
– apolipoprotein C
– apolipoprotein E
– aprotinin
– archaeal proteins
– bacteriorhodopsins
– dna topoisomerases, type i, archaeal
– halorhodopsins
– periplasmic proteins
– armadillo domain proteins
– beta-catenin
– gamma catenin
– plakophilins
– avian proteins
– bacterial proteins
See List of MeSH codes (D12.776.097).
– blood proteins
See List of MeSH codes (D12.776.124).
– carrier proteins
See List of MeSH codes (D12.776.157).
– cell cycle proteins
– cdc25 phosphatase
– cellular apoptosis susceptibility protein
– cullin proteins
– cyclin-dependent kinase inhibitor proteins
– cyclin-dependent kinase inhibitor p15
– cyclin-dependent kinase inhibitor p16
– cyclin-dependent kinase inhibitor p18
Document 3:::
Major urinary proteins (Mups), also known as α2u-globulins, are a subfamily of proteins found in abundance in the urine and other secretions of many animals. Mups provide a small range of identifying information about the donor animal, when detected by the vomeronasal organ of the receiving animal. They belong to a larger family of proteins known as lipocalins. Mups are encoded by a cluster of genes, located adjacent to each other on a single stretch of DNA, that varies greatly in number between species: from at least 21 functional genes in mice to none in humans. Mup proteins form a characteristic glove shape, encompassing a ligand-binding pocket that accommodates specific small organic chemicals.
Urinary proteins were first reported in rodents in 1932, during studies by Thomas Addis into the cause of proteinuria. They are potent human allergens and are largely responsible for a number of animal allergies, including to cats, horses and rodents. Their endogenous function within an animal is unknown but may involve regulating energy expenditure. However, as secreted proteins they play multiple roles in chemical communication between animals, functioning as pheromone transporters and stabilizers in rodents and pigs. Mups can also act as protein pheromones themselves. They have been demonstrated to promote aggression in male mice, and one specific Mup protein found in male mouse urine is sexually attractive to female mice. Mups can also function as signals between different species: mice display an instinctive fear response on the detection of Mups derived from predators such as cats and rats.
Discovery
Humans in good health excrete urine that is largely free of protein. Therefore, since 1827 physicians and scientists have been interested in proteinuria, the excess of protein in human urine, as an indicator of kidney disease. To better understand the etiology of proteinuria, some scientists attempted to study the phenomenon in laboratory animals. Between 1932 and 19
Document 4:::
A biomolecule or biological molecule is a loosely used term for molecules present in organisms that are essential to one or more typically biological processes, such as cell division, morphogenesis, or development. Biomolecules include the primary metabolites which are large macromolecules (or polyelectrolytes) such as proteins, carbohydrates, lipids, and nucleic acids, as well as small molecules such as vitamins and hormones. A more general name for this class of material is biological materials. Biomolecules are an important element of living organisms, those biomolecules are often endogenous, produced within the organism but organisms usually need exogenous biomolecules, for example certain nutrients, to survive.
Biology and its subfields of biochemistry and molecular biology study biomolecules and their reactions. Most biomolecules are organic compounds, and just four elements—oxygen, carbon, hydrogen, and nitrogen—make up 96% of the human body's mass. But many other elements, such as the various biometals, are also present in small amounts.
The uniformity of both specific types of molecules (the biomolecules) and of certain metabolic pathways is an invariant feature among the wide diversity of life forms; thus these biomolecules and metabolic pathways are referred to as "biochemical universals" or the "theory of material unity of the living beings", a unifying concept in biology, along with cell theory and evolution theory.
Types of biomolecules
A diverse range of biomolecules exist, including:
Small molecules:
Lipids, fatty acids, glycolipids, sterols, monosaccharides
Vitamins
Hormones, neurotransmitters
Metabolites
Monomers, oligomers and polymers:
Nucleosides and nucleotides
Nucleosides are molecules formed by attaching a nucleobase to a ribose or deoxyribose ring. Examples of these include cytidine (C), uridine (U), adenosine (A), guanosine (G), and thymidine (T).
Nucleosides can be phosphorylated by specific kinases in the cell, producing nucl
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which proteins recognize and combine with harmful materials, including both toxic chemicals and invasive microorganisms?
A. antibodies
B. collagen
C. enzymes
D. essential amino acids
Answer:
|
|
sciq-3461
|
multiple_choice
|
What occurs when substances move from areas of lower to higher concentration or when very large molecules are transported?
|
[
"migration",
"diffusion",
"passive transport",
"active transport"
] |
D
|
Relavent Documents:
Document 0:::
Transcellular transport involves the transportation of solutes by a cell through a cell. Transcellular transport can occur in three different ways active transport, passive transport, and transcytosis.
Active Transport
Active transport is the process of moving molecules from an area of low concentration to an area of high concentration. There are two types of active transport: primary active transport and secondary active transport. Primary active transport uses adenosine triphosphate (ATP) to move specific molecules and solutes against their concentration gradients. Examples of molecules that follow this process are potassium K+, sodium Na+, and calcium Ca2+. A place in the human body where this occurs is in the intestines with the uptake of glucose. Secondary active transport is when one solute moves down its electrochemical gradient to produce enough energy to force the transport of another solute from low concentration to high concentration. An example of where this occurs is in the movement of glucose within the proximal convoluted tubule (PCT).
Passive Transport
Passive transport is the process of moving molecules from an area of high concentration to an area of low concentration without expelling any energy. There are two types of passive transport, passive diffusion and facilitated diffusion. Passive diffusion is the unassisted movement of molecules from high concentration to low concentration across a permeable membrane. One example of passive diffusion is the gas exchange that occurs between the oxygen in the blood and the carbon dioxide present in the lungs. Facilitated diffusion is the movement of polar molecules down the concentration gradient with the assistance of membrane proteins. Since the molecules associated with facilitated diffusion are polar, they are repelled by the hydrophobic sections of permeable membrane, therefore they need to be assisted by the membrane proteins. Both t
Document 1:::
Passive transport is a type of membrane transport that does not require energy to move substances across cell membranes. Instead of using cellular energy, like active transport, passive transport relies on the second law of thermodynamics to drive the movement of substances across cell membranes. Fundamentally, substances follow Fick's first law, and move from an area of high concentration to an area of low concentration because this movement increases the entropy of the overall system. The rate of passive transport depends on the permeability of the cell membrane, which, in turn, depends on the organization and characteristics of the membrane lipids and proteins. The four main kinds of passive transport are simple diffusion, facilitated diffusion, filtration, and osmosis.
Passive transport follows Fick's first law.
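A minimal numerical sketch of Fick's first law, J = −D·dC/dx (the diffusion coefficient, concentrations, and layer thickness below are illustrative values, not taken from the text):

# Sketch: diffusive flux down a concentration gradient, J = -D * dC/dx.
def fick_flux(diff_coeff, c_high, c_low, thickness):
    """Flux in mol m^-2 s^-1 across a layer of the given thickness (m)."""
    gradient = (c_low - c_high) / thickness   # dC/dx, negative when C falls along x
    return -diff_coeff * gradient

# e.g. a small solute in water, D ~ 2e-9 m^2/s, a 1 mol/m^3 drop across a 10 um layer
print(fick_flux(2e-9, c_high=1.0, c_low=0.0, thickness=10e-6))  # ~2e-4 mol m^-2 s^-1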
Diffusion
Diffusion is the net movement of material from an area of high concentration to an area with lower concentration. The difference of concentration between the two areas is often termed the concentration gradient, and diffusion will continue until this gradient has been eliminated. Since diffusion moves materials from an area of higher concentration to an area of lower concentration, it is described as moving solutes "down the concentration gradient" (compared with active transport, which often moves material from an area of low concentration to an area of higher concentration, and is therefore referred to as moving the material "against the concentration gradient").
However, in many cases (e.g. passive drug transport) the driving force of passive transport cannot be simplified to the concentration gradient. If there are different solutions at the two sides of the membrane with different equilibrium solubility of the drug, the difference in the degree of saturation is the driving force of passive membrane transport. This is also true for supersaturated solutions, which are more and more important owing to the spreading of the application of amorph
Document 2:::
Sorption is a physical and chemical process by which one substance becomes attached to another. Specific cases of sorption are treated in the following articles:
Absorption "the incorporation of a substance in one state into another of a different state" (e.g., liquids being absorbed by a solid or gases being absorbed by a liquid);
Adsorption The physical adherence or bonding of ions and molecules onto the surface of another phase (e.g., reagents adsorbed to a solid catalyst surface);
Ion exchange An exchange of ions between two electrolytes or between an electrolyte solution and a complex.
The reverse of sorption is desorption.
Sorption rate
The adsorption and absorption rate of a diluted solute in gas or liquid solution to a surface or interface can be calculated using Fick's laws of diffusion.
See also
Sorption isotherm
Document 3:::
A breakthrough curve in adsorption is the course of the effluent adsorptive concentration at the outlet of a fixed bed adsorber. Breakthrough curves are important for adsorptive separation technologies and for the characterization of porous materials.
Importance
Since almost all adsorptive separation processes are dynamic, meaning that they run under flow, porous materials intended for such applications must also have their separation performance tested under flow. Since separation processes run with mixtures of different components, measuring several breakthrough curves yields thermodynamic mixture equilibria (mixture sorption isotherms) that are hardly accessible with static manometric sorption characterization. This enables the determination of sorption selectivities in the gaseous and liquid phase.
The determination of breakthrough curves is the foundation of many other processes, such as pressure swing adsorption. Within this process, the loading of one adsorber is equivalent to a breakthrough experiment.
Measurement
A fixed bed of porous materials (e.g. activated carbons and zeolites) is pressurized and purged with a carrier gas. After the flow has become stationary, one or more adsorptives are added to the carrier gas, resulting in a step-wise change of the inlet concentration. This is in contrast to chromatographic separation processes, where pulse-wise changes of the inlet concentrations are used. The course of the adsorptive concentrations at the outlet of the fixed bed is monitored.
Results
Integration of the area above the entire breakthrough curve gives the maximum loading of the adsorbent material. Additionally, the duration of the breakthrough experiment until the adsorptive concentration at the outlet reaches a certain threshold can be measured, which enables the calculation of a technically usable sorption capacity. Up to this time, the quality of the product stream can be maintained. The shape of the breakthrough curves contains informat
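A minimal sketch of this calculation (hypothetical data, not from the text), integrating the area above a normalized breakthrough curve with the trapezoidal rule:

# Sketch: adsorbent loading from a breakthrough curve, obtained by integrating the
# area above the normalized outlet concentration c_out/c_in (all values hypothetical).
t = [0, 5, 10, 15, 20, 25, 30]                      # time, min
c_ratio = [0.0, 0.0, 0.05, 0.3, 0.7, 0.95, 1.0]     # outlet/inlet concentration

flow = 0.5    # volumetric flow, L/min (hypothetical)
c_in = 2.0    # inlet concentration, mmol/L (hypothetical)
mass = 10.0   # adsorbent mass in the bed, g (hypothetical)

# area above the curve = integral of (1 - c_out/c_in) dt, via the trapezoidal rule
area = sum((t[i + 1] - t[i]) * ((1 - c_ratio[i]) + (1 - c_ratio[i + 1])) / 2
           for i in range(len(t) - 1))              # minutes

loading = flow * c_in * area / mass                 # mmol adsorbed per gram of adsorbent
print(loading)                                      # ~1.75 mmol/g for these numbers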
Document 4:::
The Stefan flow, occasionally called Stefan's flow, is a transport phenomenon concerning the movement of a chemical species by a flowing fluid (typically in the gas phase) that is induced to flow by the production or removal of the species at an interface. Any process that adds the species of interest to or removes it from the flowing fluid may cause the Stefan flow, but the most common processes include evaporation, condensation, chemical reaction, sublimation, ablation, adsorption, absorption, and desorption. It was named after the Slovenian physicist, mathematician, and poet Josef Stefan for his early work on calculating evaporation rates.
The Stefan flow is distinct from diffusion as described by Fick's law, but diffusion almost always also occurs in multi-species systems that are experiencing the Stefan flow. In systems undergoing one of the species addition or removal processes mentioned previously, the addition or removal generates a mean flow in the flowing fluid as the fluid next to the interface is displaced by the production or removal of additional fluid by the processes occurring at the interface. The transport of the species by this mean flow is the Stefan flow. When concentration gradients of the species are also present, diffusion transports the species relative to the mean flow. The total transport rate of the species is then given by a summation of the Stefan flow and diffusive contributions.
An example of the Stefan flow occurs when a droplet of liquid evaporates in air. In this case, the vapor/air mixture surrounding the droplet is the flowing fluid, and liquid/vapor boundary of the droplet is the interface. As heat is absorbed by the droplet from the environment, some of the liquid evaporates into vapor at the surface of the droplet, and flows away from the droplet as it is displaced by additional vapor evaporating from the droplet. This process causes the flowing medium to move away from the droplet at some mean speed that is dependent on
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What occurs when substances move from areas of lower to higher concentration or when very large molecules are transported?
A. migration
B. diffusion
C. passive transport
D. active transport
Answer:
|
|
sciq-5435
|
multiple_choice
|
What is the second line of defense?
|
[
"fight or flight",
"inflammatory response",
"immune response",
"rejection of foreign bodies"
] |
B
|
Relavent Documents:
Document 0:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
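A quick numerical check of the example above (a sketch with illustrative values; it assumes a reversible adiabatic expansion of an ideal diatomic gas, γ = 1.4), confirming that the temperature decreases:

# Sketch: a reversible adiabatic expansion of an ideal gas obeys T * V^(gamma - 1) = const,
# so increasing the volume lowers the temperature.
gamma = 1.4            # diatomic ideal gas (illustrative assumption)
T1, V1 = 300.0, 1.0    # initial temperature (K) and volume (arbitrary units)
V2 = 2.0 * V1          # the gas expands to twice its volume

T2 = T1 * (V1 / V2) ** (gamma - 1)
print(T2)              # ~227 K, lower than the initial 300 K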
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education.
Structure
A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior.
Prior to the widespread introduction of SBAs into medical education, the typical form of examination was true-false multiple choice questions. But during the 2000s, educators found that SBAs would be superior.
Document 3:::
An immune response is a physiological reaction which occurs within an organism in the context of inflammation for the purpose of defending against exogenous factors. These include a wide variety of different toxins, viruses, intra- and extracellular bacteria, protozoa, helminths, and fungi which could cause serious problems to the health of the host organism if not cleared from the body.
In addition, there are other forms of immune response. For example, harmless exogenous factors (such as pollen and food components) can trigger allergy; latex and metals are also known allergens.
A transplanted tissue (for example, blood) or organ can cause graft-versus-host disease. A type of immune reactivity known as Rh disease can be observed in pregnant women. These special forms of immune response are classified as hypersensitivity. Another special form of immune response is antitumor immunity.
In general, there are two branches of the immune response, the innate and the adaptive, which work together to protect against pathogens. Both branches engage humoral and cellular components.
The innate branch—the body's first reaction to an invader—is known to be a non-specific and quick response to any sort of pathogen. Components of the innate immune response include physical barriers like the skin and mucous membranes, immune cells such as neutrophils, macrophages, and monocytes, and soluble factors including cytokines and complement. On the other hand, the adaptive branch is the body's immune response that is tailored to specific antigens, and it therefore takes longer to activate the components involved. The adaptive branch includes cells such as dendritic cells, T cells, and B cells, as well as antibodies—also known as immunoglobulins—which directly interact with antigens and are a very important component of a strong response against an invader.
The first contact that an organism has with a particular antigen will result in the production of effector T and B cells which are act
Document 4:::
Lymphoproliferative response is a specific immune response that entails rapid T-cell replication. Standard antigens, such as tetanus toxoid, that elicit this response are used in lab tests of immune competence.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the second line of defense?
A. fight or flight
B. inflammatory response
C. immune response
D. rejection of foreign bodies
Answer:
|
|
sciq-3929
|
multiple_choice
|
What is the basic unit of structure and function of living things?
|
[
"molecule",
"atom",
"particle",
"cell"
] |
D
|
Relavent Documents:
Document 0:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, a double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 1:::
In biology, cell theory is a scientific theory first formulated in the mid-nineteenth century, that organisms are made up of cells, that they are the basic structural/organizational unit of all organisms, and that all cells come from pre-existing cells. Cells are the basic unit of structure in all organisms and also the basic unit of reproduction.
The theory was once universally accepted, but now some biologists consider non-cellular entities such as viruses living organisms, and thus disagree with the first tenet. As of 2021: "expert opinion remains divided roughly a third each between yes, no and don’t know". As there is no universally accepted definition of life, discussion still continues.
History
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was remarkable at the time, as it was believed that no one had seen these before. To further support this theory, Matthias Schleiden and Theodor Schwann both studied cells of animals and plants. What they discovered were significant differences between the two types of cells. This put forth the idea that cells were fundamental not only to plants, but to animals as well.
Microscopes
The discovery of the cell was made possible through the invention of the microscope. In the first century BC, Romans were able to make glass. They discovered that objects appeared to be larger under the glass. The expanded use of lenses in eyeglasses in the 13th century probably led to more widespread use of simple microscopes (magnifying glasses) with limited magnification. Compound microscopes, which combine an objective lens with an eyepiece to view a real image, achieving much higher magnification, first appeared in Europe around 1620. In 1665, Robert Hooke used a microscope
Document 2:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind: neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases, which has overall aided in the understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks within the cell, such as replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and mobility.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the basic unit of structure and function of living things?
A. molecule
B. atom
C. particle
D. cell
Answer:
|
|
sciq-10741
|
multiple_choice
|
What is the term for the deepest places on earth?
|
[
"trenches",
"mines",
"the core",
"tunnels"
] |
A
|
Relavent Documents:
Document 0:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to develop an interest in these subjects, encouraging secondary school pupils to choose science A levels, which can lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities such as Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it has received funding from the Department for Children, Schools and Families and the Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 1:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 2:::
Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers.
There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM.
Terminology
History
Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu
Document 3:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 4:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is wide variation in emphasis, ranging across business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major, and alumni have gone on to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC), which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on the structure and functioning (anatomy, physiology, biochemistry) of the human body and its relevance to human health, with Caribbean-specific experience. The syllabus is organized under five main sections: living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, and the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for the deepest places on earth?
A. trenches
B. mines
C. the core
D. tunnels
Answer:
|
|
sciq-7846
|
multiple_choice
|
Extrusive igneous rocks are also called what?
|
[
"metamorphic",
"agates",
"magma minerals",
"volcanic rocks"
] |
D
|
Relavent Documents:
Document 0:::
In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects.
Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting.
Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete.
Study
Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the
Document 1:::
Ringwoodite is a high-pressure phase of Mg2SiO4 (magnesium silicate) formed at the high temperatures and pressures of the Earth's mantle between about 525 and 660 km depth. It may also contain iron and hydrogen. It is polymorphous with the olivine phase forsterite (a magnesium iron silicate).
Ringwoodite is notable for being able to contain hydroxide ions (oxygen and hydrogen atoms bound together) within its structure. In this case two hydroxide ions usually take the place of a magnesium ion and two oxide ions.
Combined with evidence of its occurrence deep in the Earth's mantle, this suggests that there is from one to three times the world ocean's equivalent of water in the mantle transition zone from 410 to 660 km deep.
This mineral was first identified in the Tenham meteorite in 1969, and is inferred to be present in large quantities in the Earth's mantle.
Olivine, wadsleyite, and ringwoodite are polymorphs found in the upper mantle of the Earth. At depths greater than about 660 km, other minerals, including some with the perovskite structure, are stable. The properties of these minerals determine many of the properties of the mantle.
Ringwoodite was named after the Australian earth scientist Ted Ringwood (1930–1993), who studied polymorphic phase transitions in the common mantle minerals olivine and pyroxene at pressures equivalent to depths as great as about 600 km.
Characteristics
Ringwoodite is polymorphous with forsterite, Mg2SiO4, and has a spinel structure. Spinel group minerals crystallize in the isometric system with an octahedral habit. Olivine is most abundant in the upper mantle, above about 410 km depth; the olivine polymorphs wadsleyite and ringwoodite are thought to dominate the transition zone of the mantle, a zone present from about 410 to 660 km depth.
Ringwoodite is thought to be the most abundant mineral phase in the lower part of Earth's transition zone. The physical and chemical properties of this mineral partly determine the properties of the mantle at those depths. The pressure r
Document 2:::
The rock cycle is a basic concept in geology that describes transitions through geologic time among the three main rock types: sedimentary, metamorphic, and igneous. Each rock type is altered when it is forced out of its equilibrium conditions. For example, an igneous rock such as basalt may break down and dissolve when exposed to the atmosphere, or melt as it is subducted under a continent. Due to the driving forces of the rock cycle, plate tectonics and the water cycle, rocks do not remain in equilibrium and change as they encounter new environments. The rock cycle explains how the three rock types are related to each other, and how processes change from one type to another over time. This cyclical aspect makes rock change a geologic cycle and, on planets containing life, a biogeochemical cycle.
Transition to igneous rock
When rocks are pushed deep under the Earth's surface, they may melt into magma. If the conditions no longer exist for the magma to stay in its liquid state, it cools and solidifies into an igneous rock. A rock that cools within the Earth is called intrusive or plutonic and cools very slowly, producing a coarse-grained texture such as the rock granite. As a result of volcanic activity, magma (which is called lava when it reaches Earth's surface) may cool very rapidly while exposed to the atmosphere at the Earth's surface; the resulting rocks are called extrusive or volcanic rocks. These rocks are fine-grained and sometimes cool so rapidly that no crystals can form, resulting in a natural glass such as obsidian; however, the most common fine-grained volcanic rock is basalt. Any of the three main types of rocks (igneous, sedimentary, and metamorphic rocks) can melt into magma and cool into igneous rocks.
Secondary changes
Epigenetic change (secondary processes occurring at low temperatures and low pressures) may be arranged under a number of headings, each of which is typical of a group of rocks or rock-forming minerals, though usually more than one of these alt
Document 3:::
The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata. That is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events.
Correlating the rock record
At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods—for the particular geographic region or regions. The geologic record is in no one place entirely complete, for where geologic forces in one age provide a low-lying region that accumulates deposits much like a layer cake, in the next they may have uplifted the region, so that the same area is instead one that is weathering and being torn down by chemistry, wind, temperature, and water. This is to say that in a given location, the geologic record can be, and quite often is, interrupted as the ancient local environment was converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which extend to great depths, thoroughly support the law of superposition.
However, using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale
Document 4:::
Microcline (KAlSi3O8) is an important igneous rock-forming tectosilicate mineral. It is a potassium-rich alkali feldspar. Microcline typically contains minor amounts of sodium. It is common in granite and pegmatites. Microcline forms during slow cooling of orthoclase; it is more stable at lower temperatures than orthoclase. Sanidine is a polymorph of alkali feldspar stable at yet higher temperature. Microcline may be clear, white, pale-yellow, brick-red, or green; it is generally characterized by cross-hatch twinning that forms as a result of the transformation of monoclinic orthoclase into triclinic microcline.
The chemical compound name is potassium aluminium silicate, and it is known as E number reference E555.
Geology
Microcline may be chemically the same as monoclinic orthoclase, but because it belongs to the triclinic crystal system, the prism angle is slightly less than right angles; hence the name "microcline" from the Greek "small slope." It is a fully ordered triclinic modification of potassium feldspar and is dimorphous with orthoclase. Microcline is identical to orthoclase in many physical properties, and can be distinguished by x-ray or optical examination. When viewed under a polarizing microscope, microcline exhibits a minute multiple twinning which forms a grating-like structure that is unmistakable.
Perthite is either microcline or orthoclase with thin lamellae of exsolved albite.
Amazon stone, or amazonite, is a green variety of microcline. It is not found anywhere in the Amazon Basin, however. The Spanish explorers who named it apparently confused it with another green mineral from that region.
The largest documented single crystals of microcline were found in Devils Hole Beryl Mine, Colorado, US and measured ~50x36x14 m. This could be one of the largest crystals of any material found so far.
Microcline is commonly used for the manufacturing of porcelain.
As food additive
The chemical compound name is potassium aluminium silicate, and it
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Extrusive igneous rocks are also called what?
A. metamorphic
B. agates
C. magma minerals
D. volcanic rocks
Answer:
|
|
sciq-3269
|
multiple_choice
|
What changes water vapor to liquid water?
|
[
"condensation",
"global warming",
"fermentation",
"combustion"
] |
A
|
Relavent Documents:
Document 0:::
At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm.
For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of the sorption process, the isotherms cannot be determined explicitly by calculation, but must be recorded experimentally for each product.
The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system.
BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption and finally leading to capillary condensation, explaining these materials' high moisture capacity at high relative humidity.
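A minimal numerical sketch of the kind of calculation BET theory provides, assuming the standard textbook BET equation and illustrative values for the monolayer capacity v_m and the energy constant c (neither of which appears in the excerpt):

```python
# Minimal sketch of the BET adsorption isotherm (standard textbook form).
# v_m and c are illustrative assumptions, not values from the excerpt.
def bet_adsorbed_amount(p_rel, v_m=1.0, c=50.0):
    """Amount adsorbed v at relative pressure p_rel = p/p0 (0 <= p_rel < 1)."""
    return v_m * c * p_rel / ((1.0 - p_rel) * (1.0 + (c - 1.0) * p_rel))

if __name__ == "__main__":
    for p_rel in (0.05, 0.1, 0.3, 0.6, 0.9):
        print(f"p/p0 = {p_rel:.2f}  ->  v/v_m = {bet_adsorbed_amount(p_rel):.3f}")
```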
Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the
Document 1:::
Moisture expansion is the tendency of matter to change in volume in response to a change in moisture content. The macroscopic effect is similar to that of thermal expansion but the microscopic causes are very different. Moisture expansion is caused by hygroscopy.
Matter
Document 2:::
Vaporization (or vaporisation) of an element or compound is a phase transition from the liquid phase to vapor. There are two types of vaporization: evaporation and boiling. Evaporation is a surface phenomenon, whereas boiling is a bulk phenomenon.
Evaporation is a phase transition from the liquid phase to vapor (a state of substance below critical temperature) that occurs at temperatures below the boiling temperature at a given pressure. Evaporation occurs on the surface. Evaporation only occurs when the partial pressure of vapor of a substance is less than the equilibrium vapor pressure. For example, due to constantly decreasing pressures, vapor pumped out of a solution will eventually leave behind a cryogenic liquid.
Boiling is also a phase transition from the liquid phase to gas phase, but boiling is the formation of vapor as bubbles of vapor below the surface of the liquid. Boiling occurs when the equilibrium vapor pressure of the substance is greater than or equal to the atmospheric pressure. The temperature at which boiling occurs is the boiling temperature, or boiling point. The boiling point varies with the pressure of the environment.
Sublimation is a direct phase transition from the solid phase to the gas phase, skipping the intermediate liquid phase. Because it does not involve the liquid phase, it is not a form of vaporization.
The term vaporization has also been used in a colloquial or hyperbolic way to refer to the physical destruction of an object that is exposed to intense heat or explosive force, where the object is actually blasted into small pieces rather than literally converted to gaseous form. Examples of this usage include the "vaporization" of the uninhabited Marshall Island of Elugelab in the 1952 Ivy Mike thermonuclear test. Many other examples can be found throughout the various MythBusters episodes that have involved explosives, chief among them being Cement Mix-Up, where they "vaporized" a cement truck with ANFO.
At the moment o
Document 3:::
The Stefan flow, occasionally called Stefan's flow, is a transport phenomenon concerning the movement of a chemical species by a flowing fluid (typically in the gas phase) that is induced to flow by the production or removal of the species at an interface. Any process that adds the species of interest to or removes it from the flowing fluid may cause the Stefan flow, but the most common processes include evaporation, condensation, chemical reaction, sublimation, ablation, adsorption, absorption, and desorption. It was named after the Slovenian physicist, mathematician, and poet Josef Stefan for his early work on calculating evaporation rates.
The Stefan flow is distinct from diffusion as described by Fick's law, but diffusion almost always also occurs in multi-species systems that are experiencing the Stefan flow. In systems undergoing one of the species addition or removal processes mentioned previously, the addition or removal generates a mean flow in the flowing fluid as the fluid next to the interface is displaced by the production or removal of additional fluid by the processes occurring at the interface. The transport of the species by this mean flow is the Stefan flow. When concentration gradients of the species are also present, diffusion transports the species relative to the mean flow. The total transport rate of the species is then given by a summation of the Stefan flow and diffusive contributions.
An example of the Stefan flow occurs when a droplet of liquid evaporates in air. In this case, the vapor/air mixture surrounding the droplet is the flowing fluid, and liquid/vapor boundary of the droplet is the interface. As heat is absorbed by the droplet from the environment, some of the liquid evaporates into vapor at the surface of the droplet, and flows away from the droplet as it is displaced by additional vapor evaporating from the droplet. This process causes the flowing medium to move away from the droplet at some mean speed that is dependent on
Document 4:::
Condensation is the change of the state of matter from the gas phase into the liquid phase, and is the reverse of vaporization. The word most often refers to the water cycle. It can also be defined as the change in the state of water vapor to liquid water when in contact with a liquid or solid surface or cloud condensation nuclei within the atmosphere. When the transition happens from the gaseous phase into the solid phase directly, the change is called deposition.
Initiation
Condensation is initiated by the formation of atomic/molecular clusters of that species within its gaseous volume—like rain drop or snow flake formation within clouds—or at the contact between such gaseous phase and a liquid or solid surface. In clouds, this can be catalyzed by water-nucleating proteins, produced by atmospheric microbes, which are capable of binding gaseous or liquid water molecules.
Reversibility scenarios
A few distinct reversibility scenarios emerge here with respect to the nature of the surface.
absorption into the surface of a liquid (either of the same substance or one of its solvents)—is reversible as evaporation.
adsorption (as dew droplets) onto solid surface at pressures and temperatures higher than the species' triple point—also reversible as evaporation.
adsorption onto solid surface (as supplemental layers of solid) at pressures and temperatures lower than the species' triple point—is reversible as sublimation.
Most common scenarios
Condensation commonly occurs when a vapor is cooled and/or compressed to its saturation limit when the molecular density in the gas phase reaches its maximal threshold. Vapor cooling and compressing equipment that collects condensed liquids is called a "condenser".
Measurement
Psychrometry measures the rates of condensation through evaporation into the air moisture at various atmospheric pressures and temperatures. Water is the product of its vapor condensation—condensation is the process of such phase conversion.
Applicatio
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What changes water vapor to liquid water?
A. condensation
B. global warming
C. fermentation
D. combustion
Answer:
|
|
sciq-56
|
multiple_choice
|
The angle at which light bends when it enters a different medium is known as what?
|
[
"bounce",
"frequency",
"resonance",
"refraction"
] |
D
|
Relavent Documents:
Document 0:::
Treatise on Light: In Which Are Explained the Causes of That Which Occurs in Reflection & Refraction (: Où Sont Expliquées les Causes de ce qui Luy Arrive Dans la Reflexion & Dans la Refraction) is a book written by Dutch polymath Christiaan Huygens that was published in French in 1690. The book describes Huygens's conception of the nature of light propagation which makes it possible to explain the laws of geometrical optics shown in Descartes's Dioptrique, which Huygens aimed to replace.
Unlike Newton's corpuscular theory, which was presented in the Opticks, Huygens conceived of light as an irregular series of shock waves which proceeds with very great, but finite, velocity through the aether, similar to sound waves. Moreover, he proposed that each point of a wavefront is itself the origin of a secondary spherical wave, a principle known today as the Huygens–Fresnel principle. The book is considered a pioneering work of theoretical and mathematical physics and the first mechanistic account of an unobservable physical phenomenon.
Overview
Huygens worked on the mathematics of light rays and the properties of refraction in his work Dioptrica, which began in 1652 but remained unpublished, and which predated his lens grinding work. In 1672, the problem of the strange refraction of the Iceland crystal created a puzzle regarding the physics of refraction that Huygens wanted to solve. Huygens eventually was able to solve this problem by means of elliptical waves in 1677 and confirmed his theory by experiments mostly after critical reactions in 1679.
His explanation of birefringence was based on three hypotheses: (1) There are inside the crystal two media in which light waves proceed, (2) one medium behaves as ordinary ether and carries the normally refracted ray, and (3) the velocity of the waves in the other medium is dependent on direction, so that the waves do not expand in spherical form, but rather as ellipsoids of revolution; this second medium carries the abnorm
Document 1:::
Optics is the branch of physics which involves the behavior and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics usually describes the behavior of visible, ultraviolet, and infrared light. Because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties.
Document 2:::
Archaeo-optics, or archaeological optics, is the study of the experience and ritual use of light by ancient peoples. Archaeological optics is a branch of sensory archaeology, which explores human perceptions of the physical environment in the remote past, and is a sibling of archaeoastronomy, which deals with ancient observations of celestial bodies, and archaeological acoustics, which deals with applications of sound.
Research by several investigators around the world has uncovered how ancient peoples encountered and used the camera obscura principle for a variety of purposes. In a darkened chamber, light passing through a small opening can create haunting and ephemeral moving images, which could have triggered and reinforced ground breaking modes of thought, forms of representation, and belief in otherworldly realms.
Optical basis
Light
Visible light, a small portion of the electromagnetic spectrum, encompasses wavelengths between 380-750 nanometers, which humans perceive as the colors of the spectrum: red, orange, yellow, green, blue, indigo, and violet. Light behaves according to a well-defined set of rules: it travels in straight lines, unless otherwise refracted or reflected by another object, or curved by gravity.
Vision
An eye is essentially a darkened chamber with a small hole in front that allows light to enter. The lens, just behind the pupil's aperture, is perfectly clear but appears black because the interior space behind it is dark. Rays of light pass through the lens, producing an upside-down image on the retina. The brain reorients the image. This optical process of projecting an inverted image is known as a camera obscura (from the Latin, meaning dark room). The first pinhole/camera obscura eyes evolved about 540 million years ago on a sea mollusk, known as a nautilus, during the Cambrian period. The camera obscura principle is primordial, and life on earth has evolved to take advantage of it.
Camera obscura
The camera obscura principle is
Document 3:::
The study of image formation encompasses the radiometric and geometric processes by which 2D images of 3D objects are formed. In the case of digital images, the image formation process also includes analog to digital conversion and sampling.
Imaging
The imaging process is a mapping of an object to an image plane. Each point on the image corresponds to a point on the object. An illuminated object will scatter light toward a lens, and the lens will collect and focus the light to create the image. The ratio of the height of the image to the height of the object is the magnification. The spatial extent of the image surface and the focal length of the lens determine the field of view of the lens. Image formation by a mirror works similarly: the mirror has a center of curvature, and the focal length of the mirror is half the radius of curvature.
Illumination
An object may be illuminated by the light from an emitting source such as the sun, a light bulb or a Light Emitting Diode. The light incident on the object is reflected in a manner dependent on the surface properties of the object. For rough surfaces, the reflected light is scattered in a manner described by the Bi-directional Reflectance Distribution Function (BRDF) of the surface. The BRDF of a surface is the ratio of the exiting power per square meter per steradian (radiance) to the incident power per square meter (irradiance). The BRDF typically varies with angle and may vary with wavelength, but a specific important case is a surface that has constant BRDF. This surface type is referred to as Lambertian and the magnitude of the BRDF is R/π, where R is the reflectivity of the surface. The portion of scattered light that propagates toward the lens is collected by the entrance pupil of the imaging lens over the field of view.
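To make the Lambertian case concrete, here is a minimal sketch that multiplies an incident irradiance by the constant BRDF of R/π described above to obtain the exiting radiance; the numerical values are illustrative assumptions, not data from the text:

```python
import math

# For a Lambertian surface the BRDF is constant and equal to R / pi,
# so the exiting radiance is the BRDF times the incident irradiance.
def lambertian_radiance(irradiance_w_per_m2, reflectivity):
    """Radiance (W / m^2 / sr) leaving a Lambertian surface of given reflectivity."""
    brdf = reflectivity / math.pi
    return brdf * irradiance_w_per_m2

# Illustrative numbers: 1000 W/m^2 incident on a surface with R = 0.5.
print(lambertian_radiance(1000.0, 0.5))  # ~159.2 W / m^2 / sr
```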
Field of view and imagery
The Field of view of a lens is limited by the size of the image plane and the focal length of the lens. The relationship between a location on the image and a location on t
Document 4:::
In physics, ray tracing is a method for calculating the path of waves or particles through a system with regions of varying propagation velocity, absorption characteristics, and reflecting surfaces. Under these circumstances, wavefronts may bend, change direction, or reflect off surfaces, complicating analysis. Ray tracing solves the problem by repeatedly advancing idealized narrow beams called rays through the medium by discrete amounts. Simple problems can be analyzed by propagating a few rays using simple mathematics. More detailed analysis can be performed by using a computer to propagate many rays.
When applied to problems of electromagnetic radiation, ray tracing often relies on approximate solutions to Maxwell's equations that are valid as long as the light waves propagate through and around objects whose dimensions are much greater than the light's wavelength. Ray theory does not describe phenomena such as interference and diffraction, which require wave theory (involving the phase of the wave).
Technique
Ray tracing works by assuming that the particle or wave can be modeled as a large number of very narrow beams (rays), and that there exists some distance, possibly very small, over which such a ray is locally straight. The ray tracer will advance the ray over this distance, and then use a local derivative of the medium to calculate the ray's new direction. From this location, a new ray is sent out and the process is repeated until a complete path is generated. If the simulation includes solid objects, the ray may be tested for intersection with them at each step, making adjustments to the ray's direction if a collision is found. Other properties of the ray may be altered as the simulation advances as well, such as intensity, wavelength, or polarization. This process is repeated with as many rays as are necessary to understand the behavior of the system.
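A minimal sketch of the stepping idea described above, assuming a made-up smoothly varying refractive index and a simplified gradient-based bending rule; this illustrates the general technique, not the algorithm of any particular ray tracer:

```python
import math

def refractive_index(x, y):
    # Illustrative medium: index increases linearly with height y (an assumption).
    return 1.0 + 0.1 * y

def index_gradient(x, y, h=1e-4):
    # Numerical estimate of the medium's local derivative, used to bend the ray.
    dn_dx = (refractive_index(x + h, y) - refractive_index(x - h, y)) / (2 * h)
    dn_dy = (refractive_index(x, y + h) - refractive_index(x, y - h)) / (2 * h)
    return dn_dx, dn_dy

def trace_ray(x, y, angle, step=0.01, n_steps=1000):
    """Advance a ray in small, locally straight segments, bending it toward higher index."""
    path = [(x, y)]
    for _ in range(n_steps):
        # Locally straight advance over one small step.
        x += step * math.cos(angle)
        y += step * math.sin(angle)
        # Bend the direction using the local index gradient (simplified rule).
        dn_dx, dn_dy = index_gradient(x, y)
        n = refractive_index(x, y)
        # Only the gradient component perpendicular to the ray changes its direction.
        perp = -math.sin(angle) * dn_dx + math.cos(angle) * dn_dy
        angle += step * perp / n
        path.append((x, y))
    return path

path = trace_ray(0.0, 0.0, angle=0.1)
print(path[-1])  # final position of the ray after 1000 small steps
```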
Uses
Astronomy
Ray tracing is being increasingly used in astronomy to simulate realistic images of
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The angle at which light bends when it enters a different medium is known as what?
A. bounce
B. frequency
C. resonance
D. refraction
Answer:
|
|
sciq-7797
|
multiple_choice
|
What is the term for the length of the route between two points?
|
[
"trajectory",
"length",
"area",
"distance"
] |
D
|
Relavent Documents:
Document 0:::
Sinuosity, sinuosity index, or sinuosity coefficient of a continuously differentiable curve having at least one inflection point is the ratio of the curvilinear length (along the curve) and the Euclidean distance (straight line) between the end points of the curve. This dimensionless quantity can also be rephrased as the "actual path length" divided by the "shortest path length" of a curve.
The value ranges from 1 (case of straight line) to infinity (case of a closed loop, where the shortest path length is zero or for an infinitely-long actual path).
Interpretation
The curve must be continuous (no jump) between the two ends. The sinuosity value is really significant when the line is continuously differentiable (no angular point). The distance between both ends can also be evaluated by a plurality of segments according to a broken line passing through the successive inflection points (sinuosity of order 2).
The calculation of the sinuosity is valid in a 3-dimensional space (e.g. for the central axis of the small intestine), although it is often performed in a plane (with then a possible orthogonal projection of the curve in the selected plan; "classic" sinuosity on the horizontal plane, longitudinal profile sinuosity on the vertical plane).
The classification of a sinuosity (e.g. strong / weak) often depends on the cartographic scale of the curve (see the coastline paradox for further details) and of the object velocity which flowing therethrough (river, avalanche, car, bicycle, bobsleigh, skier, high speed train, etc.): the sinuosity of the same curved line could be considered very strong for a high speed train but low for a river. Nevertheless, it is possible to see a very strong sinuosity in the succession of few river bends, or of laces on some mountain roads.
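A minimal sketch of this ratio for a curve approximated by a polyline; the sample points are illustrative:

```python
import math

def sinuosity(points):
    """Curvilinear length of a polyline divided by the straight-line distance between its end points."""
    path_length = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    end_to_end = math.dist(points[0], points[-1])
    return path_length / end_to_end

# Illustrative example: points sampled along one arch of a sine curve.
pts = [(x / 100.0, math.sin(math.pi * x / 100.0)) for x in range(101)]
print(sinuosity(pts))  # a little above 1, since the curve is longer than its chord
```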
Notable values
The sinuosity S of:
2 inverted continuous semicircles located in the same plane is π/2 ≈ 1.571. It is independent of the circle radius;
a sine function (over a whole number n of half-periods), wh
Document 1:::
In Euclidean geometry, the distance from a point to a line is the shortest distance from a given point to any point on an infinite straight line. It is the perpendicular distance of the point to the line, the length of the line segment which joins the point to nearest point on the line. The algebraic expression for calculating it can be derived and expressed in several ways.
Knowing the distance from a point to a line can be useful in various situations—for example, finding the shortest distance to reach a road, quantifying the scatter on a graph, etc. In Deming regression, a type of linear curve fitting, if the dependent and independent variables have equal variance this results in orthogonal regression in which the degree of imperfection of the fit is measured for each data point as the perpendicular distance of the point from the regression line.
Line defined by an equation
In the case of a line in the plane given by the equation ax + by + c = 0, where a, b and c are real constants with a and b not both zero, the distance from the line to a point (x0, y0) is
d = |a x0 + b y0 + c| / sqrt(a^2 + b^2).
The point on this line which is closest to (x0, y0) has coordinates:
x = x0 - a (a x0 + b y0 + c) / (a^2 + b^2),  y = y0 - b (a x0 + b y0 + c) / (a^2 + b^2).
Horizontal and vertical lines
In the general equation of a line, ax + by + c = 0, a and b cannot both be zero unless c is also zero, in which case the equation does not define a line. If a = 0 and b is nonzero, the line is horizontal and has equation y = -c/b. The distance from (x0, y0) to this line is measured along a vertical line segment of length |y0 + c/b| = |b y0 + c| / |b| in accordance with the formula. Similarly, for vertical lines (b = 0) the distance between the same point and the line is |x0 + c/a| = |a x0 + c| / |a|, as measured along a horizontal line segment.
Line defined by two points
If the line passes through two points P1 = (x1, y1) and P2 = (x2, y2), then the distance of (x0, y0) from the line is:
d = |(y2 - y1) x0 - (x2 - x1) y0 + x2 y1 - y2 x1| / sqrt((y2 - y1)^2 + (x2 - x1)^2).
The denominator of this expression is the distance between P1 and P2. The numerator is twice the area of the triangle with its vertices at the three points (x0, y0), P1 and P2. The expression is equivalent to h = 2A / b, which can be obtained by rearranging the standard formula for the area of a triangle: A = (1/2) b h, where b is the length of a side and h is the perpendicular distance from the opposite vertex.
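A minimal numerical check of the two-point form above, using arbitrary illustrative coordinates:

```python
import math

def point_line_distance(p0, p1, p2):
    """Distance from point p0 to the line through p1 and p2 (all 2D tuples)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    numerator = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    denominator = math.hypot(y2 - y1, x2 - x1)
    return numerator / denominator

# Illustrative check: distance from (3, 4) to the x-axis (line through (0, 0) and (1, 0)) is 4.
print(point_line_distance((3, 4), (0, 0), (1, 0)))
```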
Document 2:::
Advanced Placement (AP) Calculus (also known as AP Calc, Calc AB / Calc BC or simply AB / BC) is a set of two distinct Advanced Placement calculus courses and exams offered by the American nonprofit organization College Board. AP Calculus AB covers basic introductions to limits, derivatives, and integrals. AP Calculus BC covers all AP Calculus AB topics plus additional topics (including integration by parts, Taylor series, parametric equations, vector calculus, and polar coordinate functions).
AP Calculus AB
AP Calculus AB is an Advanced Placement calculus course. It is traditionally taken after precalculus and is the first calculus course offered at most schools except for possibly a regular calculus class. The Pre-Advanced Placement pathway for math helps prepare students for further Advanced Placement classes and exams.
Purpose
According to the College Board:
Topic outline
The material includes the study and application of differentiation and integration, and graphical analysis including limits, asymptotes, and continuity. An AP Calculus AB course is typically equivalent to one semester of college calculus.
Analysis of graphs (predicting and explaining behavior)
Limits of functions (one and two sided)
Asymptotic and unbounded behavior
Continuity
Derivatives
Concept
At a point
As a function
Applications
Higher order derivatives
Techniques
Integrals
Interpretations
Properties
Applications
Techniques
Numerical approximations
Fundamental theorem of calculus
Antidifferentiation
L'Hôpital's rule
Separable differential equations
AP Calculus BC
AP Calculus BC is equivalent to a full year regular college course, covering both Calculus I and II. After passing the exam, students may move on to Calculus III (Multivariable Calculus).
Purpose
According to the College Board,
Topic outline
AP Calculus BC includes all of the topics covered in AP Calculus AB, as well as the following:
Convergence tests for series
Taylor series
Parametric equations
Polar functions (inclu
Document 3:::
In mathematics, the Euclidean distance between two points in Euclidean space is the length of a line segment between the two points.
It can be calculated from the Cartesian coordinates of the points using the Pythagorean theorem, therefore occasionally being called the Pythagorean distance. These names come from the ancient Greek mathematicians Euclid and Pythagoras, although Euclid did not represent distances as numbers, and the connection from the Pythagorean theorem to distance calculation was not made until the 18th century.
The distance between two objects that are not points is usually defined to be the smallest distance among pairs of points from the two objects. Formulas are known for computing distances between different types of objects, such as the distance from a point to a line. In advanced mathematics, the concept of distance has been generalized to abstract metric spaces, and other distances than Euclidean have been studied. In some applications in statistics and optimization, the square of the Euclidean distance is used instead of the distance itself.
Distance formulas
One dimension
The distance between any two points on the real line is the absolute value of the numerical difference of their coordinates, their absolute difference. Thus if p and q are two points on the real line, then the distance between them is given by:
d(p, q) = |p - q|.
A more complicated formula, giving the same value, but generalizing more readily to higher dimensions, is:
d(p, q) = sqrt((p - q)^2).
In this formula, squaring and then taking the square root leaves any positive number unchanged, but replaces any negative number by its absolute value.
Two dimensions
In the Euclidean plane, let point p have Cartesian coordinates (p1, p2) and let point q have coordinates (q1, q2). Then the distance between p and q is given by:
d(p, q) = sqrt((q1 - p1)^2 + (q2 - p2)^2).
This can be seen by applying the Pythagorean theorem to a right triangle with horizontal and vertical sides, having the line segment from to as its hypotenuse. The two squared formulas inside the square root give t
Document 4:::
Interdimensional may refer to:
Interdimensional hypothesis
Interdimensional doorway
Interdimensional travel
Dimension
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for the length of the route between two points?
A. trajectory
B. length
C. area
D. distance
Answer:
|
|
sciq-2599
|
multiple_choice
|
What can echinoderms sense with their simple eyes?
|
[
"light",
"electricity",
"colors",
"shapes"
] |
A
|
Relavent Documents:
Document 0:::
A Knollenorgan is an electroreceptor in the skin of weakly electric fish of the family Mormyridae (Elephantfish) from Africa. The structure was first described by Viktor Franz (1921), a German anatomist unaware of its function. They are named after "Knolle", German for "tuberous root" which describes their structure.
Structure and function
Knollenorgans contain modified epithelial cells that act as sensory transducers for electric fields. Besides these, there are supporting cells and a sensory neuron. The neuron projects to the fish's brain, specifically to the nucleus of the electrosensory lateral line lobe (nELL) of the medulla via the posterior branch of the lateral line nerve.
The organs are embedded in the thickened epidermis. The receptor cells lie buried in the deeper layers of the epidermis, where they expand into a pocket in the superficial layers of the corium. The sense organ is surrounded by a basement membrane which separates corium from epidermis. Epithelial cells form a loose plug over the sensory receptors, allowing capacity-coupled current to pass from the external environment to the sensory receptor.
Knollenorgans lack the jelly-filled canal leading from sensory receptor cells to the external environment characteristic of the Ampullae of Lorenzini found in sharks and other basal groups of fishes. Knollenorgans are sensitive to electrical stimuli at frequencies between 20 hertz and 20 kilohertz, with electric fields as small as 0.1 millivolt per centimetre. They are used to detect the weak electric organ discharges of other electric fish, usually of their own species.
See also
Ampullae of Lorenzini – the ancestral type of electroreceptor in vertebrates
Document 1:::
Cephalization is an evolutionary trend in which, over many generations, the mouth, sense organs, and nerve ganglia become concentrated at the front end of an animal, producing a head region. This is associated with movement and bilateral symmetry, such that the animal has a definite head end. This led to the formation of a highly sophisticated brain in three groups of animals, namely the arthropods, cephalopod molluscs, and vertebrates.
Animals without bilateral symmetry
Cnidaria, such as the radially symmetrical Hydrozoa, show some degree of cephalization. The Anthomedusae have a head end with their mouth, photoreceptive cells, and a concentration of neural cells.
Bilateria
Cephalization is a characteristic feature of the Bilateria, a large group containing the majority of animal phyla. These have the ability to move, using muscles, and a body plan with a front end that encounters stimuli first as the animal moves forwards, and accordingly has evolved to contain many of the body's sense organs, able to detect light, chemicals, and gravity. There is often also a collection of nerve cells able to process the information from these sense organs, forming a brain in several phyla and one or more ganglia in others.
Acoela
The Acoela are basal bilaterians, part of the Xenacoelomorpha. They are small and simple animals, and have very slightly more nerve cells at the head end than elsewhere, not forming a distinct and compact brain. This represents an early stage in cephalization.
Flatworms
The Platyhelminthes (flatworms) have a more complex nervous system than the Acoela, and are lightly cephalized, for instance having an eyespot above the brain, near the front end.
Complex active bodies
The philosopher Michael Trestman noted that three bilaterian phyla, namely the arthropods, the molluscs in the shape of the cephalopods, and the chordates, were distinctive in having "complex active bodies", something that the acoels and flatworms did not have. Any such animal, whe
Document 2:::
Carcinology is a branch of zoology that consists of the study of crustaceans, a group of arthropods that includes lobsters, crayfish, shrimp, krill, copepods, barnacles and crabs. Other names for carcinology are malacostracology, crustaceology, and crustalogy, and a person who studies crustaceans is a carcinologist or occasionally a malacostracologist, a crustaceologist, or a crustalogist.
The word carcinology derives from Greek , karkínos, "crab"; and , -logia.
Subfields
Carcinology is a subdivision of arthropodology, the study of arthropods which includes arachnids, insects, and myriapods. Carcinology branches off into taxonomically oriented disciplines such as:
astacology – the study of crayfish
cirripedology – the study of barnacles
copepodology – the study of copepods
Journals
Scientific journals devoted to the study of crustaceans include:
Crustaceana
Journal of Crustacean Biology
Nauplius
See also
Entomology
Publications in carcinology
List of carcinologists
Document 3:::
Mammalian vision is the process of mammals perceiving light, analyzing it and forming subjective sensations, on the basis of which the animal's idea of the spatial structure of the external world is formed. Responsible for this process in mammals is the visual sensory system, the foundations of which were formed at an early stage in the evolution of chordates. Its peripheral part is formed by the eyes, the intermediate (by the transmission of nerve impulses) - the optic nerves, and the central - the visual centers in the cerebral cortex.
The recognition of visual stimuli in mammals is the result of the joint work of the eyes and the brain. At the same time, a significant part of the visual information is processed already at the receptor level, which allows to significantly reduce the amount of such information received by the brain. Elimination of redundancy in the amount of information is inevitable: if the amount of information delivered to the receptors of the visual system is measured in millions of bits per second (in humans - about 1 bits/s), the capabilities of the nervous system to process it are limited to tens of bits per second.
The organs of vision in mammals are, as a rule, well developed, although in their life they are of less importance than for birds: usually mammals pay little attention to immovable objects, so even cautious animals such as a fox or a hare may come close to a human who stands still without movement. The size of the eyes in mammals is relatively small; in humans, eye weight is 1% of the mass of the head, while in a starling it reaches 15%. Nocturnal animals (for example, tarsiers) and animals that live in open landscapes have larger eyes. The vision of forest animals is not so sharp, and in burrowing underground species (moles, gophers, zokors), eyes are reduced to a greater extent, in some cases (marsupial moles, mole rats, blind mole), they are even covered by a skin membrane.
Mammalian eye
Like other vertebrates, the mammal
Document 4:::
The electric rays are a group of rays, flattened cartilaginous fish with enlarged pectoral fins, composing the order Torpediniformes . They are known for being capable of producing an electric discharge, ranging from 8 to 220 volts, depending on species, used to stun prey and for defense. There are 69 species in four families.
Perhaps the best known members are those of the genus Torpedo. The torpedo undersea weapon is named after it. The name comes from the Latin , 'to be stiffened or paralyzed', from the effect on someone who touches the fish.
Description
Electric rays have a rounded pectoral disc with two moderately large rounded-angular (not pointed or hooked) dorsal fins (reduced in some Narcinidae), and a stout muscular tail with a well-developed caudal fin. The body is thick and flabby, with soft loose skin with no dermal denticles or thorns. A pair of kidney-shaped electric organs are at the base of the pectoral fins. The snout is broad, large in the Narcinidae, but reduced in all other families. The mouth, nostrils, and five pairs of gill slits are underneath the disc.
Electric rays are found from shallow coastal waters down to at least deep. They are sluggish and slow-moving, propelling themselves with their tails, not by using their pectoral fins as other rays do. They feed on invertebrates and small fish. They lie in wait for prey below the sand or other substrate, using their electricity to stun and capture it.
Relationship to humans
History of research
The electrogenic properties of electric rays have been known since antiquity, although their nature was not understood. The ancient Greeks used electric rays to numb the pain of childbirth and operations. In his dialogue Meno, Plato has the character Meno accuse Socrates of "stunning" people with his puzzling questions, in a manner similar to the way the torpedo fish stuns with electricity. Scribonius Largus, a Roman physician, recorded the use of torpedo fish for treatment of headaches and gout
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What can echinoderms sense with their simple eyes?
A. light
B. electricity
C. colors
D. shapes
Answer:
|
|
sciq-8608
|
multiple_choice
|
What is a long, tube-shaped bundle of neurons, protected by the vertebrae?
|
[
"a ganglion",
"a dendrite",
"the spinal cord",
"an axon"
] |
C
|
Relavent Documents:
Document 0:::
The sulcus limitans is a shallow, longitudinal groove separating the developing gray matter into basal and alar plates along the length of the neural tube. It extends the length of the spinal cord and through the mesencephalon.
Document 1:::
Catherina Gwynne Becker (née Krüger) is an Alexander von Humboldt Professor at TU Dresden, and was formerly Professor of Neural Development and Regeneration at the University of Edinburgh.
Early life and education
Catherina Becker was born in Marburg, Germany in 1964. She was educated at the in Bremen, before going on to study at the University of Bremen where she obtained an MSci of Biology and her PhD (Dr. rer. nat.) in 1993, investigating visual system development and regeneration in frogs and salamanders under the supervision of Gerhard Roth. She then trained as post-doctorate at the Swiss Federal Institute of Technology in Zürich, the Department Dev Cell Biol funded by an EMBO long-term fellowship, at the University of California, Irvine in USA and the Centre for Molecular Neurobiology Hamburg (ZMNH), Germany where she took a position of group leader in 2000 and finished her ‚Habilitation‘ in neurobiology in 2012.
Career
Becker joined the University of Edinburgh in 2005 as senior Lecturer and was appointed personal chair in neural development and regeneration in 2013. She was also the Director of Postgraduate Training at the Centre for Neuroregeneration up to 2015, then centre director up to 2017. In 2021 she received an Alexander von Humboldt Professorship, joining the at the Technical University of Dresden.
Research
Becker's research focuses on a better understanding of the factors governing the generation of neurons and axonal pathfinding in the CNS during development and regeneration using the zebrafish model to identify fundamental mechanisms in vertebrates with clear translational implications for CNS injury and neurodegenerative diseases.
The Becker group established the zebrafish as a model for spinal cord regeneration.
Their research found that functional regeneration is near perfect, but anatomical repair does not fully recreate the previous network, instead, new neurons are generated and extensive rewiring occurs.
They have identified neurotra
Document 2:::
The brain (or encephalon) is an organ that serves as the center of the nervous system in all vertebrate and most invertebrate animals. The brain is the largest cluster of neurons in the body and is typically located in the head, usually near organs for special senses such as vision, hearing and olfaction. It is the most specialized and energy-consuming organ in the body, responsible for complex sensory perception, motor control, endocrine regulation and the development of intelligence.
While invertebrate brains arise from paired segmental ganglia (each of which is only responsible for the respective body segment) of the ventral nerve cord, vertebrate brains develop axially from the midline dorsal nerve cord as a vesicular enlargement at the rostral end of the neural tube, with centralized control over all body segments. All vertebrate brains can be embryonically divided into three parts: the forebrain (prosencephalon, subdivided into telencephalon and diencephalon), midbrain (mesencephalon) and hindbrain (rhombencephalon, subdivided into metencephalon and myelencephalon). The spinal cord, which directly interacts with somatic functions below the head, can be considered a caudal extension of the myelencephalon enclosed inside the vertebral column. Together, the brain and spinal cord constitute the central nervous system in all vertebrates.
In humans, the cerebral cortex contains approximately 14–16 billion neurons, and the estimated number of neurons in the cerebellum is 55–70 billion. Each neuron is connected by synapses to several thousand other neurons, typically communicating with one another via root-like protrusions called dendrites and long fiber-like extensions called axons, which are usually myelinated and carry trains of rapid micro-electric signal pulses called action potentials to target specific recipient cells in other areas of the brain or distant parts of the body. The prefrontal cortex, which controls executive functions, is particularly well devel
Document 3:::
The neural groove is a shallow median groove of the neural plate between the neural folds of an embryo. The neural plate is a thick sheet of ectoderm surrounded on either side by the neural folds, two longitudinal ridges in front of the primitive streak of the developing embryo.
The groove gradually deepens as the neural folds become elevated, and ultimately the folds meet and coalesce in the middle line and convert the groove into a closed tube, the neural tube or canal, the ectodermal wall of which forms the rudiment of the nervous system.
After the coalescence of the neural folds over the anterior end of the primitive streak, the blastopore no longer opens on the surface but into the closed canal of the neural tube, and thus a transitory communication, the neurenteric canal, is established between the neural tube and the primitive digestive tube. The coalescence of the neural folds occurs first in the region of the hind-brain, and from there extends forward and backward; toward the end of the third week the front opening (anterior neuropore) of the tube finally closes at the anterior end of the future brain, and forms a recess which is in contact, for a time, with the overlying ectoderm; the hinder part of the neural groove presents for a time a rhomboidal shape, and to this expanded portion the term sinus rhomboidalis has been applied. Before the neural groove is closed a ridge of ectodermal cells appears along the prominent margin of each neural fold; this is termed the neural crest or ganglion ridge, and from it, the spinal and cranial nerve ganglia and the ganglia of the sympathetic nervous system are developed. By the upward growth of the mesoderm the neural tube is ultimately separated from the overlying ectoderm.
The cephalic end of the neural groove exhibits several dilatations, which, when the tube is closed, assume the form of three vesicles; these constitute the three primary cerebral vesicles and correspond respectively to the future fore-brain (p
Document 4:::
The ovarian cortex is the outer portion of the ovary. The ovarian follicles are located within the ovarian cortex. The ovarian cortex is made up of connective tissue. Ovarian cortex tissue transplant has been performed to treat infertility.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a long, tube-shaped bundle of neurons, protected by the vertebrae?
A. a ganglion
B. a dendrite
C. the spinal cord
D. an axon
Answer:
|
|
sciq-5079
|
multiple_choice
|
What might eventually happen to a species if it is unable to reproduce?
|
[
"natural selection",
"migration",
"adaptation",
"extinction"
] |
D
|
Relavent Documents:
Document 0:::
Genetic viability is the ability of the genes present to allow a cell, organism or population to survive and reproduce. The term is generally used to mean the chance or ability of a population to avoid the problems of inbreeding. Less commonly genetic viability can also be used in respect to a single cell or on an individual level.
Inbreeding depletes heterozygosity of the genome, meaning there is a greater chance of identical alleles at a locus. When these alleles are non-beneficial, homozygosity could cause problems for genetic viability. These problems could include effects on the individual fitness (higher mortality, slower growth, more frequent developmental defects, reduced mating ability, lower fecundity, greater susceptibility to disease, lowered ability to withstand stress, reduced intra- and inter-specific competitive ability) or effects on the entire population fitness (depressed population growth rate, reduced regrowth ability, reduced ability to adapt to environmental change). See Inbreeding depression. When a population of plants or animals loses their genetic viability, their chance of going extinct increases.
Necessary conditions
To be genetically viable, a population of plants or animals requires a certain amount of genetic diversity and a certain population size. For long-term genetic viability, the population size should consist of enough breeding pairs to maintain genetic diversity. The precise effective population size can be calculated using a minimum viable population analysis. Higher genetic diversity and a larger population size will decrease the negative effects of genetic drift and inbreeding in a population. When adequate measures have been met, the genetic viability of a population will increase.
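The excerpt gives no formula, but one commonly used expression for effective population size with unequal numbers of breeding males and females is Ne = 4 Nm Nf / (Nm + Nf); the sketch below evaluates it for illustrative numbers, only to show the kind of quantity a minimum viable population analysis works with:

```python
# Standard sex-ratio correction for effective population size (an assumption here;
# the excerpt itself does not state a formula): Ne = 4 * Nm * Nf / (Nm + Nf).
def effective_population_size(n_males, n_females):
    return 4.0 * n_males * n_females / (n_males + n_females)

# Illustrative example: 20 breeding males and 80 breeding females.
print(effective_population_size(20, 80))  # 64.0 -- well below the census size of 100
```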
Causes for decrease
The main cause of a decrease in genetic viability is loss of habitat. This loss can occur because of, for example urbanization or deforestation causing habitat fragmentation. Natural events like earthquakes, floods
Document 1:::
Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology.
The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis.
Subfields
Evolution is the central unifying concept in biology. Biology can be divided into various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution
Document 2:::
A behaviour mutation is a genetic mutation that alters genes that control the way in which an organism behaves, causing their behavioural patterns to change.
A mutation is a change or error in the genomic sequence of a cell. It can occur during meiosis or replication of DNA, as well as due to ionizing or UV radiation, transposons, mutagenic chemicals, viruses and a number of other factors. Mutations usually (but not always) result in a change in an organism's fitness. These changes are largely deleterious, having a negative effect on fitness; however, they can also be neutral and even advantageous.
It is theorized that these mutations, along with genetic recombination, are the raw material upon which natural selection can act to form evolutionary processes. This is due to selection's tendency to "pick and choose" mutations which are advantageous and pass them on to an organism's offspring, while discarding deleterious mutations. In asexual lineages, these mutations will always be passed on, causing them to become a crucial factor in whether the lineage will survive or go extinct.
One way that mutations manifest themselves is behaviour mutation. Some examples of this could be variations in mating patterns, increasingly aggressive or passive demeanor, how an individual learns and the way an individual interacts and coordinates with others.
Behaviour mutations have important implications on the nature of the evolution of animal behaviour. They can help us understand how different forms of behaviour evolve, especially behaviour which can seem strange or out of place. In other cases, they can help us understand how important patterns of behaviour were able to arise – on the back of a simple gene mutation. Finally, they can help provide key insight on the nature of speciation events which can occur when a behaviour mutation changes the courtship methods and manner of mating in sexually reproducing species.
History
Ethology, the study of animal behaviour, has been a
Document 3:::
Error catastrophe refers to the cumulative loss of genetic information in a lineage of organisms due to high mutation rates. The mutation rate above which error catastrophe occurs is called the error threshold. Both terms were coined by Manfred Eigen in his mathematical evolutionary theory of the quasispecies.
The term is most widely used to refer to mutation accumulation to the point of inviability of the organism or virus, where it cannot produce enough viable offspring to maintain a population. This use of Eigen's term was adopted by Lawrence Loeb and colleagues to describe the strategy of lethal mutagenesis to cure HIV by using mutagenic ribonucleoside analogs.
There was an earlier use of the term introduced in 1963 by Leslie Orgel in a theory for cellular aging, in which errors in the translation of proteins involved in protein translation would amplify the errors until the cell was inviable. This theory has not received empirical support.
Error catastrophe is predicted in certain mathematical models of evolution and has also been observed empirically.
Like every organism, viruses 'make mistakes' (or mutate) during replication. The resulting mutations increase biodiversity among the population and help subvert the ability of a host's immune system to recognise it in a subsequent infection. The more mutations the virus makes during replication, the more likely it is to avoid recognition by the immune system and the more diverse its population will be (see the article on biodiversity for an explanation of the selective advantages of this). However, if it makes too many mutations, it may lose some of its biological features which have evolved to its advantage, including its ability to reproduce at all.
The question arises: how many mutations can be made during each replication before the population of viruses begins to lose self-identity?
Basic mathematical model
Consider a virus which has a genetic identity modeled by a string of ones and zeros (e.g.
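The model above is truncated, so the following is only a generic illustration of the underlying idea, assuming a per-site mutation probability mu and a genome of L sites: the fraction of copies made without any error is (1 - mu)^L, which collapses once mu*L becomes large:

```python
# Generic illustration (not the excerpt's exact model): probability that a genome
# of the given length is copied with no mutations at a per-site rate mu.
def error_free_fraction(mu, genome_length):
    return (1.0 - mu) ** genome_length

# Illustrative genome of 10,000 sites at increasing per-site mutation rates.
for mu in (1e-6, 1e-5, 1e-4, 1e-3):
    print(f"mu = {mu:.0e}  ->  error-free fraction = {error_free_fraction(mu, 10_000):.6f}")
```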
Document 4:::
Tinbergen's four questions, named after 20th century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour. These are also commonly referred to as levels of analysis. It suggests that an integrative understanding of behaviour must include ultimate (evolutionary) explanations, in particular:
behavioural adaptive functions
phylogenetic history; and the proximate explanations
underlying physiological mechanisms
ontogenetic/developmental history.
Four categories of questions and explanations
When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny).
This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem.
Evolutionary (ultimate) explanations
First question: Function (adaptation)
Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive.
The literature conceptualizes the relationship between function and evolution in two ways. On the one hand, function
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What might eventually happen to a species if it is unable to reproduce?
A. natural selection
B. migration
C. adaptation
D. extinction
Answer:
|
|
ai2_arc-32
|
multiple_choice
|
Which of these human activities does not contribute to the extinction of species?
|
[
"hunting",
"habitat destruction",
"restoration ecology",
"introduced nonnative species"
] |
C
|
Relavent Documents:
Document 0:::
Biodiversity loss includes the worldwide extinction of different species, as well as the local reduction or loss of species in a certain habitat, resulting in a loss of biological diversity. The latter phenomenon can be temporary or permanent, depending on whether the environmental degradation that leads to the loss is reversible through ecological restoration/ecological resilience or effectively permanent (e.g. through land loss). The current global extinction (frequently called the sixth mass extinction or Anthropocene extinction), has resulted in a biodiversity crisis being driven by human activities which push beyond the planetary boundaries and so far has proven irreversible.
The main direct threats to conservation (and thus causes for biodiversity loss) fall in eleven categories: Residential and commercial development; farming activities; energy production and mining; transportation and service corridors; biological resource usages; human intrusions and activities that alter, destroy, disturb habitats and species from exhibiting natural behaviors; natural system modification; invasive and problematic species, pathogens and genes; pollution; catastrophic geological events, climate change, and so on.
Numerous scientists and the IPBES Global Assessment Report on Biodiversity and Ecosystem Services assert that human population growth and overconsumption are the primary factors in this decline. However other scientists have criticized this, saying that loss of habitat is caused mainly by "the growth of commodities for export" and that population has very little to do with overall consumption, due to country wealth disparities.
Climate change is another threat to global biodiversity. For example, coral reefs – which are biodiversity hotspots – will be lost within the century if global warming continues at the current rate. However, habitat destruction e.g. for the expansion of agriculture, is currently the more significant driver of contemporary biodiversity lo
Document 1:::
Ian Gordon Simmons (born 22 January 1937) is a British geographer. He retired as Professor of Geography from the University of Durham in 2001. He has made significant contributions to environmental history and prehistoric archaeology.
Background
Simmons grew up in East London and then East Lincolnshire until the age of 12. He studied physical geography (BSc) and holds a PhD from the University of London (early 1960s) on the vegetation history of Dartmoor. He began university lecturing in his early 20s and was Lecturer and then Reader in Geography at the University of Durham from 1962 to 1977, then Professor of Geography at the University of Bristol from 1977 to 1981 before returning to a Chair in Geography at Durham, where he worked until retiring in 2001.
In 1972–73, he taught biogeography for a year at York University, Canada and has held other appointments including Visiting Scholar, St. John's College, University of Oxford in the 1990s. Previously, he had been an ACLS postdoctoral fellow at the University of California, Berkeley.
Scholarship
His research includes the study of the later Mesolithic and early Neolithic in their environmental setting on English uplands, where he has demonstrated the role of these early human communities in initiating some of Britain's characteristic landscape elements. His work also encompasses the long-term effects of human manipulation of the natural environment and its consequences for resource use and environmental change. This line of work resulted in his last three books, which looked at environmental history on three nested scales: the moorlands of England and Wales, Great Britain, and the Globe. Each dealt with the last 10,000 years and tried to combine conventional science-based data with the insights of the social sciences and humanities.
Simmons has authored several books on environmental thought and culture over the ages as well as contemporary resource management and environmental problems. Since retireme
Document 2:::
Background extinction rate, also known as the normal extinction rate, refers to the standard rate of extinction in Earth's geological and biological history before humans became a primary contributor to extinctions. It is primarily the pre-human extinction rate during the periods between major extinction events. To date, Earth has experienced five mass extinctions, each resulting from a different combination of causes.
Overview
Extinctions are a normal part of the evolutionary process, and the background extinction rate is a measurement of "how often" they naturally occur. Normal extinction rates are often used as a comparison to present day extinction rates, to illustrate the higher frequency of extinction today than in all periods of non-extinction events before it.
Background extinction rates have not remained constant, although changes are measured over geological time, covering millions of years.
Measurement
Background extinction rates are typically measured in order to give a specific classification to a species, and this is obtained over a certain period of time. There are three different ways to express a background extinction rate. The first is simply the number of species that normally go extinct over a given period of time. For example, at the background rate one species of bird will go extinct every estimated 400 years. Another way the extinction rate can be given is in million species years (MSY). For example, there is approximately one extinction estimated per million species years. From a purely mathematical standpoint this means that if there are a million species on planet Earth, one would go extinct every year, while if there was only one species it would go extinct in one million years, etc. The third way is in giving species survival rates over time. For example, given normal extinction rates species typically exist for 5–10 million years before going extinct.
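The three expressions above are arithmetically interchangeable. The sketch below (a minimal Python illustration using made-up figures, not values from the text) shows how a rate quoted in extinctions per million species-years converts into the other two forms.

```python
# Converting between the three ways of expressing a background extinction rate.
# All numbers are illustrative assumptions, not measured values.

E_PER_MSY = 1.0        # way 2: extinctions per million species-years
n_species = 10_000     # assumed number of species in the group of interest

# Way 1: expected number of extinctions in this group over a given period
extinctions_per_century = E_PER_MSY * n_species * 100 / 1_000_000
print(extinctions_per_century)      # 1.0 extinction expected per century

# Way 3: implied average species lifespan at this rate
mean_lifespan_years = 1_000_000 / E_PER_MSY
print(mean_lifespan_years)          # 1,000,000 years
```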
Lifespan estimates
Some species lifespan es
Document 3:::
The Sixth Extinction: An Unnatural History is a 2014 non-fiction book written by Elizabeth Kolbert and published by Henry Holt and Company. The book argues that the Earth is in the midst of a modern, man-made, sixth extinction. In the book, Kolbert chronicles previous mass extinction events, and compares them to the accelerated, widespread extinctions during our present time. She also describes specific species extinguished by humans, as well as the ecologies surrounding prehistoric and near-present extinction events. The author received the Pulitzer Prize for General Non-Fiction for the book in 2015.
The target audience is the general reader, and scientific descriptions are rendered in understandable prose. The writing blends explanations of her treks to remote areas with interviews of scientists, researchers, and guides, without advocating a position, in pursuit of objectivity. Hence, the sixth mass extinction theme is applied to flora and fauna existing in diverse habitats, such as the Panamanian rainforest, the Great Barrier Reef, the Andes, Bikini Atoll, city zoos, and the author's own backyard. The book also applies this theme to a number of other habitats and organisms throughout the world. After researching the current mainstream view of the relevant peer-reviewed science, Kolbert estimates flora and fauna loss by the end of the 21st century to be between 20 and 50 percent "of all living species on earth".
Anthropocene
Kolbert equates current, general unawareness of this issue to previous widespread disbelief of it during the centuries preceding the late 1700s; at that time, it was believed that prehistoric mass extinctions had never occurred. It was also believed there were no natural forces powerful enough to extinguish species en masse. Likewise, in our own time, the possible finality presented by this issue results in denialism. But scientific studies have shown that human behavior disrupts Earth's balanced and interconnected systems, "putting our own
Document 4:::
Defaunation is the global, local, or functional extinction of animal populations or species from ecological communities. The growth of the human population, combined with advances in harvesting technologies, has led to more intense and efficient exploitation of the environment. This has resulted in the depletion of large vertebrates from ecological communities, creating what has been termed "empty forest". Defaunation differs from extinction; it includes both the disappearance of species and declines in abundance. Defaunation effects were first implied at the Symposium of Plant-Animal Interactions at the University of Campinas, Brazil in 1988 in the context of Neotropical forests. Since then, the term has gained broader usage in conservation biology as a global phenomenon.
It is estimated that more than 50 percent of all wildlife has been lost in the last 40 years. In 2016, it was estimated that by 2020, 68% of the world's wildlife would be lost. In South America, there is believed to be a 70 percent loss. A 2021 study found that only around 3% of the planet's terrestrial surface is ecologically and faunally intact, with healthy populations of native animal species and little to no human footprint.
In November 2017, over 15,000 scientists around the world issued a second warning to humanity, which, among other things, urged for the development and implementation of policies to halt "defaunation, the poaching crisis, and the exploitation and trade of threatened species."
Drivers
Overexploitation
The intensive hunting and harvesting of animals threatens endangered vertebrate species across the world. Game vertebrates are considered valuable products of tropical forests and savannas. In Brazilian Amazonia, 23 million vertebrates are killed every year; large-bodied primates, tapirs, white-lipped peccaries, giant armadillos, and tortoises are some of the animals most sensitive to harvest. Overhunting can reduce the local population of such species by more than half
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which of these human activities does not contribute to the extinction of species?
A. hunting
B. habitat destruction
C. restoration ecology
D. introduced nonnative species
Answer:
|
|
sciq-9293
|
multiple_choice
|
What does rising air do when it reaches the top of the troposphere?
|
[
"dries",
"heats",
"warms",
"cools"
] |
D
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
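As a brief check of the intended answer (a minimal derivation, assuming the usual sign convention that W is the work done by the gas), the first law with no heat exchange gives:

```latex
% Adiabatic expansion of an ideal gas: Q = 0 and the gas does work W > 0,
% so its internal energy, and therefore its temperature, must decrease.
\begin{aligned}
\Delta U &= Q - W = -W < 0,\\
\Delta U &= n C_V\,\Delta T \;\Rightarrow\; \Delta T = -\frac{W}{n C_V} < 0 .
\end{aligned}
```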
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 3:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591.
On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 4:::
Tech City College (Formerly STEM Academy) is a free school sixth form located in the Islington area of the London Borough of Islington, England.
It originally opened in September 2013 as STEM Academy Tech City and specialised in Science, Technology, Engineering and Maths (STEM) and the Creative Application of Maths and Science. In September 2015, STEM Academy joined the Aspirations Academy Trust and was renamed Tech City College. Tech City College offers A-levels and BTECs as programmes of study for students.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What does rising air do when it reaches the top of the troposphere?
A. dries
B. heats
C. warms
D. cools
Answer:
|
|
sciq-9014
|
multiple_choice
|
The term “vapor” refers to the gas phase when it exists at a temperature below what?
|
[
"boiling temperature",
"contaminated temperature",
"freezing temperature",
"liquid temperature"
] |
A
|
Relavent Documents:
Document 0:::
This is a list of gases at standard conditions, which means substances that boil or sublime at or below and 1 atm pressure and are reasonably stable.
List
This list is sorted by boiling point of gases in ascending order, but can be sorted on different values. "sub" and "triple" refer to the sublimation point and the triple point, which are given in the case of a substance that sublimes at 1 atm; "dec" refers to decomposition. "~" means approximately.
Known as gas
The following list has substances known to be gases, but with an unknown boiling point.
Fluoroamine
Trifluoromethyl trifluoroethyl trioxide CF3OOOCF2CF3 boils between 10 and 20°
Bis-trifluoromethyl carbonate boils between −10 and +10° possibly +12, freezing −60°
Difluorodioxirane boils between −80 and −90°.
Difluoroaminosulfinyl fluoride F2NS(O)F is a gas but decomposes over several hours
Trifluoromethylsulfinyl chloride CF3S(O)Cl
Nitrosyl cyanide ?−20° blue-green gas 4343-68-4
Thiazyl chloride NSCl greenish yellow gas; trimerises.
Document 1:::
Boiling is the rapid phase transition from liquid to gas or vapor; the reverse of boiling is condensation. Boiling occurs when a liquid is heated to its boiling point, so that the vapour pressure of the liquid is equal to the pressure exerted on the liquid by the surrounding atmosphere. Boiling and evaporation are the two main forms of liquid vapourization.
There are two main types of boiling: nucleate boiling where small bubbles of vapour form at discrete points, and critical heat flux boiling where the boiling surface is heated above a certain critical temperature and a film of vapour forms on the surface. Transition boiling is an intermediate, unstable form of boiling with elements of both types. The boiling point of water is 100 °C or 212 °F but is lower with the decreased atmospheric pressure found at higher altitudes.
Boiling water is used as a method of making it potable by killing microbes and viruses that may be present. The sensitivity of different micro-organisms to heat varies, but if water is held at 100 °C (212 °F) for one minute, most micro-organisms and viruses are inactivated. Ten minutes at a temperature of 70 °C (158 °F) is also sufficient to inactivate most bacteria.
Boiling water is also used in several cooking methods including boiling, steaming, and poaching.
Types
Free convection
The lowest heat flux seen in boiling is only sufficient to cause natural convection, where the warmer fluid rises due to its slightly lower density. This condition occurs only when the superheat is very low, meaning that the hot surface near the fluid is nearly the same temperature as the boiling point.
Nucleate
Nucleate boiling is characterised by the growth of bubbles or pops on a heated surface (heterogeneous nucleation), which rises from discrete points on a surface, whose temperature is only slightly above the temperature of the liquid. In general, the number of nucleation sites is increased by an increasing surface temperature.
An irregular surface of the boiling
Document 2:::
The boiling point of a substance is the temperature at which the vapor pressure of a liquid equals the pressure surrounding the liquid and the liquid changes into a vapor.
The boiling point of a liquid varies depending upon the surrounding environmental pressure. A liquid in a partial vacuum, i.e., under a lower pressure, has a lower boiling point than when that liquid is at atmospheric pressure. Because of this, water boils at 100 °C (212 °F) under standard pressure at sea level, but at a lower temperature at higher altitude. For a given pressure, different liquids will boil at different temperatures.
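This pressure dependence can be estimated with the integrated Clausius–Clapeyron relation. The sketch below is a rough illustration only: it assumes a constant enthalpy of vaporization for water (about 40.7 kJ/mol) and an example reduced pressure of 70 kPa, roughly the atmospheric pressure near 3,000 m altitude.

```python
import math

# Integrated Clausius-Clapeyron relation:
#   ln(P2 / P1) = -(H_vap / R) * (1/T2 - 1/T1)
# solved for the boiling temperature T2 at a reduced pressure P2.

R = 8.314        # J/(mol*K), gas constant
H_VAP = 40_660   # J/mol, approximate enthalpy of vaporization of water
T1 = 373.15      # K, normal boiling point of water
P1 = 101.325     # kPa, standard atmospheric pressure
P2 = 70.0        # kPa, assumed pressure at roughly 3,000 m altitude

T2 = 1.0 / (1.0 / T1 - R * math.log(P2 / P1) / H_VAP)
print(f"Estimated boiling point at {P2} kPa: {T2 - 273.15:.1f} degC")  # ~89.7 degC
```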
The normal boiling point (also called the atmospheric boiling point or the atmospheric pressure boiling point) of a liquid is the special case in which the vapor pressure of the liquid equals the defined atmospheric pressure at sea level, one atmosphere. At that temperature, the vapor pressure of the liquid becomes sufficient to overcome atmospheric pressure and allow bubbles of vapor to form inside the bulk of the liquid. The standard boiling point has been defined by IUPAC since 1982 as the temperature at which boiling occurs under a pressure of one bar.
The heat of vaporization is the energy required to transform a given quantity (a mol, kg, pound, etc.) of a substance from a liquid into a gas at a given pressure (often atmospheric pressure).
Liquids may change to a vapor at temperatures below their boiling points through the process of evaporation. Evaporation is a surface phenomenon in which molecules located near the liquid's edge, not contained by enough liquid pressure on that side, escape into the surroundings as vapor. On the other hand, boiling is a process in which molecules anywhere in the liquid escape, resulting in the formation of vapor bubbles within the liquid.
Saturation temperature and pressure
A saturated liquid contains as much thermal energy as it can without boiling (or conversely a saturated vapor contains as little thermal energy as it can without condensing).
Saturation te
Document 3:::
In chemistry, volatility is a material quality which describes how readily a substance vaporizes. At a given temperature and pressure, a substance with high volatility is more likely to exist as a vapour, while a substance with low volatility is more likely to be a liquid or solid. Volatility can also describe the tendency of a vapor to condense into a liquid or solid; less volatile substances will more readily condense from a vapor than highly volatile ones. Differences in volatility can be observed by comparing how fast substances within a group evaporate (or sublimate in the case of solids) when exposed to the atmosphere. A highly volatile substance such as rubbing alcohol (isopropyl alcohol) will quickly evaporate, while a substance with low volatility such as vegetable oil will remain condensed. In general, solids are much less volatile than liquids, but there are some exceptions. Solids that sublimate (change directly from solid to vapor) such as dry ice (solid carbon dioxide) or iodine can vaporize at a similar rate as some liquids under standard conditions.
Description
Volatility itself has no defined numerical value, but it is often described using vapor pressures or boiling points (for liquids). High vapor pressures indicate a high volatility, while high boiling points indicate low volatility. Vapor pressures and boiling points are often presented in tables and charts that can be used to compare chemicals of interest. Volatility data is typically found through experimentation over a range of temperatures and pressures.
Vapor pressure
Vapor pressure is a measurement of how readily a condensed phase forms a vapor at a given temperature. A substance enclosed in a sealed vessel initially at vacuum (no air inside) will quickly fill any empty space with vapor. After the system reaches equilibrium and the rate of evaporation matches the rate of condensation, the vapor pressure can be measured. Increasing the temperature increases the amount of vapor that is f
Document 4:::
In thermodynamics, vapor quality is the mass fraction in a saturated mixture that is vapor; in other words, saturated vapor has a "quality" of 100%, and saturated liquid has a "quality" of 0%. Vapor quality is an intensive property which can be used in conjunction with other independent intensive properties to specify the thermodynamic state of the working fluid of a thermodynamic system. It has no meaning for substances which are not saturated mixtures (for example, compressed liquids or superheated fluids).
Vapor quality is an important quantity during the adiabatic expansion step in various thermodynamic cycles (like Organic Rankine cycle, Rankine cycle, etc.). Working fluids can be classified by using the appearance of droplets in the vapor during the expansion step.
Quality can be calculated by dividing the mass of the vapor by the mass of the total mixture:

x = m_vapor / m_total

where m indicates mass.

Another definition used in chemical engineering defines quality (q) of a fluid as the fraction that is saturated liquid. By this definition, a saturated liquid has q = 1 and a saturated vapor has q = 0.

An alternative definition is the 'equilibrium thermodynamic quality'. It can be used only for single-component mixtures (e.g. water with steam), and can take values < 0 (for sub-cooled fluids) and > 1 (for super-heated vapors):

x_eq = (h − h_f) / h_fg

where h is the mixture specific enthalpy, defined as:

h = (m_liquid · h_f + m_vapor · h_g) / (m_liquid + m_vapor).

Subscripts f and g refer to saturated liquid and saturated gas respectively, and fg refers to vaporization (h_fg = h_g − h_f).

Calculation

The above expression for vapor quality can be expressed as:

x = (y − y_f) / y_fg

where y is equal to either specific enthalpy, specific entropy, specific volume or specific internal energy, y_f is the value of the specific property in the saturated liquid state, and y_fg = y_g − y_f is the change of that property across the dome zone, where both liquid and vapor can be found.

Another expression of the same concept is:

x = m_vapor / (m_liquid + m_vapor)

where m_vapor is the vapor mass and m_liquid is the liquid mass.
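A minimal numeric sketch of the quality formula follows; the saturated-water enthalpies are rounded textbook-style values near atmospheric pressure and are assumptions for illustration, not data from the excerpt.

```python
# Vapor quality of a liquid-vapor water mixture from its specific enthalpy:
#   x = (h - h_f) / (h_g - h_f)
# Rounded illustrative values for water near 1 atm.

h_f = 419.0    # kJ/kg, saturated liquid enthalpy
h_g = 2676.0   # kJ/kg, saturated vapor enthalpy
h = 1500.0     # kJ/kg, assumed mixture specific enthalpy

x = (h - h_f) / (h_g - h_f)
print(f"quality x = {x:.3f}")   # ~0.479, i.e. about 48% of the mass is vapor

# Equivalent mass-based form: x = m_vapor / (m_vapor + m_liquid)
m_vapor, m_liquid = 4.79, 5.21  # kg, chosen to be consistent with x above
print(m_vapor / (m_vapor + m_liquid))
```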
Steam quality and work
The origin of the idea of vapor qua
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The term “vapor” refers to the gas phase when it exists at a temperature below what?
A. boiling temperature
B. contaminated temperature
C. freezing temperature
D. liquid temperature
Answer:
|
|
scienceQA-8331
|
multiple_choice
|
What do these two changes have in common?
melting glass
grilling a hamburger
|
[
"Both are only physical changes.",
"Both are chemical changes.",
"Both are caused by cooling.",
"Both are caused by heating."
] |
D
|
Step 1: Think about each change.
Melting glass is a change of state. So, it is a physical change. The glass changes from solid to liquid. But a different type of matter is not formed.
Grilling a hamburger is a chemical change. Heat from the grill causes the matter in the meat to change. Cooked meat and raw meat are different types of matter.
Step 2: Look at each answer choice.
Both are only physical changes.
Melting glass is a physical change. But grilling a hamburger is not.
Both are chemical changes.
Grilling a hamburger is a chemical change. But melting glass is not.
Both are caused by heating.
Both changes are caused by heating.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relavent Documents:
Document 0:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include :
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering : Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 3:::
Perfect thermal contact of the surface of a solid with the environment (convective heat transfer) or another solid occurs when the temperatures of the mating surfaces are equal.
Perfect thermal contact conditions
Perfect thermal contact supposes that on the boundary surface A there holds an equality of the temperatures

T_1 = T_2

and an equality of heat fluxes

−λ_1 ∂T_1/∂n = −λ_2 ∂T_2/∂n

where T_1, T_2 are temperatures of the solid and environment (or mating solid), respectively; λ_1, λ_2 are thermal conductivity coefficients of the solid and mating laminar layer (or solid), respectively; n is the normal to the surface A.

If there is a heat source on the boundary surface A, e.g. caused by sliding friction, the latter equality transforms in the following manner:

−λ_1 ∂T_1/∂n + λ_2 ∂T_2/∂n = g

where g is the heat-generation rate per unit area.
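To make the flux-continuity condition concrete, the sketch below (a made-up steady one-dimensional two-slab example, not taken from the excerpt) solves for the interface temperature that two slabs in perfect thermal contact must share.

```python
# Steady 1-D conduction through two slabs in perfect thermal contact.
# Flux continuity at the interface:
#   k1 * (T_left - T_i) / L1 = k2 * (T_i - T_right) / L2
# All numbers are arbitrary illustrative values.

k1, L1 = 50.0, 0.02              # W/(m*K), m  -- slab 1
k2, L2 = 1.0, 0.01               # W/(m*K), m  -- slab 2
T_left, T_right = 400.0, 300.0   # K, outer surface temperatures

a, b = k1 / L1, k2 / L2
T_i = (a * T_left + b * T_right) / (a + b)   # interface temperature
q = a * (T_left - T_i)                       # W/m^2, same on both sides
print(f"T_interface = {T_i:.1f} K, q = {q:.0f} W/m^2")  # ~396.2 K, ~9615 W/m^2
```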
Document 4:::
Heat transfer is a discipline of thermal engineering that concerns the generation, use, conversion, and exchange of thermal energy (heat) between physical systems. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Engineers also consider the transfer of mass of differing chemical species (mass transfer in the form of advection), either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system.
Heat conduction, also called diffusion, is the direct microscopic exchanges of kinetic energy of particles (such as molecules) or quasiparticles (such as lattice waves) through the boundary between two systems. When an object is at a different temperature from another body or its surroundings, heat flows so that the body and the surroundings reach the same temperature, at which point they are in thermal equilibrium. Such spontaneous heat transfer always occurs from a region of high temperature to another region of lower temperature, as described in the second law of thermodynamics.
Heat convection occurs when the bulk flow of a fluid (gas or liquid) carries its heat through the fluid. All convective processes also move heat partly by diffusion, as well. The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". The former process is often called "forced convection." In this case, the fluid is forced to flow by use of a pump, fan, or other mechanical means.
Thermal radiation occurs through a vacuum or any transparent medium (solid or fluid or gas). It is the transfer of energy by means of photons or electromagnetic waves governed by the same laws.
Overview
Heat
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
melting glass
grilling a hamburger
A. Both are only physical changes.
B. Both are chemical changes.
C. Both are caused by cooling.
D. Both are caused by heating.
Answer:
|
sciq-9142
|
multiple_choice
|
What is the process by which plants capture the energy of sunlight and use carbon dioxide from the air (and water) to make their own food called?
|
[
"spermatogenesis",
"atherosclerosis",
"photochemistry",
"photosynthesis"
] |
D
|
Relavent Documents:
Document 0:::
{{DISPLAYTITLE: C3 carbon fixation}}
C3 carbon fixation is the most common of three metabolic pathways for carbon fixation in photosynthesis, the other two being C4 and CAM. This process converts carbon dioxide and ribulose bisphosphate (RuBP, a 5-carbon sugar) into two molecules of 3-phosphoglycerate through the following reaction:
CO2 + H2O + RuBP → (2) 3-phosphoglycerate
This reaction was first discovered by Melvin Calvin, Andrew Benson and James Bassham in 1950. C3 carbon fixation occurs in all plants as the first step of the Calvin–Benson cycle. (In C4 and CAM plants, carbon dioxide is drawn out of malate and into this reaction rather than directly from the air.)
Plants that survive solely on C3 fixation (C3 plants) tend to thrive in areas where sunlight intensity is moderate, temperatures are moderate, carbon dioxide concentrations are around 200 ppm or higher, and groundwater is plentiful. The C3 plants, originating during the Mesozoic and Paleozoic eras, predate the C4 plants and still represent approximately 95% of Earth's plant biomass, including important food crops such as rice, wheat, soybeans and barley.
C3 plants cannot grow in very hot areas at today's atmospheric CO2 level (significantly depleted during hundreds of millions of years from above 5000 ppm) because RuBisCO incorporates more oxygen into RuBP as temperatures increase. This leads to photorespiration (also known as the oxidative photosynthetic carbon cycle, or C2 photosynthesis), which leads to a net loss of carbon and nitrogen from the plant and can therefore limit growth.
C3 plants lose up to 97% of the water taken up through their roots by transpiration. In dry areas, C3 plants shut their stomata to reduce water loss, but this stops CO2 from entering the leaves and therefore reduces the concentration of CO2 in the leaves. This lowers the CO2:O2 ratio and therefore also increases photorespiration. C4 and CAM plants have adaptations that allow them to survive in hot and dry areas, and they can therefore out-compete
Document 1:::
Ecophysiology (from Greek , oikos, "house(hold)"; , physis, "nature, origin"; and , -logia), environmental physiology or physiological ecology is a biological discipline that studies the response of an organism's physiology to environmental conditions. It is closely related to comparative physiology and evolutionary physiology. Ernst Haeckel's coinage bionomy is sometimes employed as a synonym.
Plants
Plant ecophysiology is concerned largely with two topics: mechanisms (how plants sense and respond to environmental change) and scaling or integration (how the responses to highly variable conditions—for example, gradients from full sunlight to 95% shade within tree canopies—are coordinated with one another), and how their collective effect on plant growth and gas exchange can be understood on this basis.
In many cases, animals are able to escape unfavourable and changing environmental factors such as heat, cold, drought or floods, while plants are unable to move away and therefore must endure the adverse conditions or perish (animals go places, plants grow places). Plants are therefore phenotypically plastic and have an impressive array of genes that aid in acclimating to changing conditions. It is hypothesized that this large number of genes can be partly explained by plant species' need to live in a wider range of conditions.
Light
Light is the food of plants, i.e. the form of energy that plants use to build themselves and reproduce. The organs harvesting light in plants are leaves and the process through which light is converted into biomass is photosynthesis. The response of photosynthesis to light is called light response curve of net photosynthesis (PI curve). The shape is typically described by a non-rectangular hyperbola. Three quantities of the light response curve are particularly useful in characterising a plant's response to light intensities. The inclined asymptote has a positive slope representing the efficiency of light use, and is called quantum
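The non-rectangular hyperbola mentioned above has a standard closed form; the sketch below uses it with invented parameter values, and the symbol names (quantum yield phi, light-saturated rate p_max, curvature theta, dark respiration r_d) are the conventional ones rather than anything given in the excerpt.

```python
import math

def net_photosynthesis(I, phi=0.05, p_max=20.0, theta=0.7, r_d=1.0):
    """Non-rectangular hyperbola model of a light response (PI) curve.
    I: irradiance; phi: quantum yield (initial slope); p_max: light-saturated
    rate; theta: curvature; r_d: dark respiration. Values are illustrative."""
    a = phi * I + p_max
    gross = (a - math.sqrt(a * a - 4.0 * theta * phi * I * p_max)) / (2.0 * theta)
    return gross - r_d

# Net photosynthesis rises steeply at low light and saturates toward p_max.
for irradiance in (0, 100, 500, 1500):
    print(irradiance, round(net_photosynthesis(irradiance), 2))
```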
Document 2:::
In developmental biology, photomorphogenesis is light-mediated development, where plant growth patterns respond to the light spectrum. This is a completely separate process from photosynthesis where light is used as a source of energy. Phytochromes, cryptochromes, and phototropins are photochromic sensory receptors that restrict the photomorphogenic effect of light to the UV-A, UV-B, blue, and red portions of the electromagnetic spectrum.
The photomorphogenesis of plants is often studied by using tightly frequency-controlled light sources to grow the plants. There are at least three stages of plant development where photomorphogenesis occurs: seed germination, seedling development, and the switch from the vegetative to the flowering stage (photoperiodism).
Most research on photomorphogenesis is derived from plants studies involving several kingdoms: Fungi, Monera, Protista, and Plantae.
History
Theophrastus of Eresus (371 to 287 BC) may have been the first to write about photomorphogenesis. He described the different wood qualities of fir trees grown in different levels of light, likely the result of the photomorphogenic "shade-avoidance" effect. In 1686, John Ray wrote "Historia Plantarum" which mentioned the effects of etiolation (grow in the absence of light). Charles Bonnet introduced the term "etiolement" to the scientific literature in 1754 when describing his experiments, commenting that the term was already in use by gardeners.
Developmental stages affected
Seed germination
Light has profound effects on the development of plants. The most striking effects of light are observed when a germinating seedling emerges from the soil and is exposed to light for the first time.
Normally the seedling radicle (root) emerges first from the seed, and the shoot appears as the root becomes established. Later, with growth of the shoot (particularly when it emerges into the light) there is increased secondary root formation and branching. In this coordinated progressi
Document 3:::
Photosynthesis systems are electronic scientific instruments designed for non-destructive measurement of photosynthetic rates in the field. Photosynthesis systems are commonly used in agronomic and environmental research, as well as studies of the global carbon cycle.
How photosynthesis systems function
Photosynthesis systems function by measuring gas exchange of leaves. Atmospheric carbon dioxide is taken up by leaves in the process of photosynthesis, where CO2 is used to generate sugars in a molecular pathway known as the Calvin cycle. This draw-down of CO2 induces more atmospheric CO2 to diffuse through stomata into the air spaces of the leaf. While stomata are open, water vapor can easily diffuse out of plant tissues, a process known as transpiration. It is this exchange of CO2 and water vapor that is measured as a proxy of photosynthetic rate.
The basic components of a photosynthetic system are the leaf chamber, infrared gas analyzer (IRGA), batteries and a console with keyboard, display and memory. Modern 'open system' photosynthesis systems also incorporate a miniature disposable compressed CO2 cylinder and gas supply pipes. This is because external air has natural fluctuations in CO2 and water vapor content, which can introduce measurement noise. Modern 'open system' photosynthesis systems remove the CO2 and water vapour by passage over soda lime and Drierite, then add CO2 at a controlled rate to give a stable CO2 concentration. Some systems are also equipped with temperature control and a removable light unit, so the effect of these environmental variables can also be measured.
The leaf to be analysed is placed in the leaf chamber. The CO2 concentration is measured by the infrared gas analyzer. The IRGA shines infrared light through a gas sample onto a detector. CO2 in the sample absorbs energy, so the reduction in the level of energy that reaches the detector indicates the CO2 concentration. Modern IRGAs take account of the fact that water vapour absorbs energy at similar wavelengths as CO2. Modern IRG
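A minimal sketch of the mass-balance arithmetic that an open gas-exchange system performs is given below. The formula (net assimilation from the molar air flow and the CO2 drawdown across the chamber) and every number are illustrative assumptions, not values or code from any particular instrument.

```python
# Open-system gas exchange: net CO2 assimilation per unit leaf area,
# estimated from the molar air flow and the CO2 drawdown across the chamber.
# All values are illustrative; the small dilution correction for
# transpiration is ignored here.

flow = 500e-6      # mol air s^-1 entering the chamber
c_in = 400e-6      # mol CO2 per mol air entering
c_out = 385e-6     # mol CO2 per mol air leaving
leaf_area = 6e-4   # m^2 of enclosed leaf

A = flow * (c_in - c_out) / leaf_area           # mol CO2 m^-2 s^-1
print(f"A = {A * 1e6:.1f} umol CO2 m^-2 s^-1")  # ~12.5
```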
Document 4:::
Crassulacean acid metabolism, also known as CAM photosynthesis, is a carbon fixation pathway that evolved in some plants as an adaptation to arid conditions that allows a plant to photosynthesize during the day, but only exchange gases at night. In a plant using full CAM, the stomata in the leaves remain shut during the day to reduce evapotranspiration, but they open at night to collect carbon dioxide (CO2) and allow it to diffuse into the mesophyll cells. The CO2 is stored as four-carbon malic acid in vacuoles at night, and then in the daytime, the malate is transported to chloroplasts where it is converted back to CO2, which is then used during photosynthesis. The pre-collected CO2 is concentrated around the enzyme RuBisCO, increasing photosynthetic efficiency. This mechanism of acid metabolism was first discovered in plants of the family Crassulaceae.
Historical background
Observations relating to CAM were first made by de Saussure in 1804 in his Recherches Chimiques sur la Végétation. Benjamin Heyne in 1812 noted that Bryophyllum leaves in India were acidic in the morning and tasteless by afternoon. These observations were studied further and refined by Aubert, E. in 1892 in his Recherches physiologiques sur les plantes grasses and expounded upon by Richards, H. M. 1915 in Acidity and Gas Interchange in Cacti, Carnegie Institution. The term CAM may have been coined by Ranson and Thomas in 1940, but they were not the first to discover this cycle. It was observed by the botanists Ranson and Thomas, in the succulent family Crassulaceae (which includes jade plants and Sedum). The name "Crassulacean acid metabolism" refers to acid metabolism in Crassulaceae, and not the metabolism of "crassulacean acid"; there is no chemical by that name.
Overview: a two-part cycle
CAM is an adaptation for increased efficiency in the use of water, and so is typically found in plants growing in arid conditions. (CAM is found in over 99% of the known 1700 species of Cactaceae and in nearly all
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the process by which plants capture the energy of sunlight and use carbon dioxide from the air (and water) to make their own food called?
A. spermatogenesis
B. atherosclerosis
C. photochemistry
D. photosynthesis
Answer:
|
|
sciq-726
|
multiple_choice
|
What do red blood cells carry?
|
[
"nitrogen",
"carbon dioxide",
"oxygen",
"hydrogen"
] |
C
|
Relavent Documents:
Document 0:::
Blood is a body fluid in the circulatory system of humans and other vertebrates that delivers necessary substances such as nutrients and oxygen to the cells, and transports metabolic waste products away from those same cells. Blood in the circulatory system is also known as peripheral blood, and the blood cells it carries, peripheral blood cells.
Blood is composed of blood cells suspended in blood plasma. Plasma, which constitutes 55% of blood fluid, is mostly water (92% by volume), and contains proteins, glucose, mineral ions, hormones, carbon dioxide (plasma being the main medium for excretory product transportation), and blood cells themselves. Albumin is the main protein in plasma, and it functions to regulate the colloidal osmotic pressure of blood. The blood cells are mainly red blood cells (also called RBCs or erythrocytes), white blood cells (also called WBCs or leukocytes), and in mammals platelets (also called thrombocytes). The most abundant cells in vertebrate blood are red blood cells. These contain hemoglobin, an iron-containing protein, which facilitates oxygen transport by reversibly binding to this respiratory gas thereby increasing its solubility in blood. In contrast, carbon dioxide is mostly transported extracellularly as bicarbonate ion transported in plasma.
Vertebrate blood is bright red when its hemoglobin is oxygenated and dark red when it is deoxygenated.
Some animals, such as crustaceans and mollusks, use hemocyanin to carry oxygen, instead of hemoglobin. Insects and some mollusks use a fluid called hemolymph instead of blood, the difference being that hemolymph is not contained in a closed circulatory system. In most insects, this "blood" does not contain oxygen-carrying molecules such as hemoglobin because their bodies are small enough for their tracheal system to suffice for supplying oxygen.
Jawed vertebrates have an adaptive immune system, based largely on white blood cells. White blood cells help to resist infections and parasite
Document 1:::
The red pulp of the spleen is composed of connective tissue, also known as the cords of Billroth, and many splenic sinusoids that are engorged with blood, giving it a red color. Its primary function is to filter the blood of antigens, microorganisms, and defective or worn-out red blood cells.
The spleen is made of red pulp and white pulp, separated by the marginal zone; 76-79% of a normal spleen is red pulp. Unlike white pulp, which mainly contains lymphocytes such as T cells, red pulp is made up of several different types of blood cells, including platelets, granulocytes, red blood cells, and plasma.
The red pulp also acts as a large reservoir for monocytes. These monocytes are found in clusters in the Billroth's cords (red pulp cords). The population of monocytes in this reservoir is greater than the total number of monocytes present in circulation. They can be rapidly mobilised to leave the spleen and assist in tackling ongoing infections.
Sinusoids
The splenic sinusoids, are wide vessels that drain into pulp veins which themselves drain into trabecular veins. Gaps in the endothelium lining the sinusoids mechanically filter blood cells as they enter the spleen. Worn-out or abnormal red cells attempting to squeeze through the narrow intercellular spaces become badly damaged, and are subsequently devoured by macrophages in the red pulp. In addition to clearing aged red blood cells, the sinusoids also filter out cellular debris, particles that could clutter up the bloodstream.
Cells found in red pulp
Red pulp consists of a dense network of fine reticular fiber, continuous with those of the splenic trabeculae, to which are applied flat, branching cells. The meshes of the reticulum are filled with blood:
White blood cells are found to be in larger proportion than they are in ordinary blood.
Large rounded cells, termed splenic cells, are also seen; these are capable of ameboid movement, and often contain pigment and red-blood corpuscles in their interior.
The cell
Document 2:::
A splenocyte can be any one of the different white blood cell types as long as it is situated in the spleen or purified from splenic tissue.
Splenocytes consist of a variety of cell populations such as T and B lymphocytes, dendritic cells and macrophages, which have different immune functions.
Document 3:::
Hemogenic endothelium is a special subset of endothelial cells scattered within blood vessels that can differentiate into haematopoietic cells.
The development of hematopoietic cells in the embryo proceeds sequentially from mesoderm through the hemangioblast to the hemogenic endothelium and hematopoietic progenitors.
See also
Hemangioblast
Document 4:::
Reticulocytosis is a condition where there is an increase in reticulocytes, immature red blood cells.
It is commonly seen in anemia. They are seen on blood films when the bone marrow is highly active in an attempt to replace red blood cell loss such as in haemolytic anaemia or haemorrhage.
External links
Histology
Abnormal clinical and laboratory findings for RBCs
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do red blood cells carry?
A. nitrogen
B. carbon dioxide
C. oxygen
D. hydrogen
Answer:
|
|
scienceQA-10049
|
multiple_choice
|
Select the reptile.
|
[
"mandarinfish",
"bull shark",
"leaf-tailed gecko",
"eastern newt"
] |
C
|
A leaf-tailed gecko is a reptile. It has scaly, waterproof skin.
Many geckos have special pads on their toes. The pads help them climb up plants and rocks.
A bull shark is a fish. It lives underwater. It has fins, not limbs.
Bull sharks can live in both fresh and salt water. They are found in rivers and in shallow parts of the ocean.
A mandarinfish is a fish. It lives underwater. It has fins, not limbs.
Mandarinfish often live near coral reefs. They eat small worms, snails, and fish eggs.
An eastern newt is an amphibian. It has moist skin and begins its life in water.
Some newts live in water. Other newts live on land but lay their eggs in water.
|
Relavent Documents:
Document 0:::
The Reptile Database is a scientific database that collects taxonomic information on all living reptile species (i.e. no fossil species such as dinosaurs). The database focuses on species (as opposed to higher ranks such as families) and has entries for all currently recognized ~13,000 species and their subspecies, although there is usually a lag time of up to a few months before newly described species become available online. The database collects scientific and common names, synonyms, literature references, distribution information, type information, etymology, and other taxonomically relevant information.
History
The database was founded in 1995 as EMBL Reptile Database when the founder, Peter Uetz, was a graduate student at the European Molecular Biology Laboratory (EMBL) in Heidelberg, Germany. Thure Etzold had developed the first web interface for the EMBL DNA sequence database which was also used as interface for the Reptile Database. In 2006 the database moved to The Institute of Genomic Research (TIGR) and briefly operated as TIGR Reptile Database until TIGR was merged into the J Craig Venter Institute (JCVI) where Uetz was an associate professor until 2010. Since 2010 the database has been maintained on servers in the Czech Republic under the supervision of Peter Uetz and Jirí Hošek, a Czech programmer. The database celebrated its 25th anniversary together with AmphibiaWeb which had its 20th anniversary in 2021.
Content
As of September 2020, the Reptile Database lists about 11,300 species (including another ~2,200 subspecies) in about 1200 genera, and has more than 50,000 literature references and about 15,000 photos. The database has constantly grown since its inception with an average of 100 to 200 new species described per year over the preceding decade. Recently, the database also added a more or less complete list of primary type specimens.
Relationship to other databases
The Reptile Database has been a member of the Species 2000 pro
Document 1:::
Iguania is an infraorder of squamate reptiles that includes iguanas, chameleons, agamids, and New World lizards like anoles and phrynosomatids. Using morphological features as a guide to evolutionary relationships, the Iguania are believed to form the sister group to the remainder of the Squamata, which comprise nearly 11,000 named species, roughly 2000 of which are iguanians. However, molecular information has placed Iguania well within the Squamata as sister taxa to the Anguimorpha and closely related to snakes. The classification of the group has been debated and revised since Charles Lewis Camp defined it in 1923, owing to difficulties in finding adequate synapomorphic morphological characteristics. Most iguanians are arboreal, but there are several terrestrial groups. They usually have primitive fleshy, non-prehensile tongues, although the tongue is highly modified in chameleons. The group has a fossil record that extends back to the Early Jurassic (the oldest known member is Bharatagama, which lived about 190 million years ago in what is now India). Today they have a scattered distribution, occurring in Madagascar, the Fiji and Friendly Islands, and the Western Hemisphere.
Classification
The Iguania currently include these extant families:
Clade Acrodonta
Family Agamidae – agamid lizards, Old World arboreal lizards
Family Chamaeleonidae – chameleons
Clade Pleurodonta – American arboreal lizards, chuckwallas, iguanas
Family Leiocephalidae
Genus Leiocephalus: curly-tailed lizards
Family Corytophanidae – helmet lizards
Family Crotaphytidae – collared lizards, leopard lizards
Family Hoplocercidae – dwarf and spinytail iguanas
Family Iguanidae – marine, Fijian, Galapagos land, spinytail, rock, desert, green, and chuckwalla iguanas
Family Tropiduridae – tropidurine lizards
subclade of Tropiduridae Tropidurini – neotropical ground lizards
Family Dactyloidae – anoles
Family Polychrotidae
subclade of Polychrotidae Polychrus
Family Phrynosomatidae – North American spiny lizards
Family Liolaem
Document 2:::
Roshd Biological Education is a quarterly science educational magazine covering recent developments in biology and biology education for a Persian-speaking audience of biology teachers. Founded in 1985, it is published by The Teaching Aids Publication Bureau, Organization for Educational Planning and Research, Ministry of Education, Iran. Roshd Biological Education has an editorial board composed of Iranian biologists, experts in biology education, science journalists and biology teachers.
It is read by both biology teachers and students, as a way of launching innovations and new trends in biology education, and helping biology teachers to teach biology in better and more effective ways.
Magazine layout
As of Autumn 2012, the magazine is laid out as follows:
Editorial—often offering a view of point from editor in chief on an educational and/or biological topics.
Explore— New research methods and results on biology and/or education.
World— Reports and explores on biological education worldwide.
In Brief—Summaries of research news and discoveries.
Trends—showing how new technology is altering the way we live our lives.
Point of View—Offering personal commentaries on contemporary topics.
Essay or Interview—often with a pioneer of a biological and/or educational researcher or an influential scientific educational leader.
Muslim Biologists—Short histories of Muslim Biologists.
Environment—An article on Iranian environment and its problems.
News and Reports—Offering short news and reports events on biology education.
In Brief—Short articles explaining interesting facts.
Questions and Answers—Questions about biology concepts and their answers.
Book and periodical Reviews—About new publication on biology and/or education.
Reactions—Letter to the editors.
Editorial staff
Mohammad Karamudini, editor in chief
History
Roshd Biological Education started in 1985, together with many other magazines in other fields of science and the arts. The first editor was Dr. Nouri-Dalooi, th
Document 3:::
History of Animals (, Ton peri ta zoia historion, "Inquiries on Animals"; , "History of Animals") is one of the major texts on biology by the ancient Greek philosopher Aristotle, who had studied at Plato's Academy in Athens. It was written in the fourth century BC; Aristotle died in 322 BC.
Generally seen as a pioneering work of zoology, Aristotle frames his text by explaining that he is investigating the what (the existing facts about animals) prior to establishing the why (the causes of these characteristics). The book is thus an attempt to apply philosophy to part of the natural world. Throughout the work, Aristotle seeks to identify differences, both between individuals and between groups. A group is established when it is seen that all members have the same set of distinguishing features; for example, that all birds have feathers, wings, and beaks. This relationship between the birds and their features is recognized as a universal.
The History of Animals contains many accurate eye-witness observations, in particular of the marine biology around the island of Lesbos, such as that the octopus had colour-changing abilities and a sperm-transferring tentacle, that the young of a dogfish grow inside their mother's body, or that the male of a river catfish guards the eggs after the female has left. Some of these were long considered fanciful before being rediscovered in the nineteenth century. Aristotle has been accused of making errors, but some are due to misinterpretation of his text, and others may have been based on genuine observation. He did however make somewhat uncritical use of evidence from other people, such as travellers and beekeepers.
The History of Animals had a powerful influence on zoology for some two thousand years. It continued to be a primary source of knowledge until zoologists in the sixteenth century, such as Conrad Gessner, all influenced by Aristotle, wrote their own studies of the subject.
Context
Aristotle (384–322 BC) studied at Plat
Document 4:::
Vertebrate zoology is the biological discipline that consists of the study of vertebrate animals, i.e., animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. Many natural history museums have departments named Vertebrate Zoology. In some cases whole museums bear this name, e.g. the Museum of Vertebrate Zoology at the University of California, Berkeley.
Subdivisions
This subdivision of zoology has many further subdivisions, including:
Ichthyology - the study of fishes.
Mammalogy - the study of mammals.
Chiropterology - the study of bats.
Primatology - the study of primates.
Ornithology - the study of birds.
Herpetology - the study of reptiles.
Batrachology - the study of amphibians.
These divisions are sometimes further divided into more specific specialties.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Select the reptile.
A. mandarinfish
B. bull shark
C. leaf-tailed gecko
D. eastern newt
Answer:
|
sciq-5287
|
multiple_choice
|
Which process changes rocks by heat and pressure?
|
[
"weathering",
"metamorphism",
"sediments",
"Changes"
] |
B
|
Relavent Documents:
Document 0:::
The rock cycle is a basic concept in geology that describes transitions through geologic time among the three main rock types: sedimentary, metamorphic, and igneous. Each rock type is altered when it is forced out of its equilibrium conditions. For example, an igneous rock such as basalt may break down and dissolve when exposed to the atmosphere, or melt as it is subducted under a continent. Due to the driving forces of the rock cycle, plate tectonics and the water cycle, rocks do not remain in equilibrium and change as they encounter new environments. The rock cycle explains how the three rock types are related to each other, and how processes change from one type to another over time. This cyclical aspect makes rock change a geologic cycle and, on planets containing life, a biogeochemical cycle.
Transition to igneous rock
When rocks are pushed deep under the Earth's surface, they may melt into magma. If the conditions no longer exist for the magma to stay in its liquid state, it cools and solidifies into an igneous rock. A rock that cools within the Earth is called intrusive or plutonic; it cools very slowly, producing a coarse-grained texture such as that of granite. As a result of volcanic activity, magma (called lava once it reaches Earth's surface) may cool very rapidly when exposed to the atmosphere at the surface; rocks formed this way are called extrusive or volcanic rocks. These rocks are fine-grained and sometimes cool so rapidly that no crystals can form, resulting in a natural glass such as obsidian; the most common fine-grained volcanic rock, however, is basalt. Any of the three main types of rocks (igneous, sedimentary, and metamorphic rocks) can melt into magma and cool into igneous rocks.
Secondary changes
Epigenetic change (secondary processes occurring at low temperatures and low pressures) may be arranged under a number of headings, each of which is typical of a group of rocks or rock-forming minerals, though usually more than one of these alt
Document 1:::
In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects.
Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting.
Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete.
Study
Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the
Document 2:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 3:::
The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata. That is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events.
Correlating the rock record
At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods—for the particular geographic region or regions. The geologic record is nowhere entirely complete: where geologic forces in one age provide a low-lying region that accumulates deposits much like a layer cake, in the next age those forces may have uplifted the region, so that the same area is instead weathering and being torn down by chemistry, wind, temperature, and water. This is to say that in a given location the geologic record can be, and quite often is, interrupted as the ancient local environment is converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go deep, thoroughly support the law of superposition.
However using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale
Document 4:::
The Géotechnique lecture is a biennial lecture on the topic of soil mechanics, organised by the British Geotechnical Association and named after its major scientific journal, Géotechnique.
This should not be confused with the annual BGA Rankine Lecture.
List of Géotechnique Lecturers
See also
Named lectures
Rankine Lecture
Terzaghi Lecture
External links
ICE Géotechnique journal
British Geotechnical Association
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which process changes rocks by heat and pressure?
A. weathering
B. metamorphism
C. sediments
D. Changes
Answer:
|
|
sciq-5112
|
multiple_choice
|
Light bends when it passes in from what to what?
|
[
"air to ground",
"product to water",
"water to air",
"air to water"
] |
D
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
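For reference, the reasoning behind the answer usually intended ("decreases") can be sketched with the ideal-gas relations below; this is only a sketch, and it assumes a quasi-static adiabatic expansion in which the gas does work on its surroundings (for a free expansion into vacuum the temperature of an ideal gas stays the same, which is what makes the last option defensible):

\delta Q = 0 \;\Rightarrow\; dU = -p\,dV, \qquad dU = n C_V\, dT
\;\Rightarrow\; n C_V\, dT = -p\, dV < 0 \text{ when } dV > 0,
\text{equivalently } T V^{\gamma-1} = \text{const} \;\Rightarrow\; T \text{ decreases as } V \text{ increases.}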
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered the exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required the score as a part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test.
Scores were scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores were 760 (corresponding to the 99th percentile) and 320 (1st percentile) respectively. The mean score for all test takers from July 2009 to July 2012 was 526, with a standard deviation of 95.
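As a rough illustration of how these summary statistics relate to reported percentiles (a sketch only; it assumes the scaled scores are approximately normally distributed, which the actual score distribution need not be), the following Python snippet maps a scaled score to an approximate percentile:

from statistics import NormalDist

# Reported summary statistics for July 2009 - July 2012 (mean 526, SD 95);
# treating the distribution as normal is an assumption made for illustration.
gre_bcm = NormalDist(mu=526, sigma=95)

def approx_percentile(score: float) -> float:
    # Approximate percentile rank of a scaled score under the normal assumption.
    return 100 * gre_bcm.cdf(score)

for score in (320, 526, 760):
    print(f"score {score}: ~{approx_percentile(score):.0f}th percentile")

Under this assumption a 760 lands near the 99th percentile and a 320 near the bottom of the scale, consistent with the reported extremes.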
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 3:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
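As a minimal illustration of this structure (a sketch in Python; the three-item domain and the particular family of feasible states are invented for the example), a knowledge space can be represented as a family of subsets of the domain that contains the empty set and the whole domain and is closed under union, while a learning space (antimatroid) additionally requires that every nonempty feasible state be reachable by mastering one item at a time:

# Hypothetical domain of three skills and a hand-picked family of feasible states.
Q = frozenset({"counting", "addition", "multiplication"})
states = {
    frozenset(),
    frozenset({"counting"}),
    frozenset({"counting", "addition"}),
    Q,
}

def is_knowledge_space(domain, family):
    # Contains the empty state and the full domain, and is closed under union.
    if frozenset() not in family or frozenset(domain) not in family:
        return False
    return all(a | b in family for a in family for b in family)

def is_learning_space(domain, family):
    # A knowledge space in which every nonempty state can be reached
    # by mastering one item at a time (accessibility).
    if not is_knowledge_space(domain, family):
        return False
    return all(any(state - {item} in family for item in state)
               for state in family if state)

print(is_knowledge_space(Q, states), is_learning_space(Q, states))  # True True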
Document 4:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Light bends when it passes in from what to what?
A. air to ground
B. product to water
C. water to air
D. air to water
Answer:
|
|
sciq-1334
|
multiple_choice
|
What part of the ear is often described as a bony labyrinth?
|
[
"inner ear",
"solid ear",
"embedded ear",
"outer ear"
] |
A
|
Relavent Documents:
Document 0:::
The vestibule is the central part of the bony labyrinth in the inner ear, and is situated medial to the eardrum, behind the cochlea, and in front of the three semicircular canals.
The name comes from the Latin , literally an entrance hall.
Structure
The vestibule is somewhat oval in shape, but flattened transversely; it measures about 5 mm from front to back, the same from top to bottom, and about 3 mm across.
In its lateral or tympanic wall is the oval window, closed, in the fresh state, by the base of the stapes and annular ligament.
On its medial wall, at the forepart, is a small circular depression, the recessus sphæricus, which is perforated, at its anterior and inferior part, by several minute holes (macula cribrosa media) for the passage of filaments of the acoustic nerve to the saccule; and behind this depression is an oblique ridge, the crista vestibuli, the anterior end of which is named the pyramid of the vestibule.
This ridge bifurcates below to enclose a small depression, the fossa cochlearis, which is perforated by a number of holes for the passage of filaments of the acoustic nerve which supply the vestibular end of the cochlear duct.
The orifice of the vestibular aqueduct is the hind part of the medial wall; it extends to the posterior surface of the petrous portion of the temporal bone.
It transmits a small vein and contains a tubular prolongation of the membranous labyrinth, the endolymphatic duct, which ends in a cul-de-sac between the layers of the dura mater within the cranial cavity.
On the upper wall or roof, there is a transversely oval depression, the recessus ellipticus, separated from the recessus sphæricus by the crista vestibuli already mentioned.
The pyramid and adjoining part of the recessus ellipticus are perforated by a number of holes (macula cribrosa superior).
The apertures in the pyramid transmit the nerves to the utricle; those in the recessus ellipticus are the nerves to the ampullæ of the superior and lateral semicir
Document 1:::
The membranous labyrinth is a collection of fluid filled tubes and chambers which contain the receptors for the senses of equilibrium and hearing. It is lodged within the bony labyrinth in the inner ear and has the same general form; it is, however, considerably smaller and is partly separated from the bony walls by a quantity of fluid, the perilymph.
In certain places, it is fixed to the walls of the cavity.
The membranous labyrinth contains fluid called endolymph. The walls of the membranous labyrinth are lined with distributions of the cochlear nerve, one of the two branches of the vestibulocochlear nerve. The other branch is the vestibular nerve.
Within the vestibule, the membranous labyrinth does not quite preserve the form of the bony labyrinth, but consists of two membranous sacs, the utricle, and the saccule.
The membranous labyrinth is also the location for the receptor cells found in the inner ear.
Document 2:::
The tympanic cavity is a small cavity surrounding the bones of the middle ear. Within it sit the ossicles, three small bones that transmit vibrations used in the detection of sound.
Structure
On its lateral surface, it abuts the external auditory meatus (ear canal), from which it is separated by the tympanic membrane (eardrum).
Walls
The tympanic cavity is bounded by:
Facing the inner ear, the medial wall (or labyrinthic wall, labyrinthine wall) is vertical, and has the oval window and round window, the promontory, and the prominence of the facial canal.
Facing the outer ear, the lateral wall (or membranous wall), is formed mainly by the tympanic membrane, partly by the ring of bone into which this membrane is inserted. This ring of bone is incomplete at its upper part, forming a notch (notch of Rivinus), close to which are three small apertures: the "iter chordæ posterius", the petrotympanic fissure, and the "iter chordæ anterius". The iter chordæ posterius (apertura tympanica canaliculi chordæ) is situated in the angle of junction between the mastoid and membranous wall of tympanic cavity immediately behind the tympanic membrane and on a level with the upper end of the manubrium of the malleus; it leads into a minute canal, which descends in front of the canal for the facial nerve, and ends in that canal near the stylo-mastoid foramen. Through it the chorda tympani nerve enters the tympanic cavity. The petrotympanic fissure opens just above and in front of the ring of bone into which the tympanic membrane is inserted; in this situation it is a mere slit about 2 mm. in length. It lodges the anterior process and anterior ligament of the malleus, and gives passage to the anterior tympanic branch of the internal maxillary artery. The iter chordæ anterius (canal of Huguier) is placed at the medial end of the petrotympanic fissure; through it the chorda tympani nerve leaves the tympanic cavity.
The roof of the cavity (also called the tegmental wall, tegmental roof
Document 3:::
The bony labyrinth (also osseous labyrinth or otic capsule) is the rigid, bony outer wall of the inner ear in the temporal bone. It consists of three parts: the vestibule, semicircular canals, and cochlea. These are cavities hollowed out of the substance of the bone, and lined by periosteum. They contain a clear fluid, the perilymph, in which the membranous labyrinth is situated.
A fracture classification system in which temporal bone fractures detected by computed tomography are delineated based on disruption of the otic capsule has been found to be predictive for complications of temporal bone trauma such as facial nerve injury, sensorineural deafness and cerebrospinal fluid otorrhea. On radiographic images, the otic capsule is the densest portion of the temporal bone.
In otospongiosis, a leading cause of adult-onset hearing loss, the otic capsule is exclusively affected. This area normally undergoes no remodeling in adult life and is extremely dense. With otospongiosis, the normally dense enchondral bone is replaced by haversian bone, a spongy and vascular matrix that results in sensorineural hearing loss due to compromise of the conductive capacity of the inner ear ossicles. This results in hypodensity on CT, with the portion first affected usually being the fissula ante fenestram.
The bony labyrinth is studied in paleoanthropology as it is a good indicator for distinguishing Neanderthals and Modern humans.
Document 4:::
In the anatomy of humans and various other tetrapods, the eardrum, also called the tympanic membrane or myringa, is a thin, cone-shaped membrane that separates the external ear from the middle ear. Its function is to transmit sound from the air to the ossicles inside the middle ear, and then to the oval window in the fluid-filled cochlea. Hence, it ultimately converts and amplifies vibration in the air to vibration in cochlear fluid. The malleus bone bridges the gap between the eardrum and the other ossicles.
Rupture or perforation of the eardrum can lead to conductive hearing loss. Collapse or retraction of the eardrum can cause conductive hearing loss or cholesteatoma.
Structure
Orientation and relations
The tympanic membrane is oriented obliquely in the anteroposterior, mediolateral, and superoinferior planes. Consequently, its superoposterior end lies lateral to its anteroinferior end.
Anatomically, it relates superiorly to the middle cranial fossa, posteriorly to the ossicles and facial nerve, inferiorly to the parotid gland, and anteriorly to the temporomandibular joint.
Regions
The eardrum is divided into two general regions: the pars flaccida and the pars tensa.
The relatively fragile pars flaccida lies above the lateral process of the malleus between the notch of Rivinus and the anterior and posterior malleal folds. Consisting of two layers and appearing slightly pinkish in hue, it is associated with Eustachian tube dysfunction and cholesteatomas.
The larger pars tensa consists of three layers: skin, fibrous tissue, and mucosa. Its thick periphery forms a fibrocartilaginous ring called the annulus tympanicus or Gerlach's ligament, while the central umbo tents inward at the level of the tip of the malleus. The middle fibrous layer, containing radial, circular, and parabolic fibers, encloses the handle of the malleus. Though comparatively robust, the pars tensa is the region more commonly associated with perforations.
Umbo
The manubrium () of the malleus is f
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What part of the ear is often described as a bony labyrinth?
A. inner ear
B. solid ear
C. embedded ear
D. outer ear
Answer:
|
|
sciq-4699
|
multiple_choice
|
Group 16 is called what?
|
[
"the acid group",
"noble gases",
"the oxygen group",
"metalloids"
] |
C
|
Relavent Documents:
Document 0:::
Sometimes the Tits group is considered a 17th non-strict simple group of Lie type, or a 27th sporadic group, which would yield a total of 45 finite simple groups.
In science
The atomic number of ruthenium
Astronomy
Messier object M44, a magnitude 4.0 open cluster in the constellation Cancer, also known as the Beehive Cluster
The New General Catalogue object NGC 44, a doubl
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 3:::
In mathematics, especially abstract algebra, loop theory and quasigroup theory are active research areas with many open problems. As in other areas of mathematics, such problems are often made public at professional conferences and meetings. Many of the problems posed here first appeared in the Loops (Prague) conferences and the Mile High (Denver) conferences.
Open problems (Moufang loops)
Abelian by cyclic groups resulting in Moufang loops
Let L be a Moufang loop with normal abelian subgroup (associative subloop) M of odd order such that L/M is a cyclic group of order bigger than 3. (i) Is L a group? (ii) If the orders of M and L/M are relatively prime, is L a group?
Proposed: by Michael Kinyon, based on (Chein and Rajah, 2000)
Comments: The assumption that L/M has order bigger than 3 is important, as there is a (commutative) Moufang loop L of order 81 with normal commutative subgroup of order 27.
Embedding CMLs of period 3 into alternative algebras
Conjecture: Any finite commutative Moufang loop of period 3 can be embedded into a commutative alternative algebra.
Proposed: by Alexander Grishkov at Loops '03, Prague 2003
Frattini subloop for Moufang loops
Conjecture: Let L be a finite Moufang loop and Φ(L) the intersection of all maximal subloops of L. Then Φ(L) is a normal nilpotent subloop of L.
Proposed: by Alexander Grishkov at Loops '11, Třešť 2011
Minimal presentations for loops M(G,2)
For a group G, define M(G,2) on G × {0,1} by
(g,0)(h,0) = (gh,0), (g,0)(h,1) = (hg,1), (g,1)(h,0) = (gh^-1,1), (g,1)(h,1) = (h^-1g,0). Find a minimal presentation for the Moufang loop M(G,2) with respect to a presentation for G.
Proposed: by Petr Vojtěchovský at Loops '03, Prague 2003
Comments: Chein showed in (Chein, 1974) that M(G,2) is a Moufang loop that is nonassociative if and only if G is nonabelian. Vojtěchovský (Vojtěchovský, 2003) found a minimal presentation for M(G,2) when G is a 2-generated group.
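To make the construction concrete, here is a small computational sketch (the choice of S3 and the exhaustive checks are for illustration only): it builds M(S3,2) with the multiplication rules of Chein's doubling construction as stated above, verifies one of the Moufang identities, and confirms that the resulting loop is not associative.

from itertools import permutations, product

# Elements of S3 as permutation tuples, with composition and inversion.
S3 = list(permutations(range(3)))

def comp(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    r = [0, 0, 0]
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

# Chein doubling M(G,2) on G x {0,1}.
def mul(x, y):
    (g, s), (h, t) = x, y
    if s == 0 and t == 0:
        return (comp(g, h), 0)
    if s == 0 and t == 1:
        return (comp(h, g), 1)
    if s == 1 and t == 0:
        return (comp(g, inv(h)), 1)
    return (comp(inv(h), g), 0)

M = [(g, s) for g in S3 for s in (0, 1)]  # 12 elements

# One Moufang identity, (zx)(yz) = (z(xy))z, checked over all triples.
moufang = all(mul(mul(z, x), mul(y, z)) == mul(mul(z, mul(x, y)), z)
              for x, y, z in product(M, repeat=3))
# Associativity fails because S3 is nonabelian.
assoc = all(mul(mul(x, y), z) == mul(x, mul(y, z))
            for x, y, z in product(M, repeat=3))
print("Moufang identity holds:", moufang)  # expected: True
print("Associative:", assoc)               # expected: False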
Moufang loops of order p^2q^3 and pq^4
Let p and q be distinct odd primes. If q is not congruent to 1 modulo p, are all Moufang loops of order p^2q^3 groups? What about pq^4?
Prop
Document 4:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Group 16 is called what?
A. the acid group
B. noble gases
C. the oxygen group
D. metalloids
Answer:
|
|
sciq-2064
|
multiple_choice
|
Most cases of syphilis can be cured with what?
|
[
"antibiotics",
"enzymes",
"abstinence",
"vitamins"
] |
A
|
Relavent Documents:
Document 0:::
Syphilis is a bacterial infection transmitted by sexual contact and is believed to have infected people in 1999 with greater than 90% of cases in the developing world. It affects between 700,000 and 1.6 million pregnancies a year, resulting in spontaneous abortions, stillbirths, and congenital syphilis. In Sub-Saharan Africa syphilis contributes to approximately 20% of perinatal deaths.
In the developed world, syphilis infections were in decline until the 1980s and 1990s due to widespread use of antibiotics. Since the year 2000, rates of syphilis have been increasing in the US, UK, Australia, and Europe primarily among men who have sex with men. This is attributed to unsafe sexual practices. A Sexually transmitted disease (STD) Surveillance study done by the Centers for Disease Control and Prevention in 2016 showed that men who have sex with men only account for over half (52%) of the 27,814 cases during that year. Nationally, the highest rates of primary and secondary syphilis in 2016 were observed among men aged 20–34 years, among men in the West, and among Black men.
Increased rates among heterosexuals have occurred in China and Russia since the 1990s. Syphilis increases the risk of HIV transmission by two to five times and co-infection is common (30–60% in a number of urban centers).
Untreated, it has a mortality rate of 8% to 58%, with a greater death rate in males. The higher incidence of mortality among males compared to females is not well understood, but is thought to be related to immunological differences across gender. The symptoms of syphilis have become less severe over the 19th and 20th century in part due to widespread availability of effective treatment and partly due to decreasing virulence of the spirochete. With early treatment few complications result.
China
In China rates of syphilis have increased from the 1990s to the 2010s. This occurred after a successful campaign to reduce rates was carried out in the 1950s. Rates of diagnosis are hig
Document 1:::
Sexually transmitted infections (STIs), also referred to as sexually transmitted diseases (STDs), are infections that are commonly spread by sexual activity, especially vaginal intercourse, anal sex and oral sex. The most prevalent STIs may be carried by a significant fraction of the human population.
Document 2:::
The first recorded outbreak of syphilis in Europe occurred in 1494/1495 in Naples, Italy, during a French invasion. Because it was spread by returning French troops, the disease was known as "French disease", and it was not until 1530 that the term "syphilis" was first applied by the Italian physician and poet Girolamo Fracastoro. The causative organism, Treponema pallidum, was first identified by Fritz Schaudinn and Erich Hoffmann in 1905 at the Charité Clinic in Berlin. The first effective treatment, Salvarsan, was developed in 1910 by Sahachiro Hata in the laboratory of Paul Ehrlich. It was followed by the introduction of penicillin in 1943.
Many well-known figures, including Scott Joplin, Franz Schubert, Friedrich Nietzsche, Al Capone, and Édouard Manet are believed to have contracted the disease.
Origin
The history of syphilis has been well studied, but the exact origin of the disease remains unknown. There are two primary hypotheses: one proposes that syphilis was carried to Europe from the Americas by the crew(s) of Christopher Columbus as a byproduct of the Columbian exchange, while the other proposes that syphilis previously existed in Europe but went unrecognized. There has been a recent skeletal discovery in the Yucatan Peninsula, dating to over 9,900 years ago, of a 30-year-old woman who had Treponema peritonitis, a disease related to syphilis. "There is also evidence for a possible trepanomal bacterial disease that caused severe alteration of the posterior parietal and occipital bones of the cranium." Syphilis was the first "new" disease to be discovered after the invention of printing. News of it spread quickly and widely, and documentation is abundant. For the time, it was "front page news" that was widely known among the literate. It is also the first disease to be widely recognized as a sexually transmitted disease, and it was taken as indicative of the moral state (sexual behavior) of the peoples in which it was found. Its geographic origin and moral
Document 3:::
Infectious diseases or ID, also known as infectiology, is a medical specialty dealing with the diagnosis and treatment of infections. An infectious diseases specialist's practice consists of managing nosocomial (healthcare-acquired) infections or community-acquired infections. An ID specialist investigates the cause of a disease to determine whether it is caused by bacteria, viruses, parasites, or fungi. Once the pathogen is known, an ID specialist can then run various tests to determine the best antimicrobial drug to kill the pathogen and treat the disease. While infectious diseases have always been around, the infectious disease specialty did not exist until the late 1900s after scientists and physicians in the 19th century paved the way with research on the sources of infectious disease and the development of vaccines.
Scope
Infectious diseases specialists typically serve as consultants to other physicians in cases of complex infections, and often manage patients with HIV/AIDS and other forms of immunodeficiency. Although many common infections are treated by physicians without formal expertise in infectious diseases, specialists may be consulted for cases where an infection is difficult to diagnose or manage. They may also be asked to help determine the cause of a fever of unknown origin.
Specialists in infectious diseases can practice both in hospitals (inpatient) and clinics (outpatient). In hospitals, specialists in infectious diseases help ensure the timely diagnosis and treatment of acute infections by recommending the appropriate diagnostic tests to identify the source of the infection and by recommending appropriate management such as prescribing antibiotics to treat bacterial infections. For certain types of infections, involvement of specialists in infectious diseases may improve patient outcomes. In clinics, specialists in infectious diseases can provide long-term care to patients with chronic infections such as HIV/AIDS.
History
Inf
Document 4:::
Bejel, or endemic syphilis, is a chronic skin and tissue disease caused by infection by the endemicum subspecies of the spirochete Treponema pallidum. Bejel is one of the "endemic treponematoses" (endemic infections caused by spiral-shaped bacteria called treponemes), a group that also includes yaws and pinta. Typically, endemic trepanematoses begin with localized lesions on the skin or mucous membranes. Pinta is limited to affecting the skin, whereas bejel and yaws are considered to be invasive because they can also cause disease in bone and other internal tissues.
Signs and symptoms
Bejel usually begins in childhood as a small patch on the mucosa, often on the interior of the mouth, followed by the appearance of raised, eroding lesions on the limbs and trunk. Periostitis (inflammation) of the leg bones is commonly seen, and gummas of the nose and soft palate develop in later stages.
Causes
Although the organism that causes bejel, Treponema pallidum endemicum, is morphologically and serologically indistinguishable from Treponema pallidum pallidum, which causes venereal syphilis, transmission of bejel is not venereal in nature.
Diagnosis
The diagnosis of bejel is based on the geographic history of the patient as well as laboratory testing of material from the lesions (dark-field microscopy). The responsible spirochaete is readily identifiable on sight in a microscope as a treponema.
Epidemiology
Bejel is mainly found in arid countries of the eastern Mediterranean region and in West Africa, where it is known as sahel.
See also
Pinta (disease)
Syphilis
Yaws
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Most cases of syphilis can be cured with what?
A. antibiotics
B. enzymes
C. abstinence
D. vitamins
Answer:
|
|
sciq-9641
|
multiple_choice
|
How many valence electrons does carbon have?
|
[
"five",
"one",
"four",
"two"
] |
C
|
Relavent Documents:
Document 0:::
The SAT Subject Test in Biology was a one-hour multiple-choice biology test given by the College Board. A student chose whether to take the test depending upon the college entrance requirements of the schools to which the student was planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all the SAT Subject Tests, the Biology E/M test was the only one that allowed the test taker a choice between the ecological and molecular versions. A set of 60 questions was taken by all test takers for Biology, and a choice of 20 further questions was allowed from either the E or the M test. This test was graded on a scale between 200 and 800. The average score for Molecular was 630, while that for Ecological was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. More specific questions addressed ecological concepts (such as population studies and general ecology) on the E test and molecular concepts (such as DNA structure, translation, and biochemistry) on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 4:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How many valence electrons does carbon have?
A. five
B. one
C. four
D. two
Answer:
|
|
sciq-11451
|
multiple_choice
|
What is the process of changing something from a gas to a liquid?
|
[
"fermentation",
"combustion",
"condensation",
"sublimation"
] |
C
|
Relavent Documents:
Document 0:::
In materials science, liquefaction is a process that generates a liquid from a solid or a gas or that generates a non-liquid phase which behaves in accordance with fluid dynamics.
It occurs both naturally and artificially. As an example of the latter, a "major commercial application of liquefaction is the liquefaction of air to allow separation of the constituents, such as oxygen, nitrogen, and the noble gases." Another is the conversion of solid coal into a liquid form usable as a substitute for liquid fuels.
Geology
In geology, soil liquefaction refers to the process by which water-saturated, unconsolidated sediments are transformed into a substance that acts like a liquid, often in an earthquake. Soil liquefaction was blamed for building collapses in the city of Palu, Indonesia in October 2018.
In a related phenomenon, liquefaction of bulk materials in cargo ships may cause a dangerous shift in the load.
Physics and chemistry
In physics and chemistry, the phase transitions from solid and gas to liquid (melting and condensation, respectively) may be referred to as liquefaction. The melting point (sometimes called liquefaction point) is the temperature and pressure at which a solid becomes a liquid. In commercial and industrial situations, the process of condensing a gas to liquid is sometimes referred to as liquefaction of gases.
Coal
Coal liquefaction is the production of liquid fuels from coal using a variety of industrial processes.
Dissolution
Liquefaction is also used in commercial and industrial settings to refer to mechanical dissolution of a solid by mixing, grinding or blending with a liquid.
Food preparation
In kitchen or laboratory settings, solids may be chopped into smaller parts sometimes in combination with a liquid, for example in food preparation or laboratory use. This may be done with a blender, or liquidiser in British English.
Irradiation
Liquefaction of silica and silicate glasses occurs on electron beam irradiation of nanos
Document 1:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reversible.
Document 2:::
Homogenization or homogenisation is any of several processes used to make a mixture of two mutually non-soluble liquids the same throughout. This is achieved by turning one of the liquids into a state consisting of extremely small particles distributed uniformly throughout the other liquid. A typical example is the homogenization of milk, wherein the milk fat globules are reduced in size and dispersed uniformly through the rest of the milk.
Definition
Homogenization (from "homogeneous;" Greek, homogenes: homos, same + genos, kind) is the process of converting two immiscible liquids (i.e. liquids that are not soluble, in all proportions, one in another) into an emulsion (Mixture of two or more liquids that are generally immiscible). Sometimes two types of homogenization are distinguished: primary homogenization, when the emulsion is created directly from separate liquids; and secondary homogenization, when the emulsion is created by the reduction in size of droplets in an existing emulsion.
Homogenization is achieved by a mechanical device called a homogenizer.
Application
One of the oldest applications of homogenization is in milk processing. It is normally preceded by "standardization" (the mixing of milk from several different herds or dairies to produce a more consistent raw milk prior to processing). The fat in milk normally separates from the water and collects at the top. Homogenization breaks the fat into smaller sizes so it no longer separates, allowing the sale of non-separating milk at any fat specification.
Methods
Milk homogenization is accomplished by mixing large amounts of harvested milk, then forcing the milk at high pressure through small holes. Milk homogenization is an essential tool of the milk food industry to prevent creating various levels of flavor and fat concentration.
Another application of homogenization is in soft drinks like cola products. The reactant mixture is rendered to intense homogenization, to as much as 35,000 psi, so tha
Document 3:::
In chemistry, a chemical transport reaction describes a process for purification and crystallization of non-volatile solids. The process is also responsible for certain aspects of mineral growth from the effluent of volcanoes. The technique is distinct from chemical vapor deposition, which usually entails decomposition of molecular precursors and which gives conformal coatings.
The technique, which was popularized by Harald Schäfer, entails the reversible conversion of nonvolatile elements and chemical compounds into volatile derivatives. The volatile derivative migrates throughout a sealed reactor, typically a sealed and evacuated glass tube heated in a tube furnace. Because the tube is under a temperature gradient, the volatile derivative reverts to the parent solid and the transport agent is released at the end opposite to which it originated (see next section). The transport agent is thus catalytic. The technique requires that the two ends of the tube (which contains the sample to be crystallized) be maintained at different temperatures. So-called two-zone tube furnaces are employed for this purpose. The method derives from the Van Arkel de Boer process which was used for the purification of titanium and vanadium and uses iodine as the transport agent.
Cases of the exothermic and endothermic reactions of the transporting agent
Transport reactions are classified according to the thermodynamics of the reaction between the solid and the transporting agent. When the reaction is exothermic, then the solid of interest is transported from the cooler end (which can be quite hot) of the reactor to a hot end, where the equilibrium constant is less favorable and the crystals grow. The reaction of molybdenum dioxide with the transporting agent iodine is an exothermic process, thus the MoO2 migrates from the cooler end (700 °C) to the hotter end (900 °C):
MoO2 + I2 ⇌ MoO2I2, ΔHrxn < 0 (exothermic)
Using 10 milligrams of iodine for 4 grams of the solid, the proc
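A compact way to see why an exothermic transport reaction moves the solid toward the hot end is the van 't Hoff relation (a sketch, treating \(\Delta H^{\circ}\) as temperature-independent):

\[
\ln\frac{K(T_2)}{K(T_1)} = -\frac{\Delta H^{\circ}}{R}\left(\frac{1}{T_2}-\frac{1}{T_1}\right).
\]

For \(\Delta H^{\circ} < 0\) and \(T_2 > T_1\) the right-hand side is negative, so \(K(T_2) < K(T_1)\): the volatile MoO2I2 is favored at the cooler end (700 °C) and decomposes back into MoO2 and I2 at the hotter end (900 °C), where the crystals grow.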
Document 4:::
Vaporization (or vaporisation) of an element or compound is a phase transition from the liquid phase to vapor. There are two types of vaporization: evaporation and boiling. Evaporation is a surface phenomenon, whereas boiling is a bulk phenomenon.
Evaporation is a phase transition from the liquid phase to vapor (a state of substance below critical temperature) that occurs at temperatures below the boiling temperature at a given pressure. Evaporation occurs on the surface. Evaporation only occurs when the partial pressure of vapor of a substance is less than the equilibrium vapor pressure. For example, due to constantly decreasing pressures, vapor pumped out of a solution will eventually leave behind a cryogenic liquid.
Boiling is also a phase transition from the liquid phase to gas phase, but boiling is the formation of vapor as bubbles of vapor below the surface of the liquid. Boiling occurs when the equilibrium vapor pressure of the substance is greater than or equal to the atmospheric pressure. The temperature at which boiling occurs is the boiling temperature, or boiling point. The boiling point varies with the pressure of the environment.
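The pressure dependence of the boiling point can be estimated with the Clausius–Clapeyron relation (a rough sketch that treats the enthalpy of vaporization as constant over the temperature range; the value of 40.7 kJ/mol for water is an illustrative textbook figure):

\[
\ln\frac{P_2}{P_1} = -\frac{\Delta H_{\mathrm{vap}}}{R}\left(\frac{1}{T_2}-\frac{1}{T_1}\right).
\]

With \(P_1 = 1\ \text{atm}\), \(T_1 = 373\ \text{K}\) and \(P_2 = 0.7\ \text{atm}\), solving gives \(T_2 \approx 363\ \text{K}\), i.e. water boils near 90 °C at roughly the ambient pressure found around 3,000 m of altitude.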
Sublimation is a direct phase transition from the solid phase to the gas phase, skipping the intermediate liquid phase. Because it does not involve the liquid phase, it is not a form of vaporization.
The term vaporization has also been used in a colloquial or hyperbolic way to refer to the physical destruction of an object that is exposed to intense heat or explosive force, where the object is actually blasted into small pieces rather than literally converted to gaseous form. Examples of this usage include the "vaporization" of the uninhabited Marshall Island of Elugelab in the 1952 Ivy Mike thermonuclear test. Many other examples can be found throughout the various MythBusters episodes that have involved explosives, chief among them being Cement Mix-Up, where they "vaporized" a cement truck with ANFO.
At the moment o
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the process of changing something from a gas to a liquid?
A. fermentation
B. combustion
C. condensation
D. sublimation
Answer:
|
|
scienceQA-9471
|
multiple_choice
|
Select the animal.
|
[
"Chili peppers have green leaves.",
"Apple trees can grow fruit.",
"Brown pelicans eat fish.",
"Cedar trees have small leaves."
] |
C
|
A chili pepper is a plant. It has many green leaves.
Chili peppers give food a spicy flavor.
A cedar tree is a plant. It has small leaves.
Cedar trees grow in many parts of the world. Many cedar trees grow on mountains.
A brown pelican is an animal. It eats fish.
A brown pelican is a bird. Brown pelicans live near water and dive to catch fish.
An apple tree is a plant. It can grow fruit.
People have been growing apples for thousands of years. There are more than 7,500 types of apples!
|
Relavent Documents:
Document 0:::
Plant ontology (PO) is a collection of ontologies developed by the Plant Ontology Consortium. These ontologies describe anatomical structures and growth and developmental stages across Viridiplantae. The PO is intended for multiple applications, including genetics, genomics, phenomics, and development, taxonomy and systematics, semantic applications and education.
Project Members
Oregon State University
New York Botanical Garden
L. H. Bailey Hortorium at Cornell University
Ensembl
SoyBase
SSWAP
SGN
Gramene
The Arabidopsis Information Resource (TAIR)
MaizeGDB
University of Missouri at St. Louis
Missouri Botanical Garden
See also
Generic Model Organism Database
Open Biomedical Ontologies
OBO Foundry
Document 1:::
What a Plant Knows is a popular science book by Daniel Chamovitz, originally published in 2012, discussing the sensory system of plants. A revised edition was published in 2017.
Release details / Editions / Publication
Hardcover edition, 2012
Paperback version, 2013
Revised edition, 2017
What a Plant Knows has been translated and published in a number of languages.
Document 2:::
Animals are multicellular eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million in total. Animals range in size from 8.5 millionths of a metre to long and have complex interactions with each other and their environments, forming intricate food webs. The study of animals is called zoology.
Animals may be listed or indexed by many criteria, including taxonomy, status as endangered species, their geographical location, and their portrayal and/or naming in human culture.
By common name
List of animal names (male, female, young, and group)
By aspect
List of common household pests
List of animal sounds
List of animals by number of neurons
By domestication
List of domesticated animals
By eating behaviour
List of herbivorous animals
List of omnivores
List of carnivores
By endangered status
IUCN Red List endangered species (Animalia)
United States Fish and Wildlife Service list of endangered species
By extinction
List of extinct animals
List of extinct birds
List of extinct mammals
List of extinct cetaceans
List of extinct butterflies
By region
Lists of amphibians by region
Lists of birds by region
Lists of mammals by region
Lists of reptiles by region
By individual (real or fictional)
Real
Lists of snakes
List of individual cats
List of oldest cats
List of giant squids
List of individual elephants
List of historical horses
List of leading Thoroughbred racehorses
List of individual apes
List of individual bears
List of giant pandas
List of individual birds
List of individual bovines
List of individual cetaceans
List of individual dogs
List of oldest dogs
List of individual monkeys
List of individual pigs
List of w
Document 3:::
The Desert Garden Conservatory is a large botanical greenhouse and part of the Huntington Library, Art Collections and Botanical Gardens, in San Marino, California. It was constructed in 1985. The Desert Garden Conservatory is adjacent to the Huntington Desert Garden itself. The garden houses one of the most important collections of cacti and other succulent plants in the world, including a large number of rare and endangered species. The Desert Garden Conservatory serves The Huntington and public communities as a conservation facility, research resource and genetic diversity preserve. John N. Trager is the Desert Collection curator.
There are an estimated 10,000 succulents worldwide, about 1,500 of them classified as cacti. The Huntington Desert Garden Conservatory now contains more than 2,200 accessions, representing more than 43 plant families, 1,261 different species and subspecies, and 246 genera. The plant collection contains examples from the world's major desert regions, including the southern United States, Argentina, Bolivia, Chile, Brazil, Canary Islands, Madagascar, Malawi, Mexico and South Africa. The Desert Collection plays a critical role as a repository of biodiversity, in addition to serving as an outreach and education center.
Propagation program to save rare and endangered plants
Some studies estimate that as many as two-thirds of the world's flora and fauna may become extinct during the course of the 21st century, the result of global warming and encroaching development. Scientists alarmed by these prospects are working diligently to propagate plants outside their natural habitats, in protected areas. Ex-situ cultivation, as this practice is known, can serve as a stopgap for plants that will otherwise be lost to the world as their habitats disappear. To this end, The Huntington has a program to protect and propagate endangered plant species, designated International Succulent Introductions (ISI).
The aim of the ISI program is to pr
Document 4:::
Chard or Swiss chard (; Beta vulgaris subsp. vulgaris, Cicla Group and Flavescens Group) is a green leafy vegetable. In the cultivars of the Flavescens Group, the leaf stalks are large and often prepared separately from the leaf blade; the Cicla Group is the leafy spinach beet. The leaf blade can be green or reddish; the leaf stalks are usually white, yellow or red.
Chard, like other green leafy vegetables, has highly nutritious leaves. Chard has been used in cooking for centuries, but because it is the same species as beetroot, the common names that cooks and cultures have used for chard may be confusing; it has many common names, such as silver beet, perpetual spinach, beet spinach, seakale beet, or leaf beet.
Classification
Chard was first described in 1753 by Carl Linnaeus as Beta vulgaris var. cicla. Its taxonomic rank has changed many times: it has been treated as a subspecies, a convariety, and a variety of Beta vulgaris. (Among the numerous synonyms for it are Beta vulgaris subsp. cicla (L.) W.D.J. Koch (Cicla Group), B. vulgaris subsp. cicla (L.) W.D.J. Koch var. cicla L., B. vulgaris var. cycla (L.) Ulrich, B. vulgaris subsp. vulgaris (Leaf Beet Group), B. vulgaris subsp. vulgaris (Spinach Beet Group), B. vulgaris subsp. cicla (L.) W.D.J. Koch (Flavescens Group), B. vulgaris subsp. cicla (L.) W.D.J. Koch var. flavescens (Lam.) DC., B. vulgaris L. subsp. vulgaris (Leaf Beet Group), B. vulgaris subsp. vulgaris (Swiss Chard Group)). The accepted name for all beet cultivars, like chard, sugar beet and beetroot, is Beta vulgaris subsp. vulgaris. They are cultivated descendants of the sea beet, Beta vulgaris subsp. maritima. Chard belongs to the chenopods, which are now mostly included in the family Amaranthaceae (sensu lato).
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Select the animal.
A. Chili peppers have green leaves.
B. Apple trees can grow fruit.
C. Brown pelicans eat fish.
D. Cedar trees have small leaves.
Answer:
|
sciq-9309
|
multiple_choice
|
What type of tissue transports water, nutrients, and food in plants?
|
[
"smooth tissue",
"normal tissue",
"vascular tissue",
"rough tissue"
] |
C
|
Relavent Documents:
Document 0:::
In biology, tissue is a historically derived biological organizational level between cells and a complete organ. A tissue is therefore often thought of as an assembly of similar cells and their extracellular matrix from the same embryonic origin that together carry out a specific function. Organs are then formed by the functional grouping together of multiple tissues.
Biological organisms follow this hierarchy:
Cells < Tissue < Organ < Organ System < Organism
The English word "tissue" derives from the French word "tissu", the past participle of the verb tisser, "to weave".
The study of tissues is known as histology or, in connection with disease, as histopathology. Xavier Bichat is considered as the "Father of Histology". Plant histology is studied in both plant anatomy and physiology. The classical tools for studying tissues are the paraffin block in which tissue is embedded and then sectioned, the histological stain, and the optical microscope. Developments in electron microscopy, immunofluorescence, and the use of frozen tissue-sections have enhanced the detail that can be observed in tissues. With these tools, the classical appearances of tissues can be examined in health and disease, enabling considerable refinement of medical diagnosis and prognosis.
Plant tissue
In plant anatomy, tissues are categorized broadly into three tissue systems: the epidermis, the ground tissue, and the vascular tissue.
Epidermis – Cells forming the outer surface of the leaves and of the young plant body.
Vascular tissue – The primary components of vascular tissue are the xylem and phloem. These transport fluids and nutrients internally.
Ground tissue – Ground tissue is less differentiated than other tissues. Ground tissue manufactures nutrients by photosynthesis and stores reserve nutrients.
Plant tissues can also be divided differently into two types:
Meristematic tissues
Permanent tissues.
Meristematic tissue
Meristematic tissue consists of actively dividing cells.
Document 1:::
Vascular tissue is a complex conducting tissue, formed of more than one cell type, found in vascular plants. The primary components of vascular tissue are the xylem and phloem. These two tissues transport fluid and nutrients internally. There are also two meristems associated with vascular tissue: the vascular cambium and the cork cambium. All the vascular tissues within a particular plant together constitute the vascular tissue system of that plant.
The cells in vascular tissue are typically long and slender. Since the xylem and phloem function in the conduction of water, minerals, and nutrients throughout the plant, it is not surprising that their form should be similar to pipes. The individual cells of phloem are connected end-to-end, just as the sections of a pipe might be. As the plant grows, new vascular tissue differentiates in the growing tips of the plant. The new tissue is aligned with existing vascular tissue, maintaining its connection throughout the plant. The vascular tissue in plants is arranged in long, discrete strands called vascular bundles. These bundles include both xylem and phloem, as well as supporting and protective cells. In stems and roots, the xylem typically lies closer to the interior of the stem with phloem towards the exterior of the stem. In the stems of some Asterales dicots, there may be phloem located inwardly from the xylem as well.
Between the xylem and phloem is a meristem called the vascular cambium. This tissue divides off cells that will become additional xylem and phloem. This growth increases the girth of the plant, rather than its length. As long as the vascular cambium continues to produce new cells, the plant will continue to grow more stout. In trees and other plants that develop wood, the vascular cambium allows the expansion of vascular tissue that produces woody growth. Because this growth ruptures the epidermis of the stem, woody plants also have a cork cambium that develops among the phloem. The cork cambium g
Document 2:::
Vascular plants (), also called tracheophytes () or collectively Tracheophyta (), form a large group of land plants ( accepted known species) that have lignified tissues (the xylem) for conducting water and minerals throughout the plant. They also have a specialized non-lignified tissue (the phloem) to conduct products of photosynthesis. Vascular plants include the clubmosses, horsetails, ferns, gymnosperms (including conifers), and angiosperms (flowering plants). Scientific names for the group include Tracheophyta, Tracheobionta and Equisetopsida sensu lato. Some early land plants (the rhyniophytes) had less developed vascular tissue; the term eutracheophyte has been used for all other vascular plants, including all living ones.
Historically, vascular plants were known as "higher plants", as it was believed that they were further evolved than other plants due to being more complex organisms. However, this is an antiquated remnant of the obsolete scala naturae, and the term is generally considered to be unscientific.
Characteristics
Botanists define vascular plants by three primary characteristics:
Vascular plants have vascular tissues which distribute resources through the plant. Two kinds of vascular tissue occur in plants: xylem and phloem. Phloem and xylem are closely associated with one another and are typically located immediately adjacent to each other in the plant. The combination of one xylem and one phloem strand adjacent to each other is known as a vascular bundle. The evolution of vascular tissue in plants allowed them to evolve to larger sizes than non-vascular plants, which lack these specialized conducting tissues and are thereby restricted to relatively small sizes.
In vascular plants, the principal generation or phase is the sporophyte, which produces spores and is diploid (having two sets of chromosomes per cell). (By contrast, the principal generation phase in non-vascular plants is the gametophyte, which produces gametes and is haploid - with
Document 3:::
A stem is one of two main structural axes of a vascular plant, the other being the root. It supports leaves, flowers and fruits, transports water and dissolved substances between the roots and the shoots in the xylem and phloem, serves as a site of photosynthesis, stores nutrients, and produces new living tissue. The stem can also be called a halm, haulm, or culm.
The stem is normally divided into nodes and internodes:
The nodes are the points of attachment for leaves and can hold one or more leaves. There are sometimes axillary buds between the stem and leaf which can grow into branches (with leaves, conifer cones, or flowers). Adventitious roots may also be produced from the nodes. Vines may produce tendrils from nodes.
The internodes distance one node from another.
The term "shoots" is often confused with "stems"; "shoots" generally refers to new fresh plant growth, including both stems and other structures like leaves or flowers.
In most plants, stems are located above the soil surface, but some plants have underground stems.
Stems have several main functions:
Support for and the elevation of leaves, flowers, and fruits. The stems keep the leaves in the light and provide a place for the plant to keep its flowers and fruits.
Transport of fluids between the roots and the shoots in the xylem and phloem.
Storage of nutrients.
Production of new living tissue. The normal lifespan of plant cells is one to three years. Stems have cells called meristems that annually generate new living tissue.
Photosynthesis.
Stems have two pipe-like tissues called xylem and phloem. The xylem tissue arises from the cell facing inside and transports water by the action of transpiration pull, capillary action, and root pressure. The phloem tissue arises from the cell facing outside and consists of sieve tubes and their companion cells. The function of phloem tissue is to distribute food from photosynthetic tissue to other tissues. The two tissues are separated by cambium, a tis
Document 4:::
Xylem is one of the two types of transport tissue in vascular plants, the other being phloem. The basic function of the xylem is to transport water from roots to stems and leaves, but it also transports nutrients. The word xylem is derived from the Ancient Greek word (xylon), meaning "wood"; the best-known xylem tissue is wood, though it is found throughout a plant. The term was introduced by Carl Nägeli in 1858.
Structure
The most distinctive xylem cells are the long tracheary elements that transport water. Tracheids and vessel elements are distinguished by their shape; vessel elements are shorter, and are connected together into long tubes that are called vessels.
Xylem also contains two other type of cells: parenchyma and fibers.
Xylem can be found:
in vascular bundles, present in non-woody plants and non-woody parts of woody plants
in secondary xylem, laid down by a meristem called the vascular cambium in woody plants
as part of a stelar arrangement not divided into bundles, as in many ferns.
In transitional stages of plants with secondary growth, the first two categories are not mutually exclusive, although usually a vascular bundle will contain primary xylem only.
The branching pattern exhibited by xylem follows Murray's law.
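For reference, Murray's law relates the radius of a parent conduit to the radii of its daughter branches; the exponent of exactly 3 is an idealization derived for minimum-work transport networks:

\[
r_{\text{parent}}^{3} = \sum_{i} r_{\text{daughter},\,i}^{3}.
\]

For example, a vessel of radius \(r\) splitting into two equal daughters would give each daughter a radius of about \(r/2^{1/3} \approx 0.79\,r\).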
Primary and secondary xylem
Primary xylem is formed during primary growth from procambium. It includes protoxylem and metaxylem. Metaxylem develops after the protoxylem but before secondary xylem. Metaxylem has wider vessels and tracheids than protoxylem.
Secondary xylem is formed during secondary growth from vascular cambium. Although secondary xylem is also found in members of the gymnosperm groups Gnetophyta and Ginkgophyta and to a lesser extent in members of the Cycadophyta, the two main groups in which secondary xylem can be found are:
conifers (Coniferae): there are approximately 600 known species of conifers. All species have secondary xylem, which is relatively uniform in structure throughout this group. Many conife
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of tissue transports water, nutrients, and food in plants?
A. smooth tissue
B. normal tissue
C. vascular tissue
D. rough tissue
Answer:
|
|
sciq-5278
|
multiple_choice
|
What is part of a cycle that holds an element or water for a long period of time called?
|
[
"a holding tank",
"a ditch",
"a reservoir",
"homeostasis"
] |
C
|
Relavent Documents:
Document 0:::
The International Space Station Environmental Control and Life Support System (ECLSS) is a life support system that provides or controls atmospheric pressure, fire detection and suppression, oxygen levels, waste management and water supply. The highest priority for the ECLSS is the ISS atmosphere, but the system also collects, processes, and stores both waste and water produced and used by the crew—a process that recycles fluid from the sink, shower, toilet, and condensation from the air.
The Elektron system aboard Zvezda and a similar system in Destiny generate oxygen aboard the station.
The crew has a backup option in the form of bottled oxygen and Solid Fuel Oxygen Generation (SFOG) canisters.
Carbon dioxide is removed from the air by the Vozdukh system in Zvezda, one Carbon Dioxide Removal Assembly (CDRA) located in the U.S. Lab module, and one CDRA in the U.S. Node 3 module. Other by-products of human metabolism, such as methane from flatulence and ammonia from sweat, are removed by activated charcoal filters or by the Trace Contaminant Control System (TCCS).
Water recovery systems
The ISS has two water recovery systems. Zvezda contains a water recovery system that processes water vapor from the atmosphere that could be used for drinking in an emergency but is normally fed to the Elektron system to produce oxygen. The American segment has a Water Recovery System installed during STS-126 that can process water vapour collected from the atmosphere and urine into water that is intended for drinking. The Water Recovery System was installed initially in Destiny on a temporary basis in November 2008 and moved into Tranquility (Node 3) in February 2010.
The Water Recovery System consists of a Urine Processor Assembly and a Water Processor Assembly, housed in two of the three ECLSS racks.
The Urine Processor Assembly uses a low pressure vacuum distillation process that relies on a centrifuge to compensate for the lack of gravity and thus aid in separating liquids and gases.
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in their conceptual understanding over the course.
Document 2:::
A biogeochemical cycle, or more generally a cycle of matter, is the movement and transformation of chemical elements and compounds between living organisms, the atmosphere, and the Earth's crust. Major biogeochemical cycles include the carbon cycle, the nitrogen cycle and the water cycle. In each cycle, the chemical element or molecule is transformed and cycled by living organisms and through various geological forms and reservoirs, including the atmosphere, the soil and the oceans. It can be thought of as the pathway by which a chemical substance cycles (is turned over or moves through) the biotic compartment and the abiotic compartments of Earth. The biotic compartment is the biosphere and the abiotic compartments are the atmosphere, lithosphere and hydrosphere.
For example, in the carbon cycle, atmospheric carbon dioxide is absorbed by plants through photosynthesis, which converts it into organic compounds that are used by organisms for energy and growth. Carbon is then released back into the atmosphere through respiration and decomposition. Additionally, carbon is stored in fossil fuels and is released into the atmosphere through human activities such as burning fossil fuels. In the nitrogen cycle, atmospheric nitrogen gas is converted by plants into usable forms such as ammonia and nitrates through the process of nitrogen fixation. These compounds can be used by other organisms, and nitrogen is returned to the atmosphere through denitrification and other processes. In the water cycle, the universal solvent water evaporates from land and oceans to form clouds in the atmosphere, and then precipitates back to different parts of the planet. Precipitation can seep into the ground and become part of groundwater systems used by plants and other organisms, or can runoff the surface to form lakes and rivers. Subterranean water can then seep into the ocean along with river discharges, rich with dissolved and particulate organic matter and other nutrients.
There are bio
Document 3:::
The water cycle, also known as the hydrologic cycle or the hydrological cycle, is a biogeochemical cycle that describes the continuous movement of water on, above and below the surface of the Earth. The mass of water on Earth remains fairly constant over time but the partitioning of the water into the major reservoirs of ice, fresh water, saline water (salt water) and atmospheric water is variable depending on a wide range of climatic variables. The water moves from one reservoir to another, such as from river to ocean, or from the ocean to the atmosphere, by the physical processes of evaporation, transpiration, condensation, precipitation, infiltration, surface runoff, and subsurface flow. In doing so, the water goes through different forms: liquid, solid (ice) and vapor. The ocean plays a key role in the water cycle as it is the source of 86% of global evaporation.
The water cycle involves the exchange of energy, which leads to temperature changes. When water evaporates, it takes up energy from its surroundings and cools the environment. When it condenses, it releases energy and warms the environment. These heat exchanges influence climate.
The evaporative phase of the cycle purifies water, causing salts and other solids picked up during the cycle to be left behind, and then the condensation phase in the atmosphere replenishes the land with freshwater. The flow of liquid water and ice transports minerals across the globe. It is also involved in reshaping the geological features of the Earth, through processes including erosion and sedimentation. The water cycle is also essential for the maintenance of most life and ecosystems on the planet.
Description
Overall process
The water cycle is powered from the energy emitted by the sun. This energy heats water in the ocean and seas. Water evaporates as water vapor into the air. Some ice and snow sublimates directly into water vapor. Evapotranspiration is water transpired from plants and evaporated from the soil. Th
Document 4:::
In mathematics, the lakes of Wada are three disjoint connected open sets of the plane or open unit square with the counterintuitive property that they all have the same boundary. In other words, for any point selected on the boundary of one of the lakes, the other two lakes' boundaries also contain that point.
More than two sets with the same boundary are said to have the Wada property; examples include Wada basins in dynamical systems. This property is rare in real-world systems.
The lakes of Wada were introduced by , who credited the discovery to Takeo Wada. His construction is similar to the construction by of an indecomposable continuum, and in fact it is possible for the common boundary of the three sets to be an indecomposable continuum.
Construction of the lakes of Wada
The Lakes of Wada are formed by starting with a closed unit square of dry land, and then digging 3 lakes according to the following rule:
On day n = 1, 2, 3,... extend lake n mod 3 (= 0, 1, 2) so that it is open and connected and passes within a distance 1/n of all remaining dry land. This should be done so that the remaining dry land remains homeomorphic to a closed unit square.
After an infinite number of days, the three lakes are still disjoint connected open sets, and the remaining dry land is the boundary of each of the 3 lakes.
For example, the first five days might be (see the image on the right):
Dig a blue lake of width 1/3 passing within √2/3 of all dry land.
Dig a red lake of width 1/3² passing within √2/3² of all dry land.
Dig a green lake of width 1/3³ passing within √2/3³ of all dry land.
Extend the blue lake by a channel of width 1/3⁴ passing within √2/3⁴ of all dry land. (The small channel connects the thin blue lake to the thick one, near the middle of the image.)
Extend the red lake by a channel of width 1/3⁵ passing within √2/3⁵ of all dry land. (The tiny channel connects the thin red lake to the thick one, near the top left of the image.)
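A minimal Python sketch of the digging schedule implied by the rule above; it only tabulates which lake is extended on each day, the channel width used in the example, and the proximity to dry land required by the rule, and makes no attempt to construct the actual open sets:

from fractions import Fraction

# Lake numbering follows the rule (lake = day mod 3); the colors follow the
# order used in the example above: day 1 -> blue, day 2 -> red, day 3 -> green.
colors = {1: "blue", 2: "red", 0: "green"}

for day in range(1, 6):
    lake = day % 3                    # rule: extend lake (day mod 3)
    width = Fraction(1, 3 ** day)     # channel width used in the example: 1/3^day
    proximity = Fraction(1, day)      # rule: pass within 1/day of all remaining dry land
    print(f"day {day}: extend the {colors[lake]} lake (lake {lake}), "
          f"channel width {width}, within {proximity} of all remaining dry land")

Note that the worked example quotes the tighter proximity √2/3^n on day n, while the general rule only demands 1/n; both shrink to zero, so either schedule brings every lake arbitrarily close to every remaining point of dry land.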
A variation of this construction can produce
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is part of a cycle that holds an element or water for a long period of time called?
A. a holding tank
B. a ditch
C. a reservoir
D. homeostasis
Answer:
|
|
sciq-7285
|
multiple_choice
|
What serves as the control center of the nervous system?
|
[
"bone",
"spine",
"brain",
"heart"
] |
C
|
Relavent Documents:
Document 0:::
The following outline is provided as an overview of and topical guide to the human nervous system:
Human nervous system – the part of the human body that coordinates a person's voluntary and involuntary actions and transmits signals between different parts of the body. The human nervous system consists of two main parts: the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS contains the brain and spinal cord. The PNS consists mainly of nerves, which are long fibers that connect the CNS to every other part of the body. The PNS includes motor neurons, mediating voluntary movement; the autonomic nervous system, comprising the sympathetic nervous system and the parasympathetic nervous system and regulating involuntary functions; and the enteric nervous system, a semi-independent part of the nervous system whose function is to control the gastrointestinal system.
Evolution of the human nervous system
Evolution of nervous systems
Evolution of human intelligence
Evolution of the human brain
Paleoneurology
Some branches of science that study the human nervous system
Neuroscience
Neurology
Paleoneurology
Central nervous system
The central nervous system (CNS) is the largest part of the nervous system and includes the brain and spinal cord.
Spinal cord
Brain
Brain – center of the nervous system.
Outline of the human brain
List of regions of the human brain
Principal regions of the vertebrate brain:
Peripheral nervous system
Peripheral nervous system (PNS) – nervous system structures that do not lie within the CNS.
Sensory system
A sensory system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory receptors, neural pathways, and parts of the brain involved in sensory perception.
List of sensory systems
Sensory neuron
Perception
Visual system
Auditory system
Somatosensory system
Vestibular system
Olfactory system
Taste
Pain
Components of the nervous system
Neuron
I
Document 1:::
The ovarian cortex is the outer portion of the ovary. The ovarian follicles are located within the ovarian cortex. The ovarian cortex is made up of connective tissue. Ovarian cortex tissue transplant has been performed to treat infertility.
Document 2:::
The chemoreceptor trigger zone (CTZ) is an area of the medulla oblongata that receives inputs from blood-borne drugs or hormones, and communicates with other structures in the vomiting center to initiate vomiting. The CTZ is located within the area postrema, which is on the floor of the fourth ventricle and is outside of the blood–brain barrier. It is also part of the vomiting center itself. The neurotransmitters implicated in the control of nausea and vomiting include acetylcholine, dopamine, histamine (H1 receptor), substance P (NK-1 receptor), and serotonin (5-HT3 receptor). There are also opioid receptors present, which may be involved in the mechanism by which opiates cause nausea and vomiting. The blood–brain barrier is not as developed here; therefore, drugs such as dopamine which cannot normally enter the CNS may still stimulate the CTZ.
Evolutionary significance
The CTZ is in the medulla oblongata, which is phylogenetically the oldest part of the central nervous system. Early lifeforms developed a brainstem, or inner brain, and nothing more. This part of the brain is responsible for basic survival instincts and reactions, for example to make an organism turn its head and look where an auditory stimulus was heard. The brainstem is where the medulla is located, and therefore also the area postrema and the CTZ. Then later lifeforms developed another segment of the brain, which includes the limbic system. This area of the brain is responsible for producing emotion and emotional responses to external stimuli, and also is significantly involved in memory and reward systems. Evolutionarily, the cerebral cortex is the most recent development. This area of the brain is responsible for critical thinking and reasoning, and is actively involved in decision making. It has been discovered that a major cause of increased intelligence in species including humans is the increase in cortical neurons in the brain. The emetic response was selected for protective purposes, and
Document 3:::
The human brain anatomical regions are ordered following standard neuroanatomy hierarchies. Functional, connective, and developmental regions are listed in parentheses where appropriate.
Hindbrain (rhombencephalon)
Myelencephalon
Medulla oblongata
Medullary pyramids
Arcuate nucleus
Olivary body
Inferior olivary nucleus
Rostral ventrolateral medulla
Caudal ventrolateral medulla
Solitary nucleus (Nucleus of the solitary tract)
Respiratory center-Respiratory groups
Dorsal respiratory group
Ventral respiratory group or Apneustic centre
Pre-Bötzinger complex
Botzinger complex
Retrotrapezoid nucleus
Nucleus retrofacialis
Nucleus retroambiguus
Nucleus para-ambiguus
Paramedian reticular nucleus
Gigantocellular reticular nucleus
Parafacial zone
Cuneate nucleus
Gracile nucleus
Perihypoglossal nuclei
Intercalated nucleus
Prepositus nucleus
Sublingual nucleus
Area postrema
Medullary cranial nerve nuclei
Inferior salivatory nucleus
Nucleus ambiguus
Dorsal nucleus of vagus nerve
Hypoglossal nucleus
Chemoreceptor trigger zone
Metencephalon
Pons
Pontine nuclei
Pontine cranial nerve nuclei
Chief or pontine nucleus of the trigeminal nerve sensory nucleus (V)
Motor nucleus for the trigeminal nerve (V)
Abducens nucleus (VI)
Facial nerve nucleus (VII)
Vestibulocochlear nuclei (vestibular nuclei and cochlear nuclei) (VIII)
Superior salivatory nucleus
Pontine tegmentum
Pontine micturition center (Barrington's nucleus)
Locus coeruleus
Pedunculopontine nucleus
Laterodorsal tegmental nucleus
Tegmental pontine reticular nucleus
Nucleus incertus
Parabrachial area
Medial parabrachial nucleus
Lateral parabrachial nucleus
Subparabrachial nucleus (Kölliker-Fuse nucleus)
Pontine respiratory group
Superior olivary complex
Medial superior olive
Lateral superior olive
Medial nucleus of the trapezoid body
Paramedian pontine reticular formation
Parvocellular reticular nucleus
Caudal pontine reticular nucleus
Cerebellar peduncles
Superior cerebellar peduncle
Middle cerebellar peduncle
Inferior cerebellar peduncle
Document 4:::
The brain (or encephalon) is an organ that serves as the center of the nervous system in all vertebrate and most invertebrate animals. The brain is the largest cluster of neurons in the body and is typically located in the head, usually near organs for special senses such as vision, hearing and olfaction. It is the most specialized and energy-consuming organ in the body, responsible for complex sensory perception, motor control, endocrine regulation and the development of intelligence.
While invertebrate brains arise from paired segmental ganglia (each of which is only responsible for the respective body segment) of the ventral nerve cord, vertebrate brains develop axially from the midline dorsal nerve cord as a vesicular enlargement at the rostral end of the neural tube, with centralized control over all body segments. All vertebrate brains can be embryonically divided into three parts: the forebrain (prosencephalon, subdivided into telencephalon and diencephalon), midbrain (mesencephalon) and hindbrain (rhombencephalon, subdivided into metencephalon and myelencephalon). The spinal cord, which directly interacts with somatic functions below the head, can be considered a caudal extension of the myelencephalon enclosed inside the vertebral column. Together, the brain and spinal cord constitute the central nervous system in all vertebrates.
In humans, the cerebral cortex contains approximately 14–16 billion neurons, and the estimated number of neurons in the cerebellum is 55–70 billion. Each neuron is connected by synapses to several thousand other neurons, typically communicating with one another via root-like protrusions called dendrites and long fiber-like extensions called axons, which are usually myelinated and carry trains of rapid micro-electric signal pulses called action potentials to target specific recipient cells in other areas of the brain or distant parts of the body. The prefrontal cortex, which controls executive functions, is particularly well developed.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What serves as the control center of the nervous system?
A. bone
B. spine
C. brain
D. heart
Answer:
|
|
sciq-5283
|
multiple_choice
|
What gas, that is dissolved in solution, is used in carbonated beverages?
|
[
"carbon monoxide",
"hydrogen peroxide",
"phosphorus dioxide",
"carbon dioxide"
] |
D
|
Relavent Documents:
Document 0:::
Activated carbon, also called activated charcoal, is a form of carbon commonly used to filter contaminants from water and air, among many other uses. It is processed (activated) to have small, low-volume pores that increase the surface area available for adsorption (which is not the same as absorption) or chemical reactions. Activation is analogous to making popcorn from dried corn kernels: popcorn is light, fluffy, and its kernels have a high surface-area-to-volume ratio. Activated is sometimes replaced by active.
Due to its high degree of microporosity, one gram of activated carbon has a surface area in excess of as determined by gas adsorption. Charcoal, before activation, has a specific surface area in the range of . An activation level sufficient for useful application may be obtained solely from high surface area. Further chemical treatment often enhances adsorption properties.
Activated carbon is usually derived from waste products such as coconut husks; waste from paper mills has been studied as a source. These bulk sources are converted into charcoal before being 'activated'. When derived from coal it is referred to as activated coal. Activated coke is derived from coke.
Uses
Activated carbon is used in methane and hydrogen storage, air purification, capacitive deionization, supercapacitive swing adsorption, solvent recovery, decaffeination, gold purification, metal extraction, water purification, medicine, sewage treatment, air filters in respirators, filters in compressed air, teeth whitening, production of hydrogen chloride, edible electronics, and many other applications.
Industrial
One major industrial application involves use of activated carbon in metal finishing for purification of electroplating solutions. For example, it is the main purification technique for removing organic impurities from bright nickel plating solutions. A variety of organic chemicals are added to plating solutions for improving their deposit qualities and for enhancing
Document 1:::
A breakthrough curve in adsorption is the course of the effluent adsorptive concentration at the outlet of a fixed bed adsorber. Breakthrough curves are important for adsorptive separation technologies and for the characterization of porous materials.
Importance
Since almost all adsorptive separation processes are dynamic - meaning that they run under flow - porous materials intended for these applications must also have their separation performance tested under flow. Because separation processes operate on mixtures of different components, measuring several breakthrough curves yields thermodynamic mixture equilibria - mixture sorption isotherms - which are hardly accessible with static manometric sorption characterization. This enables the determination of sorption selectivities in the gaseous and liquid phases.
The determination of breakthrough curves is the foundation of many other processes, like the pressure swing adsorption. Within this process, the loading of one adsorber is equivalent to a breakthrough experiment.
Measurement
A fixed bed of porous materials (e.g. activated carbons and zeolites) is pressurized and purged with a carrier gas. Once the flow has become stationary, one or more adsorptives are added to the carrier gas, resulting in a step-wise change of the inlet concentration. This is in contrast to chromatographic separation processes, where pulse-wise changes of the inlet concentrations are used. The course of the adsorptive concentrations at the outlet of the fixed bed is monitored.
Results
Integration of the area above the entire breakthrough curve gives the maximum loading of the adsorptive material. Additionally, the duration of the breakthrough experiment until the adsorptive concentration at the outlet reaches a certain threshold can be measured, which enables the calculation of a technically usable sorption capacity. Up to this time, the quality of the product stream can be maintained. The shape of the breakthrough curve contains additional information about the dynamics of the adsorption process.
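One common way to write the mass balance behind this integration (a sketch, assuming a step change from zero to the inlet concentration c0 at t = 0, volumetric flow rate Q, adsorbent mass m, and negligible accumulation in the void volume of the bed) is

\[
q_{\max} = \frac{Q\,c_0}{m}\int_{0}^{\infty}\left(1-\frac{c_{\text{out}}(t)}{c_0}\right)dt,
\]

where the integral is exactly the area above the breakthrough curve; truncating it at the chosen breakthrough threshold instead of integrating to infinity gives the technically usable capacity mentioned above.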
Document 2:::
In the alcoholic beverages industry, congeners are substances, other than the desired type of alcohol, ethanol, produced during fermentation. These substances include small amounts of chemicals such as methanol and other alcohols (known as fusel alcohols), acetone, acetaldehyde, esters, tannins, and aldehydes (e.g. furfural). Congeners are responsible for most of the taste and aroma of distilled alcoholic beverages, and contribute to the taste of non-distilled drinks. Brandy, rum and red wine have the highest amount of congeners, while vodka and beer have the least.
Congeners are the basis of alcohol congener analysis, a sub-discipline of forensic toxicology which determines what a person drank.
There is some evidence that high-congener drinks induce more severe hangovers, but the effect is not well studied and is still secondary to the total amount of ethanol consumed.
See also
Alcohol (drug)
Alcohol congener analysis
Wine chemistry
Document 3:::
Isinglass ( ) is a substance obtained from the dried swim bladders of fish. It is a form of collagen used mainly for the clarification or fining of some beer and wine. It can also be cooked into a paste for specialised gluing purposes.
The English word origin is from the obsolete Dutch huizenblaas – huizen is a kind of sturgeon, and blaas is a bladder, or German Hausenblase, meaning essentially the same.
Although originally made exclusively from sturgeon, especially beluga, in 1795 an invention by William Murdoch facilitated a cheap substitute using cod. This was extensively used in Britain in place of Russian isinglass, and in the US hake was important. In modern British brewing all commercial isinglass products are blends of material from a limited range of tropical fish. The bladders, once removed from the fish, processed, and dried, are formed into various shapes for use.
Foods and drinks
Before the inexpensive production of gelatin and other competing products, isinglass was used in confectionery and desserts such as fruit jelly and blancmange.
Isinglass finings are widely used as a processing aid in the British brewing industry to accelerate the fining, or clarification, of beer. It is used particularly in the production of cask-conditioned beers, although many cask ales are available which are not fined using isinglass. The finings flocculate the live yeast in the beer into a jelly-like mass, which settles to the bottom of the cask. Left undisturbed, beer will clear naturally; the use of isinglass finings accelerates the process. Isinglass is sometimes used with an auxiliary fining, which further accelerates the process of sedimentation.
Non-cask beers that are destined for kegs, cans, or bottles are often pasteurised and filtered. The yeast in these beers tends to settle to the bottom of the storage tank naturally, so the sediment from these beers can often be filtered without using isinglass. However, some breweries still use isinglass finings for n
Document 4:::
Liquefied gas (sometimes referred to as liquid gas) is a gas that has been turned into a liquid by cooling or compressing it. Examples of liquefied gases include liquid air, liquefied natural gas, and liquefied petroleum gas.
Liquid air
At the Lister Institute of Preventive Medicine, liquid air has been brought into use as an agent in biological research. An inquiry into the intracellular constituents of the typhoid bacillus, initiated under the direction of Doctor Allan Macfadyen, necessitated the separation of the cell-plasma of the organism. The method at first adopted for the disintegration of the bacteria was to mix them with silver-sand and churn the whole up in a closed vessel in which a series of horizontal vanes revolved at a high speed. But certain disadvantages attached to this procedure, and accordingly some means was sought to do away with the sand and triturate the bacilli per se. This was found in liquid air, which, as had long before been shown at the Royal Institution, has the power of reducing materials like grass or the leaves of plants to such a state of brittleness that they can easily be powdered in a mortar. By its aid a complete trituration of the typhoid bacilli has been accomplished at the Jenner Institute, and the same process, already applied with success also to yeast cells and animal cells, is being extended in other directions.
When air is liquefied the oxygen and nitrogen are condensed simultaneously. However, owing to its greater volatility the latter boils off the more quickly of the two, so that the remaining liquid becomes gradually richer and richer in oxygen.
Liquefied natural gas
Liquefied natural gas is natural gas that has been liquefied for the purpose of storage or transport. Since transportation of natural gas requires a large network of pipeline that crosses through various terrains and oceans, a huge investment and long term planning are required. Before transport, natural gas is liquefied by cooling it to approximately −162 °C. The li
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What gas, that is dissolved in solution, is used in carbonated beverages?
A. carbon monoxide
B. hydrogen peroxide
C. phosphorus dioxide
D. carbon dioxide
Answer:
|
|
sciq-8971
|
multiple_choice
|
What is emitted by nuclei in alpha, beta, and gamma decay?
|
[
"convection",
"magnetic field",
"radiation",
"solar energy"
] |
C
|
Relevant Documents:
Document 0:::
A gamma ray, also known as gamma radiation (symbol γ), is a penetrating form of electromagnetic radiation arising from the radioactive decay of atomic nuclei. It consists of the shortest wavelength electromagnetic waves, typically shorter than those of X-rays. With frequencies above 30 exahertz (3×10^19 Hz), it imparts the highest photon energy. Paul Villard, a French chemist and physicist, discovered gamma radiation in 1900 while studying radiation emitted by radium. In 1903, Ernest Rutherford named this radiation gamma rays based on their relatively strong penetration of matter; in 1900 he had already named two less penetrating types of decay radiation (discovered by Henri Becquerel) alpha rays and beta rays in ascending order of penetrating power.
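As a quick sanity check on that frequency threshold (a back-of-the-envelope calculation, not part of the excerpt), the Planck relation gives the corresponding photon energy:

E = h\nu \approx (6.626 \times 10^{-34}\ \mathrm{J\,s}) \times (3 \times 10^{19}\ \mathrm{Hz}) \approx 2.0 \times 10^{-14}\ \mathrm{J} \approx 1.2 \times 10^{5}\ \mathrm{eV} \approx 124\ \mathrm{keV},

which is consistent with gamma-ray photons carrying energies of roughly 100 keV and above.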
Gamma rays from radioactive decay are in the energy range from a few kiloelectronvolts (keV) to approximately 8 megaelectronvolts (MeV), corresponding to the typical energy levels in nuclei with reasonably long lifetimes. The energy spectrum of gamma rays can be used to identify the decaying radionuclides using gamma spectroscopy. Very-high-energy gamma rays in the 100–1000 teraelectronvolt (TeV) range have been observed from sources such as the Cygnus X-3 microquasar.
Natural sources of gamma rays originating on Earth are mostly a result of radioactive decay and secondary radiation from atmospheric interactions with cosmic ray particles. However, there are other rare natural sources, such as terrestrial gamma-ray flashes, which produce gamma rays from electron action upon the nucleus. Notable artificial sources of gamma rays include fission, such as that which occurs in nuclear reactors, and high energy physics experiments, such as neutral pion decay and nuclear fusion.
Gamma rays and X-rays are both electromagnetic radiation, and since they overlap in the electromagnetic spectrum, the terminology varies between scientific disciplines. In some fields of physics, they are distinguished by their origin: Gamma rays are creat
Document 1:::
Reaction products
This sequence of reactions can be understood by thinking of the two interacting carbon nuclei as coming together to form an excited state of the 24Mg nucleus, which then decays in one of the five ways listed above. The first two reactions are strongly exothermic, as indicated by the large positive energies released, and ar
Document 2:::
In nuclear physics and chemistry, the Q value for a reaction is the amount of energy absorbed or released during the nuclear reaction. The Q value relates to the enthalpy of a chemical reaction or the energy of radioactive decay products. It can be determined from the masses of reactants and products. Q values affect reaction rates. In general, the larger the positive Q value for the reaction, the faster the reaction proceeds, and the more likely the reaction is to "favor" the products.
$Q = (m_\text{reactants} - m_\text{products}) \times 931.5\ \text{MeV}$, where the masses are in atomic mass units. Also, both $m_\text{reactants}$ and $m_\text{products}$ are the sums of the reactant and product masses respectively.
Definition
The conservation of energy between the initial and final energy of a nuclear process enables the general definition of Q based on the mass–energy equivalence. For any radioactive particle decay, the kinetic energy difference will be given by:
$Q = K_\text{f} - K_\text{i} = (m_\text{i} - m_\text{f})\,c^2$, where $K$ denotes the kinetic energy of the mass $m$.
A reaction with a positive Q value is exothermic, i.e. has a net release of energy, since the kinetic energy of the final state is greater than the kinetic energy of the initial state.
A reaction with a negative Q value is endothermic, i.e. requires a net energy input, since the kinetic energy of the final state is less than the kinetic energy of the initial state. Observe that a chemical reaction is exothermic when it has a negative enthalpy of reaction; in contrast, an exothermic nuclear reaction has a positive Q value.
The Q value can also be expressed in terms of the mass excess $\Delta$ of the nuclear species as $Q = \Delta_\text{initial} - \Delta_\text{final}$.
Proof: The mass of a nucleus can be written as $M = A\,u + \Delta$, where $A$ is the mass number (sum of number of protons and neutrons) and $u \approx 931.494\ \text{MeV}/c^2$. Note that the count of nucleons is conserved in a nuclear reaction. Hence, $A_\text{initial} = A_\text{final}$ and $Q = \Delta_\text{initial} - \Delta_\text{final}$.
Applications
Chemical Q values are measured in calorimetry. Exothermic chemical reactions tend to be more spontaneous and can emit light or heat, resulting in runaway feedback (i.e. explosions).
Q values are also featured in particle physics. For example,
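To make the Q-value arithmetic above concrete, here is a minimal Python sketch (the function is hypothetical; the atomic masses are standard tabulated values, rounded) that evaluates Q for the alpha decay of uranium-238:

U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit, in MeV

def q_value_mev(reactant_masses_u, product_masses_u):
    # Q = (sum of reactant masses - sum of product masses) * 931.494 MeV,
    # with all masses given in atomic mass units (u).
    return (sum(reactant_masses_u) - sum(product_masses_u)) * U_TO_MEV

# Alpha decay 238U -> 234Th + 4He, atomic masses in u:
q = q_value_mev([238.050788], [234.043601, 4.002602])
print(f"Q = {q:.2f} MeV")  # roughly +4.27 MeV, so the decay is exothermic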
Document 3:::
Radiation chemistry is a subdivision of nuclear chemistry which studies the chemical effects of ionizing radiation on matter. This is quite different from radiochemistry, as no radioactivity needs to be present in the material which is being chemically changed by the radiation. An example is the conversion of water into hydrogen gas and hydrogen peroxide.
Radiation interactions with matter
As ionizing radiation moves through matter its energy is deposited through interactions with the electrons of the absorber. The result of an interaction between the radiation and the absorbing species is removal of an electron from an atom or molecular bond to form radicals and excited species. The radical species then proceed to react with each other or with other molecules in their vicinity. It is the reactions of the radical species that are responsible for the changes observed following irradiation of a chemical system.
Charged radiation species (α and β particles) interact through Coulombic forces between the charges of the electrons in the absorbing medium and the charged radiation particle. These interactions occur continuously along the path of the incident particle until the kinetic energy of the particle is sufficiently depleted. Uncharged species (γ photons, x-rays) undergo a single event per photon, totally consuming the energy of the photon and leading to the ejection of an electron from a single atom. Electrons with sufficient energy proceed to interact with the absorbing medium identically to β radiation.
An important factor that distinguishes different radiation types from one another is the linear energy transfer (LET), which is the rate at which the radiation loses energy with distance traveled through the absorber. Low LET species are usually low mass, either photons or electron mass species (β particles, positrons) and interact sparsely along their path through the absorber, leading to isolated regions of reactive radical species. High LET species are usuall
Document 4:::
Ionizing radiation (or ionising radiation), including nuclear radiation, consists of subatomic particles or electromagnetic waves that have sufficient energy to ionize atoms or molecules by detaching electrons from them. Some particles can travel up to 99% of the speed of light, and the electromagnetic waves are on the high-energy portion of the electromagnetic spectrum.
Gamma rays, X-rays, and the higher energy ultraviolet part of the electromagnetic spectrum are ionizing radiation, whereas the lower energy ultraviolet, visible light, nearly all types of laser light, infrared, microwaves, and radio waves are non-ionizing radiation. The boundary between ionizing and non-ionizing radiation in the ultraviolet area cannot be sharply defined, as different molecules and atoms ionize at different energies. The energy of ionizing radiation starts between 10 electronvolts (eV) and 33 eV.
Typical ionizing subatomic particles include alpha particles, beta particles, and neutrons. These are typically created by radioactive decay, and almost all are energetic enough to ionize. There are also secondary cosmic particles produced after cosmic rays interact with Earth's atmosphere, including muons, mesons, and positrons. Cosmic rays may also produce radioisotopes on Earth (for example, carbon-14), which in turn decay and emit ionizing radiation. Cosmic rays and the decay of radioactive isotopes are the primary sources of natural ionizing radiation on Earth, contributing to background radiation. Ionizing radiation is also generated artificially by X-ray tubes, particle accelerators, and nuclear fission.
Ionizing radiation is not immediately detectable by human senses, so instruments such as Geiger counters are used to detect and measure it. However, very high energy particles can produce visible effects on both organic and inorganic matter (e.g. water lighting in Cherenkov radiation) or humans (e.g. acute radiation syndrome).
Ionizing radiation is used in a wide variety of field
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is emitted by nuclei in alpha, beta, and gamma decay?
A. convection
B. magnetic field
C. radiation
D. solar energy
Answer:
|
|
sciq-3619
|
multiple_choice
|
Most sedimentary rocks form from what?
|
[
"sediments",
"volcanic activity",
"glaciers",
"erosion"
] |
A
|
Relevant Documents:
Document 0:::
In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects.
Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting.
Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete.
Study
Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the
Document 1:::
The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata. That is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events.
Correlating the rock record
At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods—for the particular geographic region or regions. The geologic record is in no one place entirely complete, for where geologic forces in one age provide a low-lying region that accumulates deposits much like a layer cake, in the next age they may have uplifted the region, so that the same area is instead one that is weathering and being torn down by chemistry, wind, temperature, and water. This is to say that in a given location, the geologic record can be and is quite often interrupted as the ancient local environment was converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go deep, thoroughly support the law of superposition.
However, using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale
Document 2:::
The rock cycle is a basic concept in geology that describes transitions through geologic time among the three main rock types: sedimentary, metamorphic, and igneous. Each rock type is altered when it is forced out of its equilibrium conditions. For example, an igneous rock such as basalt may break down and dissolve when exposed to the atmosphere, or melt as it is subducted under a continent. Due to the driving forces of the rock cycle, plate tectonics and the water cycle, rocks do not remain in equilibrium and change as they encounter new environments. The rock cycle explains how the three rock types are related to each other, and how processes change from one type to another over time. This cyclical aspect makes rock change a geologic cycle and, on planets containing life, a biogeochemical cycle.
Transition to igneous rock
When rocks are pushed deep under the Earth's surface, they may melt into magma. If the conditions no longer exist for the magma to stay in its liquid state, it cools and solidifies into an igneous rock. A rock that cools within the Earth is called intrusive or plutonic and cools very slowly, producing a coarse-grained texture such as the rock granite. As a result of volcanic activity, magma (which is called lava when it reaches Earth's surface) may cool very rapidly while exposed to the atmosphere at the Earth's surface; rocks formed this way are called extrusive or volcanic rocks. These rocks are fine-grained and sometimes cool so rapidly that no crystals can form, resulting in a natural glass such as obsidian; however, the most common fine-grained volcanic rock is basalt. Any of the three main types of rocks (igneous, sedimentary, and metamorphic rocks) can melt into magma and cool into igneous rocks.
Secondary changes
Epigenetic change (secondary processes occurring at low temperatures and low pressures) may be arranged under a number of headings, each of which is typical of a group of rocks or rock-forming minerals, though usually more than one of these alt
Document 3:::
Blood Falls is an outflow of an iron oxide–tainted plume of saltwater, flowing from the tongue of Taylor Glacier onto the ice-covered surface of West Lake Bonney in the Taylor Valley of the McMurdo Dry Valleys in Victoria Land, East Antarctica.
Iron-rich hypersaline water sporadically emerges from small fissures in the ice cascades. The saltwater source is a subglacial pool of unknown size overlain by about of ice several kilometers from its tiny outlet at Blood Falls.
The reddish deposit was found in 1911 by the Australian geologist Thomas Griffith Taylor, who first explored the valley that bears his name. The Antarctica pioneers first attributed the red color to red algae, but later it was proven to be due to iron oxides.
Geochemistry
Poorly soluble hydrous ferric oxides are deposited at the surface of ice after the ferrous ions present in the unfrozen saltwater are oxidized in contact with atmospheric oxygen. The more soluble ferrous ions initially are dissolved in old seawater trapped in an ancient pocket remaining from the Antarctic Ocean when a fjord was isolated by the glacier in its progression during the Miocene period, some 5 million years ago, when the sea level was higher than today.
Unlike most Antarctic glaciers, the Taylor Glacier is not frozen to the bedrock, probably because of the presence of salts concentrated by the crystallization of the ancient seawater imprisoned below it. Salt cryo-concentration occurred in the deep relict seawater when pure ice crystallized and expelled its dissolved salts as it cooled down because of the heat exchange of the captive liquid seawater with the enormous ice mass of the glacier. As a consequence, the trapped seawater was concentrated in brines with a salinity two to three times that of the mean ocean water. A second mechanism sometimes also explaining the formation of hypersaline brines is the water evaporation of surface lakes directly exposed to the very dry polar atmosphere in the McMurdo Dry Valleys. Th
Document 4:::
Seismic stratigraphy is a method for studying sedimentary rock in the deep subsurface based on seismic data acquisition.
History
The term Seismic stratigraphy was introduced in 1977 by Vail as an integrated stratigraphic and sedimentologic technique to interpret seismic reflection data for stratigraphic correlation and to predict depositional environments and lithology. This technique was initially employed for petroleum exploration and subsequently evolved into sequence stratigraphy by academic institutes.
Basic Concept
Seismic reflection is generated at interfaces that separate media with different acoustic properties, and traditionally these interfaces have been interpreted as the lithological boundaries. Vail in 1977, however, recognized that these reflections were, in fact, parallel to the bedding surfaces, and therefore time-equivalent surfaces. Interruption of reflections indicates the disappearance of bedding surfaces. Hence, onlap, downlap and toplap and other depositional features observed on surface outcrops have been demonstrated on seismic profiles. This revolutionary interpretation has been substantiated by Vail’s associated industrial drilling results and extensive multichannel seismic data. Furthermore, the most indisputable evidence comes from the progradational dipping reflection pattern associated with the advancing delta deposition in shallow marine environments. Lithological boundaries associated with delta front and slope are nearly horizontal, but are not represented by reflections. Instead, the dipping reflections are a clear indication of depositional surfaces, hence time-plane equivalents.
Methodology
Establishing Sequence Boundary
Sequence boundaries are defined as an erosional unconformity recognized on the seismic profile as a reflection surface with reflection termination features such as truncation below and onlap above the surface, The sequence boundary, therefore, represents a marine regression event, during which continenta
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Most sedimentary rocks form from what?
A. sediments
B. volcanic activity
C. glaciers
D. erosion
Answer:
|
|
sciq-8290
|
multiple_choice
|
Sediments in oligotrophic lakes contain large amounts of what?
|
[
"igneous rocks",
"fertilizer",
"algae",
"decomposable organic matter"
] |
D
|
Relevant Documents:
Document 0:::
Lake 226 is one lake in Canada's Experimental Lakes Area (ELA) in Ontario. The ELA is a freshwater and fisheries research facility that operated these experiments alongside Fisheries and Oceans Canada and Environment Canada. In 1968 this area in northwest Ontario was set aside for limnological research, aiming to study the watershed of the 58 small lakes in this area. The ELA projects began as a response to the claim that carbon was the limiting agent causing eutrophication of lakes rather than phosphorus, and that monitoring phosphorus in the water would be a waste of money. This claim was made by soap and detergent companies, as these products do not biodegrade and can cause buildup of phosphates in water supplies that lead to eutrophication. The theory that carbon was the limiting agent was quickly debunked by the ELA Lake 227 experiment that began in 1969, which found that carbon could be drawn from the atmosphere to remain proportional to the input of phosphorus in the water. Experimental Lake 226 was then created to test phosphorus' impact on eutrophication by itself.
Lake ecosystem
Geography
The ELA lakes were far from human activities, therefore allowing the study of environmental conditions without human interaction. Lake 226 was specifically studied over a four-year period, from 1973–1977 to test eutrophication. Lake 226 itself is a 16.2 ha double basin lake located on highly metamorphosed granite known as Precambrian granite. The depth of the lake was measured in 1994 to be 14.7 m for the northeast basin and 11.6 m for the southeast basin. Lake 226 had a total lake volume of 9.6 × 105 m3, prior to the lake being additionally studied for drawdown alongside other ELA lakes. Due to this relatively small fetch of Lake 226, wind action is minimized, preventing resuspension of epilimnetic sediments.
Eutrophication experiment
To test the effects of fertilization on water quality and algae blooms, Lake 226 was split in half with a curtain. This curtain divi
Document 1:::
Paleolimnology (from Greek: παλαιός, palaios, "ancient", λίμνη, limne, "lake", and λόγος, logos, "study") is a scientific sub-discipline closely related to both limnology and paleoecology. Paleolimnological studies focus on reconstructing the past environments of inland waters (e.g., lakes and streams) using the geologic record, especially with regard to events such as climatic change, eutrophication, acidification, and internal ontogenic processes.
Paleolimnological studies are mostly conducted using analyses of the physical, chemical, and mineralogical properties of sediments, or of biological records such as fossil pollen, diatoms, or chironomids.
History
Lake ontogeny
Most early paleolimnological studies focused on the biological productivity of lakes, and the role of internal lake processes in lake development. Although Einar Naumann had speculated that the productivity of lakes should gradually decrease due to leaching of catchment soils, August Thienemann suggested that the reverse process likely occurred. Early midge records seemed to support Thienemann's view.
Hutchinson and Wollack suggested that, following an initial oligotrophic stage, lakes would achieve and maintain a trophic equilibrium. They also stressed parallels between the early development of lake communities and the sigmoid growth phase of animal communities – implying that the apparent early developmental processes in lakes were dominated by colonization effects, and lags due to the limited reproductive potential of the colonizing organisms.
In a classic paper, Raymond Lindeman outlined a hypothetical developmental sequence, with lakes progressively developing through oligotrophic, mesotrophic, and eutrophic stages, before senescing to a dystrophic stage and then filling completely with sediment. A climax forest community would eventually be established on the peaty fill of the former lake basin. These ideas were further elaborated by Ed Deevey, who suggested that lake development was dom
Document 2:::
Lake 227 is one of 58 lakes located in the Experimental Lakes Area (ELA) in the Kenora District of Ontario, Canada. Lake 227 is one of only 5 lakes in the Experimental Lakes Area currently involved in long-term research projects, and is of particular note for its importance in long-term lake eutrophication studies. The relative absence of human activity and pollution makes Lake 227 ideal for limnological research, and the nature of the ELA makes it one of the only places in the world accessible for full lake experiments. At its deepest, Lake 227 is 10 meters deep, and the area of the lake is approximately 5 hectares. Funding and governmental permissions for access to Lake 227 have been unstable in recent years, as control of the ELA was handed off by the Canadian government to the International Institute for Sustainable Development (IISD).
Ecology
Lake 227 is a freshwater lake. The ELA region is home to a variety of native fish, many of which are planktivorous. Fathead minnows, Fine-scale Dace, and Pearl Dace are all examples of fish that can be found in the lake. The presence of planktivorous fish reduces the relative abundance of larger zooplankton species in the lake, as species like the fathead minnow primarily feed on them. The fish populations in Lake 227 were removed in the 1990s, this resulted in a noticeable increase in the Chaoborus and daphnia populations, in the absence of predation. The removal of fish from the lake negates the top-down effect that repressed larger species of zooplankton and aquatic larvae.
Research
The research in lake 227 is mainly focused on the effects of manipulated nutrients on the interrelated independent variables of microorganism activity and eutrophication. Lake 227 was home to the longest running experiment ever to take place in the ELA.
Lake eutrophication and nutrient factors
Lake 227 has been used as a real life model for the study of the connection between nutrient input and lake eutrophication. The results of these
Document 3:::
In ecology, base-richness is the level of chemical bases in water or soil, such as calcium or magnesium ions. Many organisms prefer base-rich environments. Chemical bases are alkalis, hence base-rich environments are either neutral or alkaline. Because acid-rich environments have few bases, they are dominated by environmental acids (usually organic acids). However, the relationship between base-richness and acidity is not a rigid one – changes in the levels of acids (such as dissolved carbon dioxide) may significantly change acidity without affecting base-richness.
Base-rich terrestrial environments are characteristic of areas where underlying rocks (below soil) are limestone. Seawater is also base-rich, so maritime and marine environments are themselves base-rich.
Base-poor environments are characteristic of areas where underlying rocks (below soil) are sandstone or granite, or where the water is derived directly from rainfall (ombrotrophic).
Examples of base-rich environments
Calcareous grassland
Fen
Limestone pavement
Maquis shrubland
Yew woodland
Examples of base-poor environments
Bog
Heath (habitat)
Poor fen
Moorland
Pine woodland
Tundra
See also
Soil
Calcicole
Calcifuge
Ecology
Soil chemistry
Document 4:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Sediments in oligotrophic lakes contain large amounts of what?
A. igneous rocks
B. fertilizer
C. algae
D. decomposable organic matter
Answer:
|
|
sciq-2161
|
multiple_choice
|
Yolk is a very fragile substance found in the eggs of reptiles and needs protection. What serves as protection for the yolk?
|
[
"eye sac",
"fish sac",
"yolk sac",
"dish sac"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature (see the worked check after the answer choices):
increases
decreases
stays the same
Impossible to tell/need more information
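For readers who want to check the physics behind this sample question, here is a brief worked relation, assuming a quasi-static (reversible) adiabatic expansion of an ideal gas with heat capacity ratio γ > 1:

T V^{\gamma - 1} = \text{const} \quad\Rightarrow\quad \frac{T_2}{T_1} = \left(\frac{V_1}{V_2}\right)^{\gamma - 1} < 1 \quad \text{for } V_2 > V_1,

so the temperature drops under those assumptions; a free (Joule) expansion into vacuum, by contrast, does no work and leaves the temperature of an ideal gas unchanged, which is why the last answer option is not frivolous.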
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
An eggshell is the outer covering of a hard-shelled egg and of some forms of eggs with soft outer coats.
Worm eggs
Nematode eggs present a two layered structure: an external vitellin layer made of chitin that confers mechanical resistance and an internal lipid-rich layer that makes the egg chamber impermeable.
Insect eggs
Insects and other arthropods lay a large variety of styles and shapes of eggs. Some of them have gelatinous or skin-like coverings, others have hard eggshells. Softer shells are mostly protein. It may be fibrous or quite liquid. Some arthropod eggs do not actually have shells; rather, their outer covering is the outermost embryonic membrane, the chorion, which protects inner layers. This can be a complex structure, and it may have different layers, including an outermost layer called an exochorion. Eggs which must survive in dry conditions usually have hard eggshells, made mostly of dehydrated or mineralized proteins with pore systems to allow respiration. Arthropod eggs can have extensive ornamentation on their outer surfaces.
Fish, amphibian and reptile eggs
Fish and amphibians generally lay eggs which are surrounded by the extraembryonic membranes but do not develop a shell, hard or soft, around these membranes. Some fish and amphibian eggs have thick, leathery coats, especially if they must withstand physical force or desiccation. These types of eggs can also be very small and fragile.
While many reptiles lay eggs with flexible, calcified eggshells, there are some that lay hard eggs. Eggs laid by snakes generally have leathery shells which often adhere to one another. Depending on the species, turtles and tortoises lay hard or soft eggs. Several species lay eggs which are nearly indistinguishable from bird eggs.
Bird eggs
The bird egg is a fertilized gamete (or, in the case of some birds, such as chickens, possibly unfertilized) located on the yolk surface and surrounded by albumen, or egg white. The albumen in turn is surro
Document 2:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591.
On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
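A tiny illustration of that raw-score rule (the function name and the example counts are invented here, not part of the College Board description):

def sat_subject_raw_score(correct, incorrect, blank=0):
    # +1 point per correct answer, -1/4 point per incorrect answer, 0 for blanks.
    return correct - 0.25 * incorrect

# Example: out of 80 questions, 62 correct, 12 incorrect, 6 left blank.
print(sat_subject_raw_score(correct=62, incorrect=12, blank=6))  # 59.0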
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 3:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. these come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 4:::
An ootheca (pl. oothecae) is a type of egg capsule made by any member of a variety of species including mollusks (such as Turbinella laevigata), mantises, and cockroaches.
Etymology
The word is a Latinized combination of oo-, meaning "egg", from the Greek word ōon (cf. Latin ovum), and theca, meaning a "cover" or "container", from the Greek theke. Ootheke is Greek for ovary.
Structure
Oothecae are made up of structural proteins and tanning agents that cause the protein to harden around the eggs, providing protection and stability. The production of ootheca convergently evolved across numerous insect species due to a selection for protection from parasites and other forms of predation, as the complex structure of the shell casing provides an evolutionary reproductive advantage (although the fitness and lifespan also depend on other factors such as the temperature of the incubating ootheca). Oothecae are most notably found in the orders Blattodea (Cockroaches) and Mantodea (Praying mantids), as well as in the subfamilies Cassidinae (Coleoptera) and Korinninae (Phasmatodea).
Functions
The ootheca protects the eggs from microorganisms, parasitoids, predators, and weather; the ootheca maintains a stable water balance through variation in its surface, as it is porous in dry climates to protect against desiccation, and smooth in wet climates to protect against oversaturation. Its composition and appearance vary depending on species and environment.
Image gallery
See also
Sang piao xiao, mantis oothecae used in traditional Chinese medicine
Egg cases, also known as egg capsules
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Yolk is a very fragile substance found in the eggs of reptiles and needs protection. What serves as protection for the yolk?
A. eye sac
B. fish sac
C. yolk sac
D. dish sac
Answer:
|
|
sciq-11063
|
multiple_choice
|
A comet striking the Earth may have caused a mass extinction; this would have decreased sunlight, which would have affected what plant process, reducing food?
|
[
"photosynthesis",
"glycolysis",
"fertilization",
"decomposition"
] |
A
|
Relevant Documents:
Document 0:::
Plant ecology is a subdiscipline of ecology that studies the distribution and abundance of plants, the effects of environmental factors upon the abundance of plants, and the interactions among plants and between plants and other organisms. Examples of these are the distribution of temperate deciduous forests in North America, the effects of drought or flooding upon plant survival, and competition among desert plants for water, or effects of herds of grazing animals upon the composition of grasslands.
A global overview of the Earth's major vegetation types is provided by O.W. Archibold. He recognizes 11 major vegetation types: tropical forests, tropical savannas, arid regions (deserts), Mediterranean ecosystems, temperate forest ecosystems, temperate grasslands, coniferous forests, tundra (both polar and high mountain), terrestrial wetlands, freshwater ecosystems and coastal/marine systems. This breadth of topics shows the complexity of plant ecology, since it includes plants from floating single-celled algae up to large canopy forming trees.
One feature that defines plants is photosynthesis. Photosynthesis is a sequence of chemical reactions that creates glucose and oxygen, which are vital for plant life. One of the most important aspects of plant ecology is the role plants have played in creating the oxygenated atmosphere of Earth, an event that occurred some 2 billion years ago. It can be dated by the deposition of banded iron formations, distinctive sedimentary rocks with large amounts of iron oxide. At the same time, plants began removing carbon dioxide from the atmosphere, thereby initiating the process of controlling Earth's climate. A long-term trend of the Earth has been toward increasing oxygen and decreasing carbon dioxide, and many other events in the Earth's history, like the first movement of life onto land, are likely tied to this sequence of events.
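For reference, the simplified overall reaction the excerpt alludes to can be summarized by the familiar balanced equation (a textbook summary, not a claim about the excerpt's source):

6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \;\xrightarrow{\text{light}}\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}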
One of the early classic books on plant ecology was written by J.E. Weaver and F.E. Clements. It
Document 1:::
Agrophysics is a branch of science bordering on agronomy and physics,
whose objects of study are the agroecosystem - the biological objects, biotope and biocoenosis affected by human activity, studied and described using the methods of physical sciences. Using the achievements of the exact sciences to solve major problems in agriculture, agrophysics involves the study of materials and processes occurring in the production and processing of agricultural crops, with particular emphasis on the condition of the environment and the quality of farming materials and food production.
Agrophysics is closely related to biophysics, but is restricted to the physics of the plants, animals, soil and atmosphere involved in agricultural activities and biodiversity. It differs from biophysics in needing to take into account the specific features of the biotope and biocoenosis, which requires knowledge of nutritional science, agroecology, agricultural technology, biotechnology, genetics, etc.
The needs of agriculture, particularly the study of the complex local soil and plant-atmosphere systems in the light of past experience, lay at the root of the emergence of this new branch, agrophysics, which addresses these questions with the methods of experimental physics.
The scope of the branch, which started from soil science (soil physics) and was originally limited to the study of relations within the soil environment, expanded over time to cover the properties of agricultural crops and produce as foods and raw postharvest materials, together with issues of quality, safety and labeling, considered distinct from the field of nutrition for application in food science.
Research centres focused on the development of the agrophysical sciences include the Institute of Agrophysics, Polish Academy of Sciences in Lublin, and the Agrophysical Research Institute, Russian Academy of Sciences in St. Petersburg.
See also
Agriculture science
Agroecology
Genomics
Metagenomics
Metabolomics
Physics (Aristotle)
Proteomi
Document 2:::
Tolerance is the ability of plants to mitigate the negative fitness effects caused by herbivory. It is one of the general plant defense strategies against herbivores, the other being resistance, which is the ability of plants to prevent damage (Strauss and Agrawal 1999). Plant defense strategies play important roles in the survival of plants as they are fed upon by many different types of herbivores, especially insects, which may impose negative fitness effects (Strauss and Zangerl 2002). Damage can occur in almost any part of the plants, including the roots, stems, leaves, flowers and seeds (Strauss and Zangerl 2002). In response to herbivory, plants have evolved a wide variety of defense mechanisms and although relatively less studied than resistance strategies, tolerance traits play a major role in plant defense (Strauss and Zangerl 2002, Rosenthal and Kotanen 1995).
Traits that confer tolerance are controlled genetically and therefore are heritable traits under selection (Strauss and Agrawal 1999). Many factors intrinsic to the plants, such as growth rate, storage capacity, photosynthetic rates and nutrient allocation and uptake, can affect the extent to which plants can tolerate damage (Rosenthal and Kotanen 1994). Extrinsic factors such as soil nutrition, carbon dioxide levels, light levels, water availability and competition also have an effect on tolerance (Rosenthal and Kotanen 1994).
History of the study of plant tolerance
Studies of tolerance to herbivory has historically been the focus of agricultural scientists (Painter 1958; Bardner and Fletcher 1974). Tolerance was actually initially classified as a form of resistance (Painter 1958). Agricultural studies on tolerance, however, are mainly concerned with the compensatory effect on the plants' yield and not its fitness, since it is of economical interest to reduce crop losses due to herbivory by pests (Trumble 1993; Bardner and Fletcher 1974). One surprising discovery made about plant tolerance was th
Document 3:::
The Park Grass Experiment is a biological study originally set up to test the effect of fertilizers and manures on hay yields. The scientific experiment is located at the Rothamsted Research in the English county of Hertfordshire, and is notable as one of the longest-running experiments of modern science, as it was initiated in 1856 and has been continually monitored ever since.
The experiment was originally designed to answer agricultural questions but has since proved an invaluable resource for studying natural selection and biodiversity. The treatments under study were found to be affecting the botanical make-up of the plots and the ecology of the field and it has been studied ever since. In spring, the field is a colourful tapestry of flowers and grasses, some plots still having the wide range of plants that most meadows probably contained hundreds of years ago.
Over its history, Park Grass has:
demonstrated that conventional field trials probably underestimate threats to plant biodiversity from long term changes, such as soil acidification,
shown how plant species richness, biomass and pH are related,
demonstrated that competition between plants can make the effects of climatic variation on communities more extreme,
provided one of the first demonstrations of local evolutionary change under different selection pressures and
endowed us with an archive of soil and hay samples that have been used to track the history of atmospheric pollution, including nuclear fallout.
Bibliography
Rothamsted Research: Classical Experiments
Biodiversity
Ecological experiments
Grasslands
Document 4:::
Plants depend on epigenetic processes for proper function. Epigenetics is defined as "the study of changes in gene function that are mitotically and/or meiotically heritable and that do not entail a change in DNA sequence" (Wu et al. 2001). The area of study examines protein interactions with DNA and its associated components, including histones and various other modifications such as methylation, which alter the rate or target of transcription. Epi-alleles and epi-mutants, much like their genetic counterparts, describe changes in phenotypes due to epigenetic mechanisms. Epigenetics in plants has attracted scientific enthusiasm because of its importance in agriculture.
Background and history
In the past, macroscopic observations on plants led to basic understandings of how plants respond to their environments and grow. While these investigations could somewhat correlate cause and effect as a plant develops, they could not truly explain the mechanisms at work without inspection at the molecular level.
Certain studies provided simplistic models that laid the groundwork for further exploration and eventual explanation through epigenetics. In 1918, Gassner published findings noting that a cold phase is necessary for proper plant growth. Meanwhile, Garner and Allard examined the importance of the duration of light exposure to plant growth in 1920. Gassner's work would shape the conceptualization of vernalization, which involves epigenetic changes in plants after a period of cold that lead to the development of flowering (Heo and Sung et al. 2011). In a similar manner, Garner and Allard's efforts would foster an awareness of photoperiodism, which involves epigenetic modifications, dependent on the duration of nighttime, that enable flowering (Sun et al. 2014). These rudimentary understandings set a precedent for later molecular evaluation and, eventually, a more complete view of how plants operate.
Modern epigenetic work depends heavily on bioinformatics to gather large quant
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A comet striking the Earth may have caused a mass extinction; this would have decreased sunlight, which would have affected what plant process, reducing food?
A. photosynthesis
B. glycolysis
C. fertilization
D. decomposition
Answer:
|