| id (string, length 6–15) | question_type (string, 1 class) | question (string, length 15–683) | choices (list, length 4) | answer (string, 5 classes) | explanation (string, 481 classes) | prompt (string, length 1.75k–10.9k) |
---|---|---|---|---|---|---|
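Each row below follows the schema above. As a minimal sketch of how such a dataset might be loaded and inspected, assuming it is hosted as a Hugging Face dataset (the repository path `org/sciq-rag-mcq` is a hypothetical placeholder, not the actual dataset name):

```python
# Illustrative sketch only: load a dataset with the schema shown above and
# inspect one row. The repository path is a hypothetical placeholder.
from datasets import load_dataset

ds = load_dataset("org/sciq-rag-mcq", split="train")  # hypothetical path

row = ds[0]
print(row["id"])            # e.g. "sciq-2750"
print(row["question"])      # the question text
for letter, choice in zip("ABCD", row["choices"]):   # choices list has length 4
    print(f"{letter}. {choice}")
print("answer:", row["answer"])   # one of up to 5 label classes
print(len(row["prompt"]))         # full prompt string, roughly 1.75k–10.9k characters
```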
sciq-2750
|
multiple_choice
|
How many sperm does it take to fertilize an egg?
|
[
"one",
"two",
"ten",
"five"
] |
A
|
Relevant Documents:
Document 0:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
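The raw-score rule just described (one point per correct answer, minus ¼ point per incorrect answer, nothing for blanks) can be written as a short function. This is an illustrative sketch of the stated rule only; the conversion from raw score to the 200–800 scale is a separate table not given here and is not modeled.

```python
def sat_subject_raw_score(correct: int, incorrect: int, blank: int) -> float:
    """Raw score under the rule described above: +1 per correct answer,
    -1/4 per incorrect answer, 0 for blanks. The scaled 200-800 conversion
    is not modeled here."""
    return correct * 1.0 - incorrect * 0.25 + blank * 0.0

# Example: 60 correct, 12 incorrect, 8 blank out of 80 questions
print(sat_subject_raw_score(60, 12, 8))  # 57.0
```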
The questions covered a broad range of topics in general biology. The more specific questions dealt with ecological concepts (such as population studies and general ecology) on the E test and with molecular concepts (such as DNA structure, translation, and biochemistry) on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
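A brief worked check of this example, added here for illustration (it is not part of the original excerpt), assuming the expanding gas does work on its surroundings: the first law of thermodynamics gives the answer directly.

```latex
% Worked check (illustrative): adiabatic process, so Q = 0, and the first law gives
%   dU = -P dV   (gas doing work on the surroundings).
% For an ideal gas, dU = n C_V dT, hence n C_V dT = -P dV.
% During expansion dV > 0 and P > 0, so dT < 0: the temperature decreases.
\[
  n C_V\,\mathrm{d}T = -P\,\mathrm{d}V
  \quad\Longrightarrow\quad
  \mathrm{d}V > 0 \;\Rightarrow\; \mathrm{d}T < 0 .
\]
```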
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 3:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover)
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis)
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Jackson)
See also
Glossary of biology
A.P. Bio (TV show)
Document 4:::
Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education.
Structure
A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior.
Prior to the widespread introduction of SBAs into medical education, the typical form of examination was the true-false multiple choice question. During the 2000s, however, educators came to regard SBAs as superior.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How many sperm does it take to fertilize an egg?
A. one
B. two
C. ten
D. five
Answer:
|
|
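Each row's `prompt` field follows the template visible above: retrieved documents under a "Relevant Documents:" header, an instruction line, the question, lettered choices, and a trailing "Answer:" cue. The sketch below is an illustrative reconstruction of that visible template, not the dataset authors' actual generation code; the function name and document placeholders are assumptions.

```python
def build_prompt(question: str, choices: list[str], documents: list[str]) -> str:
    """Assemble a prompt in the template seen in this preview: retrieved
    documents, an instruction line, the question, lettered choices, and a
    trailing 'Answer:' cue for the model to complete."""
    parts = ["Relevant Documents:"]
    for i, doc in enumerate(documents):
        parts.append(f"Document {i}:::")
        parts.append(doc)
    parts.append(
        "The following are multiple choice questions (with answers) about "
        "knowledge and skills in advanced master-level STEM courses."
    )
    parts.append(question)
    for letter, choice in zip("ABCD", choices):
        parts.append(f"{letter}. {choice}")
    parts.append("Answer:")
    return "\n".join(parts)

# Example using the first row shown above (document texts abbreviated)
print(build_prompt(
    "How many sperm does it take to fertilize an egg?",
    ["one", "two", "ten", "five"],
    ["<document 0 text>", "<document 1 text>"],
))
```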
sciq-4357
|
multiple_choice
|
What is the medical term for a condition caused by abnormalities, such as mutations, in your genes or chromosomes?
|
[
"genetic disorder",
"mutations disorder",
"radiation disorder",
"nervous disorder"
] |
A
|
Relevant Documents:
Document 0:::
Medical genetics is the branch of medicine that involves the diagnosis and management of hereditary disorders. Medical genetics differs from human genetics in that human genetics is a field of scientific research that may or may not apply to medicine, while medical genetics refers to the application of genetics to medical care. For example, research on the causes and inheritance of genetic disorders would be considered within both human genetics and medical genetics, while the diagnosis, management, and counselling people with genetic disorders would be considered part of medical genetics.
In contrast, the study of typically non-medical phenotypes such as the genetics of eye color would be considered part of human genetics, but not necessarily relevant to medical genetics (except in situations such as albinism). Genetic medicine is a newer term for medical genetics and incorporates areas such as gene therapy, personalized medicine, and the rapidly emerging new medical specialty, predictive medicine.
Scope
Medical genetics encompasses many different areas, including clinical practice of physicians, genetic counselors, and nutritionists, clinical diagnostic laboratory activities, and research into the causes and inheritance of genetic disorders. Examples of conditions that fall within the scope of medical genetics include birth defects and dysmorphology, intellectual disabilities, autism, mitochondrial disorders, skeletal dysplasia, connective tissue disorders, cancer genetics, and prenatal diagnosis. Medical genetics is increasingly becoming relevant to many common diseases. Overlaps with other medical specialties are beginning to emerge, as recent advances in genetics are revealing etiologies for morphologic, endocrine, cardiovascular, pulmonary, ophthalmologic, renal, psychiatric, and dermatologic conditions. The medical genetics community is increasingly involved with individuals who have undertaken elective genetic and genomic testing.
Subspecialties
In som
Document 1:::
In biology, and especially in genetics, a mutant is an organism or a new genetic character arising or resulting from an instance of mutation, which is generally an alteration of the DNA sequence of the genome or chromosome of an organism. It is a characteristic that would not be observed naturally in a specimen. The term mutant is also applied to a virus with an alteration in its nucleotide sequence whose genome is in the nuclear genome. The natural occurrence of genetic mutations is integral to the process of evolution. The study of mutants is an integral part of biology; by understanding the effect that a mutation in a gene has, it is possible to establish the normal function of that gene.
Mutants arise by mutation
Mutants arise by mutations occurring in pre-existing genomes as a result of errors of DNA replication or errors of DNA repair. Errors of replication often involve translesion synthesis by a DNA polymerase when it encounters and bypasses a damaged base in the template strand. A DNA damage is an abnormal chemical structure in DNA, such as a strand break or an oxidized base, whereas a mutation, by contrast, is a change in the sequence of standard base pairs. Errors of repair occur when repair processes inaccurately replace a damaged DNA sequence. The DNA repair process microhomology-mediated end joining is particularly error-prone.
Etymology
Although not all mutations have a noticeable phenotypic effect, the common usage of the word "mutant" is generally a pejorative term, only used for genetically or phenotypically noticeable mutations. Previously, people used the word "sport" (related to spurt) to refer to abnormal specimens. The scientific usage is broader, referring to any organism differing from the wild type. The word finds its origin in the Latin term mūtant- (stem of mūtāns), which means "to change".
Mutants should not be confused with organisms born with developmental abnormalities, which are caused by errors during morphogenesis. In a devel
Document 2:::
Disorders of sex development (DSDs), also known as differences in sex development, diverse sex development and variations in sex characteristics (VSC), are congenital conditions affecting the reproductive system, in which development of chromosomal, gonadal, or anatomical sex is atypical.
DSDs are subdivided into groups in which the labels generally emphasize the karyotype's role in diagnosis: 46,XX; 46,XY; sex chromosome; XX, sex reversal; ovotesticular disorder; and XY, sex reversal.
Overview
DSDs are medical conditions encompassing any problem noted at birth where the genitalia are atypical in relation to the chromosomes or gonads. There are several types of DSDs and their effect on the external and internal reproductive organs varies greatly.
A frequently-used social and medical adjective for people with DSDs is "intersex". Urologists were concerned that terms like intersex, hermaphrodite, and pseudohermaphrodite were confusing and pejorative. This led to the Chicago Consensus, recommending a new terminology based on the umbrella term disorders of sex differentiation.
DSDs are divided into following categories, emphasizing the karyotype's role in diagnosis:
46,XX DSD: Genetic Female Sex Chromosomes. Mainly virilized females as a result of congenital adrenal hyperplasia (CAH) and girls with aberrant ovarian development.
46,XY DSD: Genetic Male Sex Chromosomes. Individuals with abnormal testicular differentiation, defects in testosterone biosynthesis, and impaired testosterone action.
Sex chromosome DSD: patients with sex chromosome aneuploidy or mosaic sex karyotypes. This includes patients with Turner Syndrome (45,X or 45,X0) and Klinefelter Syndrome (47,XXY) even though they do not generally present with atypical genitals.
XX, Sex reversal: consist of two groups of patients with male phenotypes, the first with translocated SRY and the second with no SRY gene.
Ovotesticular disorder: patients having both ovarian and testicular tissue. In some cases the
Document 3:::
Genetics (from Ancient Greek genetikós, "genitive", and that from génesis, "origin"), a discipline of biology, is the science of heredity and variation in living organisms.
Articles (arranged alphabetically) related to genetics include:
Document 4:::
Medical physics deals with the application of the concepts and methods of physics to the prevention, diagnosis and treatment of human diseases with a specific goal of improving human health and well-being. Since 2008, medical physics has been included as a health profession according to International Standard Classification of Occupation of the International Labour Organization.
Although medical physics may sometimes also be referred to as biomedical physics, medical biophysics, applied physics in medicine, physics applications in medical science, radiological physics or hospital radio-physics, a "medical physicist" is specifically a health professional with specialist education and training in the concepts and techniques of applying physics in medicine and competent to practice independently in one or more of the subfields of medical physics. Traditionally, medical physicists are found in the following healthcare specialties: radiation oncology (also known as radiotherapy or radiation therapy), diagnostic and interventional radiology (also known as medical imaging), nuclear medicine, and radiation protection. Medical physics of radiation therapy can involve work such as dosimetry, linac quality assurance, and brachytherapy. Medical physics of diagnostic and interventional radiology involves medical imaging techniques such as magnetic resonance imaging, ultrasound, computed tomography and x-ray. Nuclear medicine will include positron emission tomography and radionuclide therapy. However one can find Medical Physicists in many other areas such as physiological monitoring, audiology, neurology, neurophysiology, cardiology and others.
Medical physics departments may be found in institutions such as universities, hospitals, and laboratories. University departments are of two types. The first type are mainly concerned with preparing students for a career as a hospital Medical Physicist and research focuses on improving the practice of the profession. A second type (in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the medical term for a condition caused by abnormalities, such as mutations, in your genes or chromosomes?
A. genetic disorder
B. mutations disorder
C. radiation disorder
D. nervous disorder
Answer:
|
|
sciq-515
|
multiple_choice
|
What are the two main types of sedimentary rocks?
|
[
"clastic and chemical",
"sandstone and shale",
"shale and limestone",
"basalt and dolomite"
] |
A
|
Relevant Documents:
Document 0:::
In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects.
Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting.
Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete.
Study
Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the
Document 1:::
The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata. That is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events.
Correlating the rock record
At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods—for the particular geographic region or regions. The geologic record is in no one place entirely complete for where geologic forces one age provide a low-lying region accumulating deposits much like a layer cake, in the next may have uplifted the region, and the same area is instead one that is weathering and being torn down by chemistry, wind, temperature, and water. This is to say that in a given location, the geologic record can be and is quite often interrupted as the ancient local environment was converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go deep thoroughly support the law of superposition.
However using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale
Document 2:::
Mineral tests are methods that help identify the type of a mineral. They are used widely in mineralogy, hydrocarbon exploration and general mapping. There are over 4,000 known types of minerals, each with different sub-classes. Elements make minerals and minerals make rocks, so testing minerals in the lab and in the field is essential to understanding the history of a rock, including its zonation, metamorphic history, the processes involved and the other minerals present.
The following tests are used on specimen and thin sections through polarizing microscope.
Color
Color of the mineral. This is not mineral specific. For example, quartz can be almost any color or shape and can occur within many rock types.
Streak
Color of the mineral's powder. This can be found by rubbing the mineral onto a concrete surface. This is more accurate but not always mineral specific.
Lustre
This is the way light reflects from the mineral's surface. A mineral can be metallic (shiny) or non-metallic (not shiny).
Transparency
The way light travels through minerals. The mineral can be transparent (clear), translucent (cloudy) or opaque (none).
Specific gravity
Ratio between the weight of the mineral relative to an equal volume of water.
Mineral habitat
The shape of the crystal and habitat.
Magnetism
Magnetic or nonmagnetic. Can be tested by using a magnet or a compass. This does not apply to all iron minerals (for example, pyrite).
Cleavage
Number, behaviour, size and way cracks fracture in the mineral.
UV fluorescence
Many minerals glow when put under a UV light.
Radioactivity
Is the mineral radioactive or non-radioactive? This is measured by a Geiger counter.
Taste
This is not recommended. Is the mineral salty, bitter or does it have no taste?
Bite Test
This is not recommended. This involves biting a mineral to see if it is generally soft or hard. This was used in early gold exploration to tell the difference between pyrite (fool's gold, hard) and gold (soft).
Hardness
The Mohs Hardn
Document 3:::
Microcline (KAlSi3O8) is an important igneous rock-forming tectosilicate mineral. It is a potassium-rich alkali feldspar. Microcline typically contains minor amounts of sodium. It is common in granite and pegmatites. Microcline forms during slow cooling of orthoclase; it is more stable at lower temperatures than orthoclase. Sanidine is a polymorph of alkali feldspar stable at yet higher temperature. Microcline may be clear, white, pale-yellow, brick-red, or green; it is generally characterized by cross-hatch twinning that forms as a result of the transformation of monoclinic orthoclase into triclinic microcline.
The chemical compound name is potassium aluminium silicate, and it is known as E number reference E555.
Geology
Microcline may be chemically the same as monoclinic orthoclase, but because it belongs to the triclinic crystal system, the prism angle is slightly less than right angles; hence the name "microcline" from the Greek "small slope." It is a fully ordered triclinic modification of potassium feldspar and is dimorphous with orthoclase. Microcline is identical to orthoclase in many physical properties, and can be distinguished by x-ray or optical examination. When viewed under a polarizing microscope, microcline exhibits a minute multiple twinning which forms a grating-like structure that is unmistakable.
Perthite is either microcline or orthoclase with thin lamellae of exsolved albite.
Amazon stone, or amazonite, is a green variety of microcline. It is not found anywhere in the Amazon Basin, however. The Spanish explorers who named it apparently confused it with another green mineral from that region.
The largest documented single crystals of microcline were found in Devils Hole Beryl Mine, Colorado, US and measured ~50x36x14 m. This could be one of the largest crystals of any material found so far.
Microcline is commonly used for the manufacturing of porcelain.
As food additive
The chemical compound name is potassium aluminium silicate, and it
Document 4:::
Clathrate hydrates, or gas hydrates, clathrates, or hydrates, are crystalline water-based solids physically resembling ice, in which small non-polar molecules (typically gases) or polar molecules with large hydrophobic moieties are trapped inside "cages" of hydrogen bonded, frozen water molecules. In other words, clathrate hydrates are clathrate compounds in which the host molecule is water and the guest molecule is typically a gas or liquid. Without the support of the trapped molecules, the lattice structure of hydrate clathrates would collapse into conventional ice crystal structure or liquid water. Most low molecular weight gases, as well as some higher hydrocarbons and freons, will form hydrates at suitable temperatures and pressures. Clathrate hydrates are not officially chemical compounds, as the enclathrated guest molecules are never bonded to the lattice. The formation and decomposition of clathrate hydrates are first order phase transitions, not chemical reactions. Their detailed formation and decomposition mechanisms on a molecular level are still not well understood.
Clathrate hydrates were first documented in 1810 by Sir Humphry Davy who found that water was a primary component of what was earlier thought to be solidified chlorine.
Clathrates have been found to occur naturally in large quantities. Around 6.4 trillion tonnes of methane is trapped in deposits of methane clathrate on the deep ocean floor. Such deposits can be found on the Norwegian continental shelf in the northern headwall flank of the Storegga Slide. Clathrates can also exist as permafrost, as at the Mallik gas hydrate site in the Mackenzie Delta of northwestern Canadian Arctic. These natural gas hydrates are seen as a potentially vast energy resource and several countries have dedicated national programs to develop this energy resource. Clathrate hydrate has also been of great interest as a technology enabler for many applications like seawater desalina
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are the two main types of sedimentary rocks?
A. clastic and chemical
B. sandstone and shale
C. shale and limestone
D. basalt and dolomite
Answer:
|
|
sciq-1461
|
multiple_choice
|
What are the three main types of rocks?
|
[
"limestone , igneous and metamorphic",
"plutonic, igneous and metmorphic",
"crystalline , igneous and metamorphic",
"sedimentary, igneous and metamorphic"
] |
D
|
Relevant Documents:
Document 0:::
In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects.
Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting.
Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete.
Study
Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the
Document 1:::
Mineral tests are methods that help identify the type of a mineral. They are used widely in mineralogy, hydrocarbon exploration and general mapping. There are over 4,000 known types of minerals, each with different sub-classes. Elements make minerals and minerals make rocks, so testing minerals in the lab and in the field is essential to understanding the history of a rock, including its zonation, metamorphic history, the processes involved and the other minerals present.
The following tests are used on specimen and thin sections through polarizing microscope.
Color
Color of the mineral. This is not mineral specific. For example, quartz can be almost any color or shape and can occur within many rock types.
Streak
Color of the mineral's powder. This can be found by rubbing the mineral onto a concrete surface. This is more accurate but not always mineral specific.
Lustre
This is the way light reflects from the mineral's surface. A mineral can be metallic (shiny) or non-metallic (not shiny).
Transparency
The way light travels through minerals. The mineral can be transparent (clear), translucent (cloudy) or opaque (none).
Specific gravity
Ratio between the weight of the mineral relative to an equal volume of water.
Mineral habitat
The shape of the crystal and habitat.
Magnetism
Magnetic or nonmagnetic. Can be tested by using a magnet or a compass. This does not apply to all iron minerals (for example, pyrite).
Cleavage
Number, behaviour, size and way cracks fracture in the mineral.
UV fluorescence
Many minerals glow when put under a UV light.
Radioactivity
Is the mineral radioactive or non-radioactive? This is measured by a Geiger counter.
Taste
This is not recommended. Is the mineral salty, bitter or does it have no taste?
Bite Test
This is not recommended. This involves biting a mineral to see if it is generally soft or hard. This was used in early gold exploration to tell the difference between pyrite (fool's gold, hard) and gold (soft).
Hardness
The Mohs Hardn
Document 2:::
Microcline (KAlSi3O8) is an important igneous rock-forming tectosilicate mineral. It is a potassium-rich alkali feldspar. Microcline typically contains minor amounts of sodium. It is common in granite and pegmatites. Microcline forms during slow cooling of orthoclase; it is more stable at lower temperatures than orthoclase. Sanidine is a polymorph of alkali feldspar stable at yet higher temperature. Microcline may be clear, white, pale-yellow, brick-red, or green; it is generally characterized by cross-hatch twinning that forms as a result of the transformation of monoclinic orthoclase into triclinic microcline.
The chemical compound name is potassium aluminium silicate, and it is known as E number reference E555.
Geology
Microcline may be chemically the same as monoclinic orthoclase, but because it belongs to the triclinic crystal system, the prism angle is slightly less than right angles; hence the name "microcline" from the Greek "small slope." It is a fully ordered triclinic modification of potassium feldspar and is dimorphous with orthoclase. Microcline is identical to orthoclase in many physical properties, and can be distinguished by x-ray or optical examination. When viewed under a polarizing microscope, microcline exhibits a minute multiple twinning which forms a grating-like structure that is unmistakable.
Perthite is either microcline or orthoclase with thin lamellae of exsolved albite.
Amazon stone, or amazonite, is a green variety of microcline. It is not found anywhere in the Amazon Basin, however. The Spanish explorers who named it apparently confused it with another green mineral from that region.
The largest documented single crystals of microcline were found in Devils Hole Beryl Mine, Colorado, US and measured ~50x36x14 m. This could be one of the largest crystals of any material found so far.
Microcline is commonly used for the manufacturing of porcelain.
As food additive
The chemical compound name is potassium aluminium silicate, and it
Document 3:::
Ringwoodite is a high-pressure phase of Mg2SiO4 (magnesium silicate) formed at the high temperatures and pressures of the Earth's mantle. It may also contain iron and hydrogen. It is polymorphous with the olivine phase forsterite (a magnesium iron silicate).
Ringwoodite is notable for being able to contain hydroxide ions (oxygen and hydrogen atoms bound together) within its structure. In this case two hydroxide ions usually take the place of a magnesium ion and two oxide ions.
Combined with evidence of its occurrence deep in the Earth's mantle, this suggests that there is from one to three times the world ocean's equivalent of water in the mantle transition zone from 410 to 660 km deep.
This mineral was first identified in the Tenham meteorite in 1969, and is inferred to be present in large quantities in the Earth's mantle.
Olivine, wadsleyite, and ringwoodite are polymorphs found in the upper mantle of the earth. At greater depths, other minerals, including some with the perovskite structure, are stable. The properties of these minerals determine many of the properties of the mantle.
Ringwoodite was named after the Australian earth scientist Ted Ringwood (1930–1993), who studied polymorphic phase transitions in the common mantle minerals olivine and pyroxene at pressures equivalent to depths as great as about 600 km.
Characteristics
Ringwoodite is polymorphous with forsterite, Mg2SiO4, and has a spinel structure. Spinel group minerals crystallize in the isometric system with an octahedral habit. Olivine is most abundant in the upper mantle; the olivine polymorphs wadsleyite and ringwoodite are thought to dominate the transition zone of the mantle, a zone present from about 410 to 660 km depth.
Ringwoodite is thought to be the most abundant mineral phase in the lower part of Earth's transition zone. The physical and chemical property of this mineral partly determine properties of the mantle at those depths. The pressure r
Document 4:::
Aluminium silicate (or aluminum silicate) is a name commonly applied to chemical compounds which are derived from aluminium oxide, Al2O3 and silicon dioxide, SiO2 which may be anhydrous or hydrated, naturally occurring as minerals or synthetic. Their chemical formulae are often expressed as xAl2O3·ySiO2·zH2O. It is known as E number E559.
Main representatives
Andalusite, kyanite, and sillimanite are the principal aluminium silicate minerals. The triple point of the three polymorphs is located at a temperature of and a pressure of . These three minerals are commonly used as index minerals in metamorphic rocks.
Al2SiO5, (Al2O3·SiO2), which occurs naturally as the minerals andalusite, kyanite and sillimanite which have distinct crystal structures.
Al2Si2O7, (Al2O3·2SiO2), called metakaolinite, formed from kaolin by heating at .
Al6Si2O13, (3Al2O3·2SiO2), the mineral mullite, the only thermodynamically stable intermediate phase in the Al2O3-SiO2 system at atmospheric pressure. This is also called '3:2 mullite' to distinguish it from 2Al2O3·SiO2, Al4SiO8 '2:1 mullite'.
2Al2O3·SiO2, Al4SiO8 '2:1 mullite'.
The above list mentions ternary materials (Si-Al-O). Kaolinite is a quaternary material (Si-Al-O-H). Also called aluminium silicate dihydrate, kaolinite occurs naturally as a mineral. Its formula is Al2Si2O5(OH)4, (Al2O3·2SiO2·2H2O).
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are the three main types of rocks?
A. limestone , igneous and metamorphic
B. plutonic, igneous and metmorphic
C. crystalline , igneous and metamorphic
D. sedimentary, igneous and metamorphic
Answer:
|
|
sciq-10855
|
multiple_choice
|
Consumers are organisms that depend on other organisms for what?
|
[
"food",
"reproduction",
"shelter",
"knowledge"
] |
A
|
Relevant Documents:
Document 0:::
Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food.
Classification of consumer types
The standard categorization
Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores; omnivores eat both meat and plants. An extensive classification of consumer categories based on a list of feeding behaviors exists.
The Getz categorization
Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken down plant and animal) matter. It also distinguishes between consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage.
In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology. Thus a bestivore, such as a cat, preys on live animal
Document 1:::
The trophic level of an organism is the position it occupies in a food web. A food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finish with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a food "web". Ecological communities with higher biodiversity form more complex trophic paths.
The word trophic derives from the Greek τροφή (trophē) referring to food or nourishment.
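As a small illustration of the step-counting idea described above (added here; the chain and names are hypothetical examples, not from the source), a simple linear food chain can be represented in code and each organism assigned a trophic level by its distance from the producer:

```python
# Illustrative sketch: a hypothetical linear food chain, with trophic level
# assigned by counting steps from the primary producer (level 1).
food_chain = ["grass", "grasshopper", "frog", "snake", "hawk"]

trophic_levels = {organism: i + 1 for i, organism in enumerate(food_chain)}

for organism, level in trophic_levels.items():
    print(f"{organism}: trophic level {level}")
# grass: 1 (producer), grasshopper: 2 (herbivore), frog: 3, snake: 4, hawk: 5 (apex)
```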
History
The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman).
Overview
The three basic ways in which organisms get food are as producers, consumers, and decomposers.
Producers (autotrophs) are typically plants or algae. Plants and algae do not usually eat other organisms, but pull nutrients from the soil or the ocean and manufacture their own food using photosynthesis. For this reason, they are called primary producers. In this way, it is energy from the sun that usually powers the base of the food chain. An exception occurs in deep-sea hydrothermal ecosystems, where there is no sunlight. Here primary producers manufacture food through a process called chemosynthesis.
Consumers (heterotrophs) are species that cannot manufacture their own food and need to consume other organisms. Animals that eat primary producers (like plants) are called herbivores. Animals that eat other animals are called carnivores, and animals that eat both plants and other animals are called omnivores.
Decomposers (detritivores) break down dead plant and animal material and wastes and release it again as energy and nutrients into
Document 2:::
A nutrient is a substance used by an organism to survive, grow, and reproduce. The requirement for dietary nutrient intake applies to animals, plants, fungi, and protists. Nutrients can be incorporated into cells for metabolic purposes or excreted by cells to create non-cellular structures, such as hair, scales, feathers, or exoskeletons. Some nutrients can be metabolically converted to smaller molecules in the process of releasing energy, such as for carbohydrates, lipids, proteins, and fermentation products (ethanol or vinegar), leading to end-products of water and carbon dioxide. All organisms require water. Essential nutrients for animals are the energy sources, some of the amino acids that are combined to create proteins, a subset of fatty acids, vitamins and certain minerals. Plants require more diverse minerals absorbed through roots, plus carbon dioxide and oxygen absorbed through leaves. Fungi live on dead or living organic matter and meet nutrient needs from their host.
Different types of organisms have different essential nutrients. Ascorbic acid (vitamin C) is essential, meaning it must be consumed in sufficient amounts, to humans and some other animal species, but some animals and plants are able to synthesize it. Nutrients may be organic or inorganic: organic compounds include most compounds containing carbon, while all other chemicals are inorganic. Inorganic nutrients include nutrients such as iron, selenium, and zinc, while organic nutrients include, among many others, energy-providing compounds and vitamins.
A classification used primarily to describe nutrient needs of animals divides nutrients into macronutrients and micronutrients. Consumed in relatively large amounts (grams or ounces), macronutrients (carbohydrates, fats, proteins, water) are primarily used to generate energy or to incorporate into tissues for growth and repair. Micronutrients are needed in smaller amounts (milligrams or micrograms); they have subtle biochemical and physiologi
Document 3:::
The soil food web is the community of organisms living all or part of their lives in the soil. It describes a complex living system in the soil and how it interacts with the environment, plants, and animals.
Food webs describe the transfer of energy between species in an ecosystem. While a food chain examines one, linear, energy pathway through an ecosystem, a food web is more complex and illustrates all of the potential pathways. Much of this transferred energy comes from the sun. Plants use the sun’s energy to convert inorganic compounds into energy-rich, organic compounds, turning carbon dioxide and minerals into plant material by photosynthesis. Plant flowers exude energy-rich nectar above ground and plant roots exude acids, sugars, and ectoenzymes into the rhizosphere, adjusting the pH and feeding the food web underground.
Plants are called autotrophs because they make their own energy; they are also called producers because they produce energy available for other organisms to eat. Heterotrophs are consumers that cannot make their own food. In order to obtain energy they eat plants or other heterotrophs.
Above ground food webs
In above ground food webs, energy moves from producers (plants) to primary consumers (herbivores) and then to secondary consumers (predators). The phrase, trophic level, refers to the different levels or steps in the energy pathway. In other words, the producers, consumers, and decomposers are the main trophic levels. This chain of energy transferring from one species to another can continue several more times, but eventually ends. At the end of the food chain, decomposers such as bacteria and fungi break down dead plant and animal material into simple nutrients.
Methodology
The nature of soil makes direct observation of food webs difficult. Since soil organisms range in size from less than 0.1 mm (nematodes) to greater than 2 mm (earthworms) there are many different ways to extract them. Soil samples are often taken using a metal
Document 4:::
Heterotrophic nutrition is a mode of nutrition in which organisms depend upon other organisms for food to survive. Unlike green plants, they cannot make their own food. Heterotrophic organisms have to take in all the organic substances they need to survive.
All animals, certain types of fungi, and non-photosynthesizing plants are heterotrophic. In contrast, green plants, red algae, brown algae, and cyanobacteria are all autotrophs, which use photosynthesis to produce their own food from sunlight. Some fungi may be saprotrophic, meaning they will extracellularly secrete enzymes onto their food to be broken down into smaller, soluble molecules which can diffuse back into the fungus.
Description
All eukaryotes except for green plants and algae are unable to manufacture their own food: They obtain food from other organisms. This mode of nutrition is also known as heterotrophic nutrition.
All heterotrophs (except blood and gut parasites) have to convert solid food into soluble compounds capable of being absorbed (digestion). The soluble products of digestion are then broken down to release energy (respiration). All heterotrophs depend on autotrophs for their nutrition. Heterotrophic organisms have only four types of nutrition.
Footnotes
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Consumers are organisms that depend on other organisms for what?
A. food
B. reproduction
C. shelter
D. knowledge
Answer:
|
|
sciq-4637
|
multiple_choice
|
What is the term for materials that have been left behind by organisms that once lived?
|
[
"skulls",
"detritis",
"bones",
"fossils"
] |
D
|
Relevant Documents:
Document 0:::
A zoological specimen is an animal or part of an animal preserved for scientific use.
Various uses are: to verify the identity of a species, to allow study, and to increase public knowledge of zoology.
Zoological specimens are extremely diverse. Examples are bird and mammal study skins, mounted specimens, skeletal material, casts, pinned insects, dried material, animals preserved in liquid preservatives, and microscope slides.
Natural history museums are repositories of zoological specimens.
Study skins
Bird and mammal specimens are conserved as dry study skins, a form of taxidermy. The skin is removed from the animal's carcass, treated with absorbents, and filled with cotton or polyester batting (In the past plant fibres or sawdust were used). Bird specimens have a long, thin, wooden dowel wrapped in batting at their center. The dowel is often intentionally longer than the bird's body and exits at the animal's vent. This exposed dowel provides a place to handle the bird without disturbing the feathers. Mammal study skins do not normally utilize wooden dowels, instead preparators use wire to support the legs and tail of mammals. Labels are attached to a leg of the specimen with thread or string. Heat and chemicals are sometimes used to aid the drying of study skins.
Skeletal Preparations (Osteology)
Osteological collections consist of cleaned, complete and partial skeletons, crania of Vertebrates, mainly birds and mammals. They are used in studies of comparative anatomy and to identify bones from archaeological sites. Human bones are used in medical and forensic studies.
Molluscs
In museum collections it is common for the dry material to greatly exceed the amount of material that is preserved in alcohol. The shells minus their soft parts are kept in card trays within drawers or in glass tubes, often as lots (a lot is a collection of a single species taken from a single locality on a single occasion). Shell collections sometimes suffer from Byne's disease which also
Document 1:::
Biostratinomy is the study of the processes that take place after an organism dies but before its final burial. It is considered to be a subsection of the science of taphonomy, along with necrology (the study of the death of an organism) and diagenesis (the changes that take place after final burial). These processes are largely destructive, and include physical, chemical and biological effects:
Physical effects non-exhaustively include transport, breakage and exhumation.
Chemical effects include early changes in mineralogy and oxidation.
Biological effects include decay, scavenging, bioturbation, encrustation and boring.
For the vast majority of organisms, biostratinomic destruction is total. However, if at least a few remnants of an organism make it to final burial, a fossil may eventually be formed unless destruction is completed by diagenesis. As the processes of biostratinomy are often dominated by sedimentological factors, analysis of the biostratinomy of a fossil can reveal important features about the physical environment it once lived in. The boundaries between the three disciplines within taphonomy are partly arbitrary. In particular, the role of microbes in sealing and preserving organisms, for example in a process called autolithification, is now recognised to be a very important and early event in the preservation of many exceptional fossils, often taking place before burial. Such mineralogical changes might equally be considered to be biostratinomic as diagenetic.
A school of investigation called aktuopaläontologie, subsisting largely in Germany, attempts to investigate biostratinomic effects by experimentation and observation on extant organisms. William Schäfer's book "Ecology and palaeoecology of marine environments" is a classic product of this sort of investigation. More recently, D.E.G. Briggs and colleagues have made detailed studies of decay with the prime aim of understanding the profound halt to these processes that is required by exce
Document 2:::
Thanatocoenosis (from Greek language thanatos - death and koinos - common) are all the embedded fossils at a single discovery site. This site may be referred to as a "death assemblage". Such groupings are composed of fossils of organisms which may not have been associated during life, often originating from different habitats. Examples include marine fossils having been brought together by a water current or animal bones having been deposited by a predator. A site containing thanatocoenosis elements can also lose clarity in its faunal history by more recent intruding factors such as burrowing microfauna or stratigraphic disturbances born from anthropogenic methods.
This term differs from a related term, biocoenosis, which refers to an assemblage in which all organisms within the community interacted and lived together in the same habitat while alive. A biocoenosis can lead to a thanatocoenosis if disrupted significantly enough to have its dead/fossilized matter scattered. A death community/thanatocoenosis is developed by multiple taphonomic processes (those being ones relating to the different ways in which organismal remains pass through strata and are decomposed and preserved) that are generally categorized into two groups: biostratinomy and diagenesis. As a whole, thanatocoenoses are divided into two categories as well: autochthonous and allochthonous.
Death assemblages and thanatocoenoses can provide insight into the process of early-stage fossilization, as well as information about the species within a given ecosystem. The study of taphonomy can aid in furthering the understanding of the ecological past of species and their fossil records if used in conjunction with research on death assemblages from modern ecosystems.
History
The term "thanatocoenosis" was originally created by Erich Wasmund in 1926, and he was the first to define both the similarities and contrasts between these death communities and biocoenoses. Due to confusion between some distinctions
Document 3:::
Paleo-inspiration is a paradigm shift that leads scientists and designers to draw inspiration from ancient materials (from art, archaeology, natural history or paleo-environments) to develop new systems or processes, particularly with a view to sustainability.
Paleo-inspiration has already contributed to numerous applications in fields as varied as green chemistry, the development of new artist materials, composite materials, microelectronics, and construction materials.
Semantics and definitions
While this type of application has been known for a long time, the concept itself was coined by teams from the French National Centre for Scientific Research, the Massachusetts Institute of Technology and the Bern University of Applied Sciences from the term Bioinspiration. They published the concept in a seminal paper published online in 2017 by the journal Angewandte Chemie.
Different names have been used to designate the corresponding systems, in particular: paleo-inspired, antiqua-inspired, antiquity-inspired or archaeomimetic. The use of these different names illustrates the extremely large time gap between the sources of inspiration, from millions of years ago when considering palaeontological systems and fossils, to much more recent archaeological or artistic material systems.
Properties sought
Distinct physico-chemical and mechanical properties are sought.
They may concern intrinsic properties of the paleo-inspired materials:
durability (materials found in certain contexts, having resisted alteration in these environments) and resistance to corrosion or alteration
electronic or magnetic properties
optical properties (especially from pigments or dyes, materials used for ceramic manufacture)
They can also concern processes:
processes with low energy or resource consumption, with a view to chemical processes favouring sustainable development
soft chemistry processes
The paleo-inspired approach
This approach combines several key stages.
Observation: T
Document 4:::
In biology, a biofact is dead material of a once-living organism.
In 1943, the protozoologist Bruno M. Klein of Vienna (1891–1968) coined the term in his article Biofakt und Artefakt in the microscopy journal Mikrokosmos, though at that time it was not adopted by the scientific community. Klein's concept of biofact stressed the dead materials produced by living organisms as sheaths, such as shells.
The word biofact is now widely used in the zoo/aquarium world, but was first used by Lisbeth Bornhofft in 1993 in the Education Department at the New England Aquarium, Boston, to refer to preserved items such as animal bones, skins, molts and eggs. The Accreditation Standards and Related Policies of the Association of Zoos and Aquariums states that biofacts can be useful education tools, and are preferable to live animals because of potential ethical considerations.
See also
Biofact (archaeology)
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for materials that have been left behind by organisms that once lived?
A. skulls
B. detritis
C. bones
D. fossils
Answer:
|
|
sciq-1931
|
multiple_choice
|
Radiotherapy is effective against cancer because cancer cells reproduce rapidly and, consequently, are more sensitive to this?
|
[
"UV light",
"radiation",
"separation",
"destruction"
] |
B
|
Relavent Documents:
Document 0:::
Radiosensitivity is the relative susceptibility of cells, tissues, organs or organisms to the harmful effect of ionizing radiation.
Cells types affected
Cells are least sensitive when in the S phase, then the G1 phase, then the G2 phase, and most sensitive in the M phase of the cell cycle. This is described by the 'law of Bergonié and Tribondeau', formulated in 1906: X-rays are more effective on cells which have a greater reproductive activity.
From their observations, they concluded that quickly dividing tumor cells are generally more sensitive than the majority of body cells. This is not always true: tumor cells can be hypoxic and therefore less sensitive to X-rays, because most of the X-ray damage is mediated by free radicals whose production depends on oxygen.
It has meanwhile been shown that the most sensitive cells are those that are undifferentiated, well nourished, dividing quickly and highly active metabolically. Amongst the body cells, the most sensitive are spermatogonia, erythroblasts, epidermal stem cells, and gastrointestinal stem cells. The least sensitive are nerve cells and muscle fibers.
Very sensitive cells are also oocytes and lymphocytes, although they are resting cells and do not meet the criteria described above. The reasons for their sensitivity are not clear.
There also appears to be a genetic basis for the varied vulnerability of cells to ionizing radiation. This has been demonstrated across several cancer types and in normal tissues.
Cell damage classification
The damage to the cell can be lethal (the cell dies) or sublethal (the cell can repair itself). Cell damage can ultimately lead to health effects which can be classified as either Tissue Reactions or Stochastic Effects according to the International Commission on Radiological Protection.
Tissue reactions
Tissue reactions have a threshold of irradiation under which they do not appear and above which they typically appear. Fractionation of dose, dose rate, the application of antioxidan
Document 1:::
absorbed dose
Electromagnetic radiation
equivalent dose
hormesis
Ionizing radiation
Louis Harold Gray (British physicist)
rad (unit)
radar
radar astronomy
radar cross section
radar detector
radar gun
radar jamming
(radar reflector) corner reflector
radar warning receiver
(Radarange) microwave oven
radiance
(radiant: see) meteor shower
radiation
Radiation absorption
Radiation acne
Radiation angle
radiant barrier
(radiation belt: see) Van Allen radiation belt
Radiation belt electron
Radiation belt model
Radiation Belt Storm Probes
radiation budget
Radiation burn
Radiation cancer
(radiation contamination) radioactive contamination
Radiation contingency
Radiation damage
Radiation damping
Radiation-dominated era
Radiation dose reconstruction
Radiation dosimeter
Radiation effect
radiant energy
Radiation enteropathy
(radiation exposure) radioactive contamination
Radiation flux
(radiation gauge: see) gauge fixing
radiation hardening
(radiant heat) thermal radiation
radiant heating
radiant intensity
radiation hormesis
radiation impedance
radiation implosion
Radiation-induced lung injury
Radiation Laboratory
radiation length
radiation mode
radiation oncologist
radiation pattern
radiation poisoning (radiation sickness)
radiation pressure
radiation protection (radiation shield) (radiation shielding)
radiation resistance
Radiation Safety Officer
radiation scattering
radiation therapist
radiation therapy (radiotherapy)
(radiation treatment) radiation therapy
(radiation units: see) :Category:Units of radiation dose
(radiation weight factor: see) equivalent dose
radiation zone
radiative cooling
radiative forcing
radiator
radio
(radio amateur: see) amateur radio
(radio antenna) antenna (radio)
radio astronomy
radio beacon
(radio broadcasting: see) broadcasting
radio clock
(radio communications) radio
radio control
radio controlled airplane
radio controlled car
radio-controlled helicopter
radio control
Document 2:::
Dose fractionation effects are utilised in the treatment of cancer with radiation therapy. When the total dose of radiation is divided into several smaller doses over a period of several days, there are fewer toxic effects on healthy cells. This maximizes the effect of radiation on cancer and minimizes the negative side effects. A typical fractionation scheme divides the dose into 30 units delivered every weekday over six weeks.
Background
Experiments in radiation biology have found that as the absorbed dose of radiation increases, the number of cells which survive decreases. They have also found that if the radiation is fractionated into smaller doses, with one or more rest periods in between, fewer cells die. This is because of self-repair mechanisms which repair the damage to DNA and other biomolecules such as proteins. These mechanisms can be overexpressed in cancer cells, so caution should be used when using results for a cancer cell line to make predictions for healthy cells, especially if the cancer cell line is known to be resistant to cytotoxic drugs such as cisplatin. The DNA self-repair processes in some organisms are exceptionally good; for instance, the bacterium Deinococcus radiodurans can tolerate a 15 000 Gy (1.5 MRad) dose.
In the graph to the right, called a cell survival curve, the dose vs. surviving fraction have been drawn for a hypothetical group of cells with and without a rest time for the cells to recover. Other than the recovery time partway through the irradiation, the cells would have been treated identically.
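The excerpt above describes the cell survival curve only qualitatively. Purely as an illustrative sketch (the model choice and parameter values below are assumptions, not taken from the source), the widely used linear-quadratic survival model shows why splitting a dose into fractions, with repair between them, spares more cells:

import math

# Illustrative only: linear-quadratic cell-survival model with hypothetical parameters.
alpha = 0.3   # per Gy (assumed value)
beta = 0.03   # per Gy^2 (assumed value)

def surviving_fraction(dose_per_fraction, n_fractions):
    # Full repair of sublethal damage between fractions is assumed,
    # so the per-fraction survival probabilities multiply.
    per_fraction = math.exp(-(alpha * dose_per_fraction + beta * dose_per_fraction ** 2))
    return per_fraction ** n_fractions

total_dose = 60.0  # Gy
print(surviving_fraction(total_dose, 1))        # one single 60 Gy exposure
print(surviving_fraction(total_dose / 30, 30))  # 30 fractions of 2 Gy (the typical scheme above)

With these placeholder parameters, the fractionated scheme leaves a surviving fraction many orders of magnitude higher than the single-dose exposure, which is the sparing effect the text describes.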
The human body contains many types of cells, and the human can be killed by the loss of a single type of cell in a vital organ. For many short-term radiation deaths due to what is commonly known as radiation sickness (3 to 30 days after exposure), it is the loss of bone marrow cells (which produce blood cells), and the loss of other cells in the wall of the intestines, that is fatal.
Radiation fractionation as cancer treatment
Fractionatio
Document 3:::
A microbeam is a narrow beam of radiation, of micrometer or sub-micrometer dimensions. Together with integrated imaging techniques, microbeams allow precisely defined quantities of damage to be introduced at precisely defined locations. Thus, the microbeam is a tool for investigators to study intra- and inter-cellular mechanisms of damage signal transduction.
A schematic of microbeam operation is shown on the right. Essentially, an automated imaging system locates user-specified targets, and these targets are sequentially irradiated, one by one, with a highly-focused radiation beam. Targets can be single cells, sub-cellular locations, or precise locations in 3D tissues. Key features of a microbeam are throughput, precision, and accuracy. While irradiating targeted regions, the system must guarantee that adjacent locations receive no energy deposition.
History
The first microbeam facilities were developed in the mid-90s. These facilities were a response to challenges in studying radiobiological processes using broadbeam exposures. Microbeams were originally designed to address two main issues:
The belief that the radiation-sensitivity of the nucleus was not uniform, and
The need to be able to hit an individual cell with an exact number (particularly one) of particles for low dose risk assessment.
Additionally, microbeams were seen as ideal vehicles to investigate the mechanisms of radiation response.
Radiation-sensitivity of the cell
At the time it was believed that radiation damage to cells was entirely the result of damage to DNA. Charged particle microbeams could probe the radiation sensitivity of the nucleus, which at the time appeared not to be uniformly sensitive. Experiments performed at microbeam facilities have since shown the existence of a bystander effect. A bystander effect is any biological response to radiation in cells or tissues that did not experience a radiation traversal. These "bystander" cells are neighbors of cells that have experience
Document 4:::
Health physics, also referred to as the science of radiation protection, is the profession devoted to protecting people and their environment from potential radiation hazards, while making it possible to enjoy the beneficial uses of radiation. Health physicists normally require a four-year bachelor’s degree and qualifying experience that demonstrates a professional knowledge of the theory and application of radiation protection principles and closely related sciences. Health physicists principally work at facilities where radionuclides or other sources of ionizing radiation (such as X-ray generators) are used or produced; these include research, industry, education, medical facilities, nuclear power, military, environmental protection, enforcement of government regulations, and decontamination and decommissioning—the combination of education and experience for health physicists depends on the specific field in which the health physicist is engaged.
Sub-specialties
There are many sub-specialties in the field of health physics, including
Ionising radiation instrumentation and measurement
Internal dosimetry and external dosimetry
Radioactive waste management
Radioactive contamination, decontamination and decommissioning
Radiological engineering (shielding, holdup, etc.)
Environmental assessment, radiation monitoring and radon evaluation
Operational radiation protection/health physics
Particle accelerator physics
Radiological emergency response/planning - (e.g., Nuclear Emergency Support Team)
Industrial uses of radioactive material
Medical health physics
Public information and communication involving radioactive materials
Biological effects/radiation biology
Radiation standards
Radiation risk analysis
Nuclear power
Radioactive materials and homeland security
Radiation protection
Nanotechnology
Operational health physics
The subfield of operational health physics, also called applied health physics in older sources, focuses on field work and the p
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Radiotherapy is effective against cancer because cancer cells reproduce rapidly and, consequently, are more sensitive to this?
A. UV light
B. radiation
C. separation
D. destruction
Answer:
|
|
sciq-8623
|
multiple_choice
|
Chordates include vertebrates and invertebrates that have what?
|
[
"a notochord",
"a endoderm",
"chordate",
"a phloem"
] |
A
|
Relavent Documents:
Document 0:::
N. europaea has short rod-shaped cells with pointed ends, measuring 0.8–1.1 x 1.0–1.7 µm; motility has not been observed.
N. eutropha has rod- to pear-shaped cells with one or both ends pointed, measuring 1.0–1.3 x 1.6–2.3 µm. They are motile.
N. halophila cells have a coccoid shap
Document 1:::
Caenorhabditis elegans () is a free-living transparent nematode about 1 mm in length that lives in temperate soil environments. It is the type species of its genus. The name is a blend of the Greek caeno- (recent), rhabditis (rod-like) and Latin elegans (elegant). In 1900, Maupas initially named it Rhabditides elegans. Osche placed it in the subgenus Caenorhabditis in 1952, and in 1955, Dougherty raised Caenorhabditis to the status of genus.
C. elegans is an unsegmented pseudocoelomate and lacks respiratory or circulatory systems. Most of these nematodes are hermaphrodites and a few are males. Males have specialised tails for mating that include spicules.
In 1963, Sydney Brenner proposed research into C. elegans, primarily in the area of neuronal development. In 1974, he began research into the molecular and developmental biology of C. elegans, which has since been extensively used as a model organism. It was the first multicellular organism to have its whole genome sequenced, and in 2019 it was the first organism to have its connectome (neuronal "wiring diagram") completed.
Anatomy
C. elegans is unsegmented, vermiform, and bilaterally symmetrical. It has a cuticle (a tough outer covering, as an exoskeleton), four main epidermal cords, and a fluid-filled pseudocoelom (body cavity). It also has some of the same organ systems as larger animals. About one in a thousand individuals is male and the rest are hermaphrodites. The basic anatomy of C. elegans includes a mouth, pharynx, intestine, gonad, and collagenous cuticle. Like all nematodes, they have neither a circulatory nor a respiratory system. The four bands of muscles that run the length of the body are connected to a neural system that allows the muscles to move the animal's body only as dorsal bending or ventral bending, but not left or right, except for the head, where the four muscle quadrants are wired independently from one another. When a wave of dorsal/ventral muscle contractions proceeds from the back
Document 2:::
Ciona intestinalis (sometimes known by the common name of vase tunicate) is an ascidian (sea squirt), a tunicate with a very soft tunic. Its Latin name literally means "pillar of intestines", referring to the fact that its body is a soft, translucent column-like structure, resembling a mass of intestines sprouting from a rock. It is a globally distributed cosmopolitan species. Since Linnaeus described the species, Ciona intestinalis has been used as a model invertebrate chordate in developmental biology and genomics. Studies conducted between 2005 and 2010 have shown that there are at least two, possibly four, sister species. More recently it has been shown that one of these species has already been described as Ciona robusta. By anthropogenic means, the species has invaded various parts of the world and is known as an invasive species.
Although Linnaeus first categorised this species as a kind of mollusk, Alexander Kovalevsky found a tadpole-like larval stage during development that shows similarity to vertebrates. Recent molecular phylogenetic studies as well as phylogenomic studies support that sea squirts are the closest invertebrate relatives of vertebrates. Its full genome has been sequenced using a specimen from Half Moon Bay in California, US, showing a very small genome size, less than 1/20 of the human genome, but having a gene corresponding to almost every family of genes in vertebrates.
Description
Ciona intestinalis is a solitary tunicate with a cylindrical, soft, gelatinous body, up to long. The body colour and colour at the distal end of siphons are major external characters distinguishing sister species within the species complex.
The body of Ciona is bag-like and covered by a tunic, which is a secretion of the epidermal cells. The body is attached by a permanent base located at the posterior end, while the opposite extremity has two openings, the buccal and atrial siphons. Water is drawn into the ascidian through the buccal (oral) siphon and l
Document 3:::
The polypide in bryozoans encompasses most of the organs and tissues of each individual zooid. This includes the tentacles, tentacle sheath, U-shaped digestive tract, musculature and nerve cells. It is housed in the zooidal exoskeleton, which in cyclostomes is tubular and in cheilostomes is box-shaped.
See also
Bryozoan Anatomy
Document 4:::
The evolution of nervous systems dates back to the first development of nervous systems in animals (or metazoans). Neurons developed as specialized electrical signaling cells in multicellular animals, adapting the mechanism of action potentials present in motile single-celled and colonial eukaryotes. Primitive systems, like those found in protists, use chemical signalling for movement and sensitivity; data suggests these were precursors to modern neural cell types and their synapses. When some animals adopted a mobile lifestyle and began eating larger food particles externally, they developed ciliated epithelia, contractile muscles, and coordinating and sensory neurons in their outer layer to support this.
Simple nerve nets seen in acoels (basal bilaterians) and cnidarians are thought to be the ancestral condition for the Planulozoa (bilaterians plus cnidarians and, perhaps, placozoans). A more complex nerve net with simple nerve cords is present in ancient animals called ctenophores but no nerves, thus no nervous systems, are present in another group of ancient animals, the sponges (Porifera). Due to the common presence and similarity of some neural genes in these ancient animals and their protist relatives, the controversy of whether ctenophores or sponges diverged earlier, and the recent discovery of "neuroid" cells specialized in coordination of digestive choanocytes in Spongilla, the origin of neurons in the phylogenetic tree of life is still disputed. Further cephalization and nerve cord (ventral and dorsal) evolution occurred many times independently in bilaterians.
Neural precursors
Action potentials, which are necessary for neural activity, evolved in single-celled eukaryotes. These use calcium rather than sodium action potentials, but the mechanism was probably adapted into neural electrical signaling in multicellular animals. In some colonial eukaryotes, such as Obelia, electrical signals propagate not only through neural nets, but also through epithelial cells
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Chordates include vertebrates and invertebrates that have what?
A. a notochord
B. a endoderm
C. chordate
D. a phloem
Answer:
|
|
sciq-10853
|
multiple_choice
|
Why does a large log burn relatively slowly compared to the same mass of wood in the form of small twigs?
|
[
"larger surface area",
"larger surface area",
"smaller blade area",
"smaller surface area"
] |
D
|
Relavent Documents:
Document 0:::
Wood science, commonly referred to as wood sciences, is a scientific discipline that predominantly investigates elements associated with the formation, composition and macro- and microstructure of wood. It additionally delves into the biological, chemical, physical, and mechanical properties and characteristics of wood, as a natural lignocellulosic material.
A deep understanding of wood plays a pivotal role in various endeavors, such as the processing of wood, the production of wood-based materials like particleboard, fiberboard, OSB, plywood and other materials, as well as the utilization of wood and wood-based materials in construction and a wide array of products, including pulpwood, furniture, engineered wood products such as glued laminated timber, CLT, LVL, PSL, as well as pellets, briquettes, and numerous other products.
History
Initial comprehensive investigations in the field of wood science emerged at the start of the 20th century. The advent of contemporary wood research commenced in 1910, when the Forest Products Laboratory (FPL) was established in Madison, Wisconsin, USA. The Forest Products Laboratory played a fundamental role in wood science providing scientific research on wood and wood products in partnership with academia, industry, local and other institutions in North and South America and worldwide.
In the following years, many wood research institutes came into existence across almost all industrialized nations. A general overview of these institutes and laboratories is shown below:
1913: Institute of Wood and Pulp Chemistry Eberswalde (today's Eberswalde University for Sustainable Development), Germany
1913: Forest Products Laboratory Montreal, Canada
1918: Forest Products Laboratory Vancouver, Canada
1919: Forest Products Laboratory Melbourne, Australia
1923: Forest Products Research Laboratory, Princes Risborough, Great Britain
1929: Institute for Wood Science and Technology, Leningrad (now St. Petersburg), USSR
1933: Centre Technique
Document 1:::
Wildfire modeling is concerned with numerical simulation of wildfires to comprehend and predict fire behavior. Wildfire modeling aims to aid wildfire suppression, increase the safety of firefighters and the public, and minimize damage. Wildfire modeling can also aid in protecting ecosystems, watersheds, and air quality.
Using computational science, wildfire modeling involves the statistical analysis of past fire events to predict spotting risks and front behavior. Various wildfire propagation models have been proposed in the past, including simple ellipses and egg- and fan-shaped models. Early attempts to determine wildfire behavior assumed terrain and vegetation uniformity. However, the exact behavior of a wildfire's front is dependent on a variety of factors, including wind speed and slope steepness. Modern growth models utilize a combination of past ellipsoidal descriptions and Huygens' Principle to simulate fire growth as a continuously expanding polygon. Extreme value theory may also be used to predict the size of large wildfires. However, large fires that exceed suppression capabilities are often regarded as statistical outliers in standard analyses, even though fire policies are more influenced by large wildfires than by small fires.
Objectives
Wildfire modeling attempts to reproduce fire behavior, such as how quickly the fire spreads, in which direction, how much heat it generates. A key input to behavior modeling is the Fuel Model, or type of fuel, through which the fire is burning. Behavior modeling can also include whether the fire transitions from the surface (a "surface fire") to the tree crowns (a "crown fire"), as well as extreme fire behavior including rapid rates of spread, fire whirls, and tall well-developed convection columns. Fire modeling also attempts to estimate fire effects, such as the ecological and hydrological effects of the fire, fuel consumption, tree mortality, and amount and rate of smoke produced.
Environmental factors
Wildlan
Document 2:::
A twig is a thin, often short, branch of a tree or bush.
The buds on the twig are an important diagnostic characteristic, as are the abscission scars where the leaves have fallen away. The color, texture, and patterning of the twig bark are also important, in addition to the thickness and nature of any pith of the twig.
There are two types of twig: vegetative twigs and fruiting spurs. Fruiting spurs are specialized twigs that generally branch off the sides of branches and are stubby and slow-growing, with many annular ring markings from seasons past. The age and rate of growth of a twig can be determined by counting the winter terminal bud scale scars, or annular ring marking, across the diameter of the twig.
Twigs can be useful for starting a fire. They can be used as kindling, bridging the gap between highly flammable tinder (dry grass and leaves) and firewood. This is largely because their small diameter gives them a high surface-area-to-volume ratio, so they dry out, ignite, and burn far more readily than thicker pieces of wood.
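As a rough numerical illustration of this surface-area point (the cylinder dimensions below are made up for the example and are not from the source), compare one thick log with thin twigs of the same length:

import math

# Surface-area-to-volume ratio of a solid cylinder (lateral surface plus end caps).
def cylinder_surface_to_volume(radius, length):
    area = 2 * math.pi * radius * length + 2 * math.pi * radius ** 2
    volume = math.pi * radius ** 2 * length
    return area / volume

# Hypothetical dimensions: a log 10 cm in radius versus twigs 0.5 cm in radius, same length.
log_ratio = cylinder_surface_to_volume(radius=10.0, length=100.0)   # about 0.22 per cm
twig_ratio = cylinder_surface_to_volume(radius=0.5, length=100.0)   # about 4.02 per cm
print(log_ratio, twig_ratio)  # the twigs expose nearly 20 times more surface per unit volume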
Document 3:::
Energy forestry is a form of forestry in which a fast-growing species of tree or woody shrub is grown specifically to provide biomass or biofuel for heating or power generation.
The two forms of energy forestry are short rotation coppice and short rotation forestry:
Short rotation coppice may include tree crops of poplar, willow or eucalyptus, grown for two to five years before harvest.
Short rotation forestry are crops of alder, ash, birch, eucalyptus, poplar, and sycamore, grown for eight to twenty years before harvest.
Benefits
The main advantage of using "grown fuels", as opposed to fossil fuels such as coal, natural gas and oil, is that while they are growing they absorb the near-equivalent in carbon dioxide (an important greenhouse gas) to that which is later released in their burning. In comparison, burning fossil fuels increases atmospheric carbon unsustainably, by using carbon that was added to the Earth's carbon sink millions of years ago. This is a prime contributor to climate change.
According to the FAO, compared to other energy crops, wood is among the most efficient sources of bioenergy in terms of quantity of energy released by unit of carbon emitted. Other advantages of generating energy from trees, as opposed to agricultural crops, are that trees do not have to be harvested each year, the harvest can be delayed when market prices are down, and the products can fulfil a variety of end-uses.
Yields of some varieties can be as high as 11 oven dry tonnes per hectare every year. However, commercial experience on plantations in Scandinavia has shown lower yield rates.
These crops can also be used in bank stabilisation and phytoremediation. In fact, experiments in Sweden have shown that willow plantations have many beneficial effects on soil and water quality when compared with conventional agricultural crops (such as cereals). These beneficial effects have been the basis for the design of multifunctional production systems to meet emerging b
Document 4:::
A controlled or prescribed (Rx) burn, which can include hazard reduction burning, backfire, swailing or a burn-off, is a fire set intentionally for purposes of forest management, fire suppression, farming, prairie restoration or greenhouse gas abatement. A controlled burn may also refer to the intentional burning of slash and fuels through burn piles. Fire is a natural part of both forest and grassland ecology and controlled fire can be a tool for foresters.
Hazard reduction or controlled burning is conducted during the cooler months to reduce fuel buildup and decrease the likelihood of serious hotter fires. Controlled burning stimulates the germination of some desirable forest trees, and reveals soil mineral layers which increases seedling vitality, thus renewing the forest. Some cones, such as those of lodgepole pine, sequoia and many chaparral shrubs are pyriscent, meaning heat from fire opens cones to disperse seeds.
In industrialized countries, controlled burning is usually overseen by fire control authorities for regulations and permits.
History
There are two basic causes of wildfires. One is natural, mainly through lightning, and the other is human activity. Controlled burns have a long history in wildland management. Pre-agricultural societies used fire to regulate both plant and animal life. Fire history studies have documented periodic wildland fires ignited by indigenous peoples in North America and Australia. Native Americans frequently used fire to manage natural environments in a way that benefited humans and wildlife, starting low-intensity fires that released nutrients for plants, reduced competition, and consumed excess flammable material that otherwise would eventually fuel high-intensity, catastrophic fires.
Fires, both naturally caused and prescribed, were once part of natural landscapes in many areas. In the US, these practices ended in the early 20th century, when federal fire policies were enacted with the goal of suppressing all fires. S
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Why does a large log burn relatively slowly compared to the same mass of wood in the form of small twigs?
A. larger surface area
B. larger surface area
C. smaller blade area
D. smaller surface area
Answer:
|
|
sciq-511
|
multiple_choice
|
The right ventricle pumps what type of blood toward the lungs?
|
[
"oxygen-rich",
"plasma",
"oxygen-poor",
"oxygenated"
] |
C
|
Relavent Documents:
Document 0:::
A ventricle is one of two large chambers toward the bottom of the heart that collect and expel blood towards the peripheral beds within the body and lungs. The blood pumped by a ventricle is supplied by an atrium, an adjacent chamber in the upper heart that is smaller than a ventricle. Interventricular means between the ventricles (for example the interventricular septum), while intraventricular means within one ventricle (for example an intraventricular block).
In a four-chambered heart, such as that in humans, there are two ventricles that operate in a double circulatory system: the right ventricle pumps blood into the pulmonary circulation to the lungs, and the left ventricle pumps blood into the systemic circulation through the aorta.
Structure
Ventricles have thicker walls than atria and generate higher blood pressures. The physiological load on the ventricles, which must pump blood throughout the body and lungs, is much greater than that on the atria, which only have to fill the ventricles. Further, the left ventricle has thicker walls than the right because it needs to pump blood to most of the body while the right ventricle fills only the lungs.
On the inner walls of the ventricles are irregular muscular columns called trabeculae carneae, which cover all of the inner ventricular surfaces except that of the conus arteriosus in the right ventricle. There are three types of these muscles. The third type, the papillary muscles, give origin at their apices to the chordae tendineae, which attach to the cusps of the tricuspid valve and to the mitral valve.
The mass of the left ventricle, as estimated by magnetic resonance imaging, averages 143 g ± 38.4 g, with a range of 87–224 g.
The right ventricle is equal in size to the left ventricle and contains roughly 85 millilitres (3 imp fl oz; 3 US fl oz) in the adult. Its upper front surface is rounded and convex, and forms much of the sternocostal surface of the heart. Its under surface is flattened, forming pa
Document 1:::
The pulmonary circulation is a division of the circulatory system in all vertebrates. The circuit begins with deoxygenated blood returned from the body to the right atrium of the heart where it is pumped out from the right ventricle to the lungs. In the lungs the blood is oxygenated and returned to the left atrium to complete the circuit.
The other division of the circulatory system is the systemic circulation that begins with receiving the oxygenated blood from the pulmonary circulation into the left atrium. From the atrium the oxygenated blood enters the left ventricle where it is pumped out to the rest of the body, returning as deoxygenated blood back to the pulmonary circulation.
The blood vessels of the pulmonary circulation are the pulmonary arteries and the pulmonary veins.
A separate circulatory circuit known as the bronchial circulation supplies oxygenated blood to the tissue of the larger airways of the lung.
Structure
De-oxygenated blood leaves the heart, goes to the lungs, and then returns to the heart. From the right atrium, the blood is pumped through the tricuspid valve (or right atrioventricular valve) into the right ventricle. Blood is then pumped from the right ventricle through the pulmonary valve and into the pulmonary artery, which carries it out of the heart toward the lungs.
Lungs
The pulmonary arteries carry deoxygenated blood to the lungs, where carbon dioxide is released and oxygen is picked up during respiration. Arteries are further divided into very fine capillaries which are extremely thin-walled. The pulmonary veins return oxygenated blood to the left atrium of the heart.
Veins
Oxygenated blood leaves the lungs through pulmonary veins, which return it to the left part of the heart, completing the pulmonary cycle. This blood then enters the left atrium, which pumps it through the mitral valve into the left ventricle. From the left ventricle, the blood passes through the aortic valve to the
Document 2:::
Lucien Campeau (June 20, 1927March 15, 2010) was a Canadian cardiologist. He was a full professor at the Université de Montréal. He is best known for performing the world's first transradial coronary angiogram. Campeau was one of the founding staff of the Montreal Heart Institute, joining in 1957. He is also well known for developing the Canadian Cardiovascular Society grading of angina pectoris.
Education
Campeau received his M.D. degree from the University of Laval in 1953 and completed a fellowship in Cardiology at Johns Hopkins Hospital from 1956 to 1957. He later became a professor at University of Montreal in 1961 and was one of the co-founders of the Montreal Heart Institute.
In his lifetime, Campeau was awarded the Research Achievement Award of the Canadian Cardiovascular Society. In 2004, he was named “Cardiologue émérite 2004” by the Association des cardiologues du Québec.
Document 3:::
In cardiac physiology, cardiac output (CO), also known as heart output and often denoted by the symbol Q, is the volumetric flow rate of the heart's pumping output: that is, the volume of blood being pumped by a single ventricle of the heart per unit time (usually measured per minute). Cardiac output (CO) is the product of the heart rate (HR), i.e. the number of heartbeats per minute (bpm), and the stroke volume (SV), which is the volume of blood pumped from the left ventricle per beat, thus giving the formula:
CO = HR × SV
Values for cardiac output are usually denoted as L/min. For a healthy individual weighing 70 kg, the cardiac output at rest averages about 5 L/min; assuming a heart rate of 70 beats/min, the stroke volume would be approximately 70 mL.
Because cardiac output is related to the quantity of blood delivered to various parts of the body, it is an important component of how efficiently the heart can meet the body's demands for the maintenance of adequate tissue perfusion. Body tissues require continuous oxygen delivery which requires the sustained transport of oxygen to the tissues by systemic circulation of oxygenated blood at an adequate pressure from the left ventricle of the heart via the aorta and arteries. Oxygen delivery (DO2 mL/min) is the product of blood flow (cardiac output, CO) and the blood oxygen content (CaO2). Mathematically this is calculated as follows: oxygen delivery = cardiac output × arterial oxygen content, giving the formula:
DO2 = CO × CaO2
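A minimal worked check of the two formulas above, using the figures quoted in this excerpt plus an assumed arterial oxygen content of roughly 0.2 L of O2 per litre of blood (a typical value that is not stated in the text):

# Worked check of the formulas above (heart rate and stroke volume from the text, CaO2 assumed).
heart_rate = 70.0          # beats per minute (figure quoted in the text)
stroke_volume = 0.070      # litres per beat (about 70 mL, figure quoted in the text)
arterial_o2_content = 0.2  # litres of O2 per litre of blood (assumed typical value)

cardiac_output = heart_rate * stroke_volume              # CO = HR x SV, about 4.9 L/min
oxygen_delivery = cardiac_output * arterial_o2_content   # DO2 = CO x CaO2, about 1 L of O2 per minute
print(cardiac_output, oxygen_delivery)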
With a resting cardiac output of 5 L/min, a 'normal' oxygen delivery is around 1 L/min. The amount/percentage of the circulated oxygen consumed (VO2) per minute through metabolism varies depending on the activity level but at rest is circa 25% of the DO2. Physical exercise requires a higher than resting-level of oxygen consumption to support increased muscle activity. In the case of heart failure, actual CO may be insufficient to support even simple activities of daily living; nor can it increase sufficient
Document 4:::
Venous return is the rate of blood flow back to the heart. It normally limits cardiac output.
Superposition of the cardiac function curve and venous return curve is used in one hemodynamic model.
Physiology
Venous return (VR) is the flow of blood back to the heart. Under steady-state conditions, venous return must equal cardiac output (Q), when averaged over time because the cardiovascular system is essentially a closed loop. Otherwise, blood would accumulate in either the systemic or pulmonary circulations. Although cardiac output and venous return are interdependent, each can be independently regulated.
The circulatory system is made up of two circulations (pulmonary and systemic) situated in series between the right ventricle (RV) and left ventricle (LV). Balance is achieved, in large part, by the Frank–Starling mechanism. For example, if systemic venous return is suddenly increased (e.g., changing from upright to supine position), right ventricular preload increases leading to an increase in stroke volume and pulmonary blood flow. The left ventricle experiences an increase in pulmonary venous return, which in turn increases left ventricular preload and stroke volume by the Frank–Starling mechanism. In this way, an increase in venous return can lead to a matched increase in cardiac output.
Venous return curve
Hemodynamically, venous return (VR) to the heart from the venous vascular beds is determined by a pressure gradient (venous pressure - right atrial pressure) and venous resistance (RV). Therefore, increases in venous pressure or decreases in right atrial pressure or venous resistance will lead to an increase in venous return, except when changes are brought about by altered body posture. Although the above relationship is true for the hemodynamic factors that determine the flow of blood from the veins back to the heart, it is important not to lose sight of the fact that blood flow through the entire systemic circulation represents both the cardiac
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The right ventricle pumps what type of blood toward the lungs?
A. oxygen-rich
B. plasma
C. oxygen-poor
D. oxygenated
Answer:
|
|
sciq-8115
|
multiple_choice
|
Cytokinesis divides what part of the cell into two distinctive cells?
|
[
"nucleus",
"cytoplasm",
"DNA",
"cell wall"
] |
B
|
Relavent Documents:
Document 0:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks such as DNA replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and of movement.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i
Document 1:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA is a double-stranded macromolecule that carries the hereditary information of the cell and is found in all living cells; each cell carries one or more chromosomes with a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 2:::
A cell type is a classification used to identify cells that share morphological or phenotypical features. A multicellular organism may contain cells of a number of widely differing and specialized cell types, such as muscle cells and skin cells, that differ both in appearance and function yet have identical genomic sequences. Cells may have the same genotype, but belong to different cell types due to the differential regulation of the genes they contain. Classification of a specific cell type is often done through the use of microscopy or of molecular markers (such as those from the cluster of differentiation family that are commonly used for this purpose in immunology). Recent developments in single-cell RNA sequencing have facilitated the classification of cell types based on shared gene expression patterns. This has led to the discovery of many new cell types in e.g. mouse cortex, hippocampus, dorsal root ganglion and spinal cord.
Animals have evolved a greater diversity of cell types in a multicellular body (100–150 different cell types), compared
with 10–20 in plants, fungi, and protists. The exact number of cell types is, however, undefined, and the Cell Ontology, as of 2021, lists over 2,300 different cell types.
Multicellular organisms
All higher multicellular organisms contain cells specialised for different functions. Most distinct cell types arise from a single totipotent cell that differentiates into hundreds of different cell types during the course of development. Differentiation of cells is driven by different environmental cues (such as cell–cell interaction) and intrinsic differences (such as those caused by the uneven distribution of molecules during division). Multicellular organisms are composed of cells that fall into two fundamental types: germ cells and somatic cells. During development, somatic cells will become more specialized and form the three primary germ layers: ectoderm, mesoderm, and endoderm. After formation of the three germ layers, cells will continue to special
Document 3:::
This lecture, named in memory of Keith R. Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year.
Lecturers
Source: ASCB
See also
List of biology awards
Document 4:::
Cytochemistry is the branch of cell biology dealing with the detection of cell constituents by means of biochemical analysis and visualization techniques. This is the study of the localization of cellular components through the use of staining methods. The term is also used to describe a process of identification of the biochemical content of cells. Cytochemistry is a science of localizing chemical components of cells and cell organelles on thin histological sections by using several techniques like enzyme localization, micro-incineration, micro-spectrophotometry, radioautography, cryo-electron microscopy, X-ray microanalysis by energy-dispersive X-ray spectroscopy, immunohistochemistry and cytochemistry, etc.
Freeze Fracture Enzyme Cytochemistry
Freeze fracture enzyme cytochemistry was first described in the work of Pinto da Silva in 1987. It is a technique that allows cytochemistry to be applied to freeze-fractured cell membranes. Immunocytochemistry is used in this technique to label and visualize the membrane's molecules, making it useful for analyzing the ultrastructure of cell membranes. By combining immunocytochemistry with the freeze-fracture technique, researchers can identify and better understand the structure and distribution of cell membrane components.
Origin
Jean Brachet's research in Brussels, which demonstrated the localization and relative abundance of RNA and DNA in the cells of both animals and plants, opened the door to research in cytochemistry. The 1976 work by Moller and Holter on endocytosis, which discussed the relationship between a cell's structure and its function, established the need for cytochemical research.
Aims
Cytochemical research aims to study individual cells within a tissue that may contain several cell types. It takes a nondestructive approach to studying localization within the cell. By leaving the cell components intact, researchers are able to study the intact cell activ
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Cytokinesis divides what part of the cell into two distinctive cells?
A. nucleus
B. cytoplasm
C. DNA
D. cell wall
Answer:
|
|
sciq-7125
|
multiple_choice
|
Cirrus, stratus, and cumulus are the main types of what?
|
[
"climate",
"storms",
"weather",
"clouds"
] |
D
|
Relavent Documents:
Document 0:::
This is a list of meteorology topics. The terms relate to meteorology, the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting. (see also: List of meteorological phenomena)
A
advection
aeroacoustics
aerobiology
aerography (meteorology)
aerology
air parcel (in meteorology)
air quality index (AQI)
airshed (in meteorology)
American Geophysical Union (AGU)
American Meteorological Society (AMS)
anabatic wind
anemometer
annular hurricane
anticyclone (in meteorology)
apparent wind
Atlantic Oceanographic and Meteorological Laboratory (AOML)
Atlantic hurricane season
atmometer
atmosphere
Atmospheric Model Intercomparison Project (AMIP)
Atmospheric Radiation Measurement (ARM)
(atmospheric boundary layer [ABL]) planetary boundary layer (PBL)
atmospheric chemistry
atmospheric circulation
atmospheric convection
atmospheric dispersion modeling
atmospheric electricity
atmospheric icing
atmospheric physics
atmospheric pressure
atmospheric sciences
atmospheric stratification
atmospheric thermodynamics
atmospheric window (see under Threats)
B
ball lightning
balloon (aircraft)
baroclinity
barotropity
barometer ("to measure atmospheric pressure")
berg wind
biometeorology
blizzard
bomb (meteorology)
buoyancy
Bureau of Meteorology (in Australia)
C
Canada Weather Extremes
Canadian Hurricane Centre (CHC)
Cape Verde-type hurricane
capping inversion (in meteorology) (see "severe thunderstorms" in paragraph 5)
carbon cycle
carbon fixation
carbon flux
carbon monoxide (see under Atmospheric presence)
ceiling balloon ("to determine the height of the base of clouds above ground level")
ceilometer ("to determine the height of a cloud base")
celestial coordinate system
celestial equator
celestial horizon (rational horizon)
celestial navigation (astronavigation)
celestial pole
Celsius
Center for Analysis and Prediction of Storms (CAPS) (in Oklahoma in the US)
Center for the Study o
Document 1:::
The following outline is provided as an overview of and topical guide to the field of Meteorology.
Meteorology The interdisciplinary, scientific study of the Earth's atmosphere with the primary focus being to understand, explain, and forecast weather events. Meteorology, is applied to and employed by a wide variety of diverse fields, including the military, energy production, transport, agriculture, and construction.
Essence of meteorology
Meteorology
Climate – the average and variations of weather in a region over long periods of time.
Meteorology – the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting (in contrast with climatology).
Weather – the set of all the phenomena in a given atmosphere at a given time.
Branches of meteorology
Microscale meteorology – the study of atmospheric phenomena about 1 km or less, smaller than mesoscale, including small and generally fleeting cloud "puffs" and other small cloud features
Mesoscale meteorology – the study of weather systems about 5 kilometers to several hundred kilometers, smaller than synoptic scale systems but larger than microscale and storm-scale cumulus systems, such as sea breezes, squall lines, and mesoscale convective complexes
Synoptic scale meteorology – is a horizontal length scale of the order of 1000 kilometres (about 620 miles) or more
Methods in meteorology
Surface weather analysis – a special type of weather map that provides a view of weather elements over a geographical area at a specified time based on information from ground-based weather stations
Weather forecasting
Weather forecasting – the application of science and technology to predict the state of the atmosphere for a future time and a given location
Data collection
Pilot Reports
Weather maps
Weather map
Surface weather analysis
Forecasts and reporting of
Atmospheric pressure
Dew point
High-pressure area
Ice
Black ice
Frost
Low-pressure area
Precipitation
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
In atmospheric science, an atmospheric model is a mathematical model constructed around the full set of primitive, dynamical equations which govern atmospheric motions. It can supplement these equations with parameterizations for turbulent diffusion, radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, the kinematic effects of terrain, and convection. Most atmospheric models are numerical, i.e. they discretize equations of motion. They can predict microscale phenomena such as tornadoes and boundary layer eddies, sub-microscale turbulent flow over buildings, as well as synoptic and global flows. The horizontal domain of a model is either global, covering the entire Earth, or regional (limited-area), covering only part of the Earth. The different types of models run are thermotropic, barotropic, hydrostatic, and nonhydrostatic. Some of the model types make assumptions about the atmosphere which lengthens the time steps used and increases computational speed.
Forecasts are computed using mathematical equations for the physics and dynamics of the atmosphere. These equations are nonlinear and are impossible to solve exactly. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods. Global models often use spectral methods for the horizontal dimensions and finite-difference methods for the vertical dimension, while regional models usually use finite-difference methods in all three dimensions. For specific locations, model output statistics use climate information, output from numerical weather prediction, and current surface weather observations to develop statistical relationships which account for model bias and resolution issues.
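As a toy illustration of the finite-difference approach mentioned above (this is not code from any operational model), a one-dimensional upwind scheme for advecting a quantity q with a constant wind u can be written as follows:

# Toy 1-D upwind finite-difference step for the advection equation dq/dt + u * dq/dx = 0.
def upwind_step(q, u, dx, dt):
    n = len(q)
    updated = list(q)
    for i in range(n):
        if u >= 0:
            updated[i] = q[i] - u * dt / dx * (q[i] - q[i - 1])          # q[-1] gives a periodic boundary
        else:
            updated[i] = q[i] - u * dt / dx * (q[(i + 1) % n] - q[i])
    return updated

# Example: advect a simple "blob" to the right on a periodic domain of 20 grid points.
q = [0.0] * 20
q[5] = 1.0
for _ in range(10):
    q = upwind_step(q, u=1.0, dx=1.0, dt=0.5)  # Courant number u*dt/dx = 0.5, stable for upwind
print(q)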
Types
The main assumption made by the thermotropic model is that while the magnitude of the thermal wind may change, its direction does not change with respect to height, and thus the baroclinicity in the atmosphere can be simulated usi
Document 4:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college careers at public or private two-year schools, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Cirrus, stratus, and cumulus are the main types of what?
A. climate
B. storms
C. weather
D. clouds
Answer:
|
|
scienceQA-4658
|
multiple_choice
|
Select the fish.
|
[
"shoebill",
"golden frog",
"bison",
"hammerhead shark"
] |
D
|
A hammerhead shark is a fish. It lives underwater. It has fins, not limbs.
Hammerhead sharks get their names from the shape of their heads. They have a wide, flat head and a small mouth.
A golden frog is an amphibian. It has moist skin and begins its life in water.
Frogs live near water or in damp places. Most frogs lay their eggs in water.
A bison is a mammal. It has fur and feeds its young milk.
Male bison have horns. They can use their horns to defend themselves.
A shoebill is a bird. It has feathers, two wings, and a beak.
Shoebills live in tropical East Africa. Shoebills get their name from their shoe-shaped beaks.
|
Relavent Documents:
Document 0:::
Fish intelligence is "the resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills" as it applies to fish.
According to Culum Brown from Macquarie University, "Fish are more intelligent than they appear. In many areas, such as memory, their cognitive powers match or exceed those of ‘higher’ vertebrates including non-human primates."
Fish hold records for the relative brain weights of vertebrates. Most vertebrate species have similar brain-to-body mass ratios. The deep sea bathypelagic bony-eared assfish has the smallest ratio of all known vertebrates. At the other extreme, the electrogenic elephantnose fish, an African freshwater fish, has one of the largest brain-to-body weight ratios of all known vertebrates (slightly higher than humans) and the highest brain-to-body oxygen consumption ratio of all known vertebrates (three times that for humans).
Brain
Fish typically have quite small brains relative to body size compared with other vertebrates, typically one-fifteenth the brain mass of a similarly sized bird or mammal. However, some fish have relatively large brains, most notably mormyrids and sharks, which have brains about as massive relative to body weight as birds and marsupials.
The cerebellum of cartilaginous and bony fishes is large and complex. In at least one important respect, it differs in internal structure from the mammalian cerebellum: The fish cerebellum does not contain discrete deep cerebellar nuclei. Instead, the primary targets of Purkinje cells are a distinct type of cell distributed across the cerebellar cortex, a type not seen in mammals. The circuits in the cerebellum are similar across all classes of vertebrates, including fish, reptiles, birds, and mammals. There is also an analogous brain structure in cephalopods with well-developed brains, such as octopuses. This has been taken as evidence that the cerebellum performs functions important to
Document 1:::
The phylogenetic classification of bony fishes is a phylogenetic classification of bony fishes and is based on phylogenies inferred using molecular and genomic data for nearly 2000 fishes. The first version was published in 2013 and resolved 66 orders. The latest version (version 4) was published in 2017 and recognised 72 orders and 79 suborders.
Phylogeny
The following cladograms show the phylogeny of the Osteichthyes down to order level, with the number of families in parentheses.
The 43 orders of spiny-rayed fishes are related as follows:
Document 2:::
A fish (plural: fish or fishes) is an aquatic, craniate, gill-bearing animal that lacks limbs with digits. Included in this definition are the living hagfish, lampreys, and cartilaginous and bony fish as well as various extinct related groups. Approximately 95% of living fish species are ray-finned fish, belonging to the class Actinopterygii, with around 99% of those being teleosts.
The earliest organisms that can be classified as fish were soft-bodied chordates that first appeared during the Cambrian period. Although they lacked a true spine, they possessed notochords which allowed them to be more agile than their invertebrate counterparts. Fish would continue to evolve through the Paleozoic era, diversifying into a wide variety of forms. Many fish of the Paleozoic developed external armor that protected them from predators. The first fish with jaws appeared in the Silurian period, after which many (such as sharks) became formidable marine predators rather than just the prey of arthropods.
Most fish are ectothermic ("cold-blooded"), allowing their body temperatures to vary as ambient temperatures change, though some of the large active swimmers like white shark and tuna can hold a higher core temperature. Fish can acoustically communicate with each other, most often in the context of feeding, aggression or courtship.
Fish are abundant in most bodies of water. They can be found in nearly all aquatic environments, from high mountain streams (e.g., char and gudgeon) to the abyssal and even hadal depths of the deepest oceans (e.g., cusk-eels and snailfish), although no species has yet been documented in the deepest 25% of the ocean. With 34,300 described species, fish exhibit greater species diversity than any other group of vertebrates.
Fish are an important resource for humans worldwide, especially as food. Commercial and subsistence fishers hunt fish in wild fisheries or farm them in ponds or in cages in the ocean (in aquaculture). They are also caught by recreational
Document 3:::
The Digital Fish Library (DFL) is a University of California San Diego project funded by the Biological Infrastructure Initiative (DBI) of the National Science Foundation (NSF). The DFL creates 2D and 3D visualizations of the internal and external anatomy of fish obtained with magnetic resonance imaging (MRI) methods and makes these publicly available on the web.
The information core for the Digital Fish Library is generated using high-resolution MRI scanners housed at the Center for functional magnetic resonance imaging (CfMRI) multi-user facility at UC San Diego. These instruments use magnetic fields to take 3D images of animal tissues, allowing researchers to non-invasively see inside them and quantitatively describe their 3D anatomy. Fish specimens are obtained from the Marine Vertebrate Collection at Scripps Institute of Oceanography (SIO) and imaged by staff from UC San Diego's Center for Scientific Computation in Imaging (CSCI).
As of February 2010, the Digital Fish Library contains almost 300 species covering all five classes of fish, 56 of 60 orders, and close to 200 of the 521 fish families as described by Nelson, 2006. DFL imaging has also contributed to a number of published peer-reviewed scientific studies.
Digital Fish Library work has been featured in the media, including two National Geographic documentaries: Magnetic Navigator and Ultimate Shark.
Document 4:::
Fisheries science is the academic discipline of managing and understanding fisheries. It is a multidisciplinary science, which draws on the disciplines of limnology, oceanography, freshwater biology, marine biology, meteorology, conservation, ecology, population dynamics, economics, statistics, decision analysis, management, and many others in an attempt to provide an integrated picture of fisheries. In some cases new disciplines have emerged, as in the case of bioeconomics and fisheries law. Because fisheries science is such an all-encompassing field, fisheries scientists often use methods from a broad array of academic disciplines. Over the most recent several decades, there have been declines in fish stocks (populations) in many regions along with increasing concern about the impact of intensive fishing on marine and freshwater biodiversity.
Fisheries science is typically taught in a university setting, and can be the focus of an undergraduate, master's or Ph.D. program. Some universities offer fully integrated programs in fisheries science. Graduates of university fisheries programs typically find employment as scientists, fisheries managers of both recreational and commercial fisheries, researchers, aquaculturists, educators, environmental consultants and planners, conservation officers, and many others.
Fisheries research
Because fisheries take place in a diverse set of aquatic environments (i.e., high seas, coastal areas, large and small rivers, and lakes of all sizes), research requires different sampling equipment, tools, and techniques. For example, studying trout populations inhabiting mountain lakes requires a very different set of sampling tools than, say, studying salmon in the high seas. Ocean fisheries research vessels (FRVs) often require platforms which are capable of towing different types of fishing nets, collecting plankton or water samples from a range of depths, and carrying acoustic fish-finding equipment. Fisheries research vessels a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Select the fish.
A. shoebill
B. golden frog
C. bison
D. hammerhead shark
Answer:
|
sciq-2723
|
multiple_choice
|
What are commonly used to control insect pests, but can have harmful effects on the environment?
|
[
"toxins",
"fertilizers",
"Herbicides",
"insecticides"
] |
D
|
Relevant Documents:
Document 0:::
Integrated pest management (IPM), also known as integrated pest control (IPC) is a broad-based approach that integrates both chemical and non-chemical practices for economic control of pests. IPM aims to suppress pest populations below the economic injury level (EIL). The UN's Food and Agriculture Organization defines IPM as "the careful consideration of all available pest control techniques and subsequent integration of appropriate measures that discourage the development of pest populations and keep pesticides and other interventions to levels that are economically justified and reduce or minimize risks to human health and the environment. IPM emphasizes the growth of a healthy crop with the least possible disruption to agro-ecosystems and encourages natural pest control mechanisms." Entomologists and ecologists have urged the adoption of IPM pest control since the 1970s. IPM allows for safer pest control.
The introduction and spread of invasive species can also be managed with IPM by reducing risks while maximizing benefits and reducing costs.
History
Shortly after World War II, when synthetic insecticides became widely available, entomologists in California developed the concept of "supervised insect control". Around the same time, entomologists in the US Cotton Belt were advocating a similar approach. Under this scheme, insect control was "supervised" by qualified entomologists and insecticide applications were based on conclusions reached from periodic monitoring of pest and natural-enemy populations. This was viewed as an alternative to calendar-based programs. Supervised control was based on knowledge of the ecology and analysis of projected trends in pest and natural-enemy populations.
Supervised control formed much of the conceptual basis for the "integrated control" that University of California entomologists articulated in the 1950s. Integrated control sought to identify the best mix of chemical and biological controls for a given insect pest. Chemi
Document 1:::
Insecticides are pesticides used to kill insects. They include ovicides and larvicides used against insect eggs and larvae, respectively. Insecticides are used in agriculture, medicine, industry and by consumers. Insecticides are claimed to be a major factor behind the increase in the 20th-century's agricultural productivity. Nearly all insecticides have the potential to significantly alter ecosystems; many are toxic to humans and/or animals; some become concentrated as they spread along the food chain.
Insecticides can be classified into two major groups: systemic insecticides, which have residual or long-term activity; and contact insecticides, which have no residual activity.
The mode of action describes how the pesticide kills or inactivates a pest. It provides another way of classifying insecticides. Mode of action can be important in understanding whether an insecticide will be toxic to unrelated species, such as fish, birds and mammals.
Insecticides may be repellent or non-repellent. Social insects such as ants cannot detect non-repellents and readily crawl through them. As they return to the nest they take insecticide with them and transfer it to their nestmates. Over time, this eliminates all of the ants including the queen. This is slower than some other methods, but usually completely eradicates the ant colony.
Insecticides are distinct from non-insecticidal repellents, which repel but do not kill.
Type of activity
Systemic insecticides
Systemic insecticides become incorporated and distributed systemically throughout the whole plant. When insects feed on the plant, they ingest the insecticide. Systemic insecticides produced by transgenic plants are called plant-incorporated protectants (PIPs). For instance, a gene that codes for a specific Bacillus thuringiensis biocidal protein was introduced into corn (maize) and other species. The plant manufactures the protein, which kills the insect when consumed.
Contact insecticides
Contact insecticides are
Document 2:::
Pesticides are substances that are meant to control pests. This includes herbicide, insecticide, nematicide, molluscicide, piscicide, avicide, rodenticide, bactericide, insect repellent, animal repellent, microbicide, fungicide, and lampricide. The most common of these are herbicides, which account for approximately 50% of all pesticide use globally. Most pesticides are intended to serve as plant protection products (also known as crop protection products), which in general, protect plants from weeds, fungi, or insects. As an example, the fungus Alternaria solani is used to combat the aquatic weed Salvinia.
In general, a pesticide is a chemical (such as carbamate) or biological agent (such as a virus, bacterium, or fungus) that deters, incapacitates, kills, or otherwise discourages pests. Target pests can include insects, plant pathogens, weeds, molluscs, birds, mammals, fish, nematodes (roundworms), and microbes that destroy property, cause nuisance, or spread disease, or are disease vectors. Along with these benefits, pesticides also have drawbacks, such as potential toxicity to humans and other species.
Definition
The Food and Agriculture Organization (FAO) has defined pesticide as:
any substance or mixture of substances intended for preventing, destroying, or controlling any pest, including vectors of human or animal disease, unwanted species of plants or animals, causing harm during or otherwise interfering with the production, processing, storage, transport, or marketing of food, agricultural commodities, wood and wood products or animal feedstuffs, or substances that may be administered to animals for the control of insects, arachnids, or other pests in or on their bodies. The term includes substances intended for use as a plant growth regulator, defoliant, desiccant, or agent for thinning fruit or preventing the premature fall of fruit. Also used as substances applied to crops either before or after harvest to protect the commodity from deterioration dur
Document 3:::
The Federal Plant Pest Act of 1957 (P.L. 85–36) prohibited the movement of pests from a foreign country into or through the United States unless authorized by United States Department of Agriculture (USDA).
It was superseded by the Plant Protection Act of 2000 (P.L. 106–224, Title IV). Under the new law, the Animal and Plant Health Inspection Service (APHIS) retains broad authority to inspect, seize, quarantine, treat, destroy or dispose of imported plant and animal materials that are potentially harmful to U.S. agriculture, horticulture, forestry, and, to a certain degree, natural resources. (7 U.S.C. 7701 et seq.).
Titles of the Act
The 1957 Act was drafted as two titles defining policy standards for the control, eradication, and regulation of plant pests.
Title I - Federal Plant Pest Act - 7 U.S.C. §§ 150aa-150jj
Definitions
Dissemination of plant pests
Postal laws
Seizure of infected plants
Regulations and conditions
Inspections and seizures
Penalty
Separability
Disinfection of railway cars
Repeals
Title II - Eradication and Control of Insect Pests, Plant Diseases, and Nematodes - 7 U.S.C. § 147a
Department of Agriculture Organic Act of 1944 amendment
Document 4:::
Pesticide resistance describes the decreased susceptibility of a pest population to a pesticide that was previously effective at controlling the pest. Pest species evolve pesticide resistance via natural selection: the most resistant specimens survive and pass on their acquired heritable changes (traits) to their offspring. If a pest has resistance, then that will reduce the pesticide's efficacy; efficacy and resistance are inversely related.
Cases of resistance have been reported in all classes of pests (i.e. crop diseases, weeds, rodents, etc.), with 'crises' in insect control occurring early-on after the introduction of pesticide use in the 20th century. The Insecticide Resistance Action Committee (IRAC) definition of insecticide resistance is a heritable change in the sensitivity of a pest population that is reflected in the repeated failure of a product to achieve the expected level of control when used according to the label recommendation for that pest species.
Pesticide resistance is increasing. Farmers in the US lost 7% of their crops to pests in the 1940s; over the 1980s and 1990s, the loss was 13%, even though more pesticides were being used. Over 500 species of pests have evolved a resistance to a pesticide. Other sources estimate the number to be around 1,000 species since 1945.
Although the evolution of pesticide resistance is usually discussed as a result of pesticide use, it is important to keep in mind that pest populations can also adapt to non-chemical methods of control. For example, the northern corn rootworm (Diabrotica barberi) became adapted to a corn-soybean crop rotation by spending the year when the field is planted with soybeans in a diapause.
Few new weed killers are near commercialization, and none with a novel, resistance-free mode of action. Similarly, discovery of new insecticides is more expensive and difficult than ever.
Causes
Pesticide resistance probably stems from multiple factors:
Many pest species produce large number
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are commonly used to control insect pests, but can have harmful effects on the environment?
A. toxins
B. fertilizers
C. herbicides
D. insecticides
Answer:
|
|
sciq-7799
|
multiple_choice
|
Where is the fruiting body normally produced in relation to the food source?
|
[
"in the soil",
"at the surface",
"at the root",
"underneath the surface"
] |
B
|
Relevant Documents:
Document 0:::
The horticulture industry embraces the production, processing and shipping of and the market for fruits and vegetables. As such it is a sector of agribusiness and industrialized agriculture. Industrialized horticulture sometimes also includes the floriculture industry and production and trade of ornamental plants.
Among the most important fruits are:
bananas
Semi-tropical fruits like lychee, guava or tamarillo
Citrus fruits
soft fruits (berries)
apples
stone fruits
Important vegetables include:
Potatoes
Sweet potatoes
Tomatoes
Onions and
Cabbage
In 2013 global fruit production was estimated at . Global vegetable production (including melons) was estimated at with China and India being the two top producing countries.
Value chain
The horticultural value chain includes:
Inputs: elements needed for production; seeds, fertilizers, agrochemicals, farm equipment, irrigation equipment, GMO technology
Production for export: includes fruit and vegetables production and all processes related to growth and harvesting; planting, weeding, spraying, picking
Packing and cold storage: grading, washing, trimming, chopping, mixing, packing, labeling, blast chilling
Processed fruit and vegetables: dried, frozen, preserved, juices, pulps; mostly for increasing shelf life
Distribution and marketing: supermarkets, small scale retailers, wholesalers, food service
Companies
Fruit
Chiquita Brands International
Del Monte Foods
Dole Food Company
Genetically modified crops / GMO
Monsanto/Bayer
Document 1:::
Pomology (from Latin , "fruit", + , "study") is a branch of botany that studies fruits and their cultivation. Someone who researches and practices the science of pomology is called a pomologist. The term fruticulture (from Latin , "fruit", + , "care") is also used to describe the agricultural practice of growing fruits in orchards.
Pomological research is mainly focused on the development, enhancement, cultivation and physiological studies of fruit trees. The goals of fruit tree improvement include enhancement of fruit quality, regulation of production periods, and reduction of production costs.
History
Middle East
In ancient Mesopotamia, pomology was practiced by the Sumerians, who are known to have grown various types of fruit, including dates, grapes, apples, melons, and figs. While the first fruits cultivated by the Egyptians were likely indigenous, such as the palm date and sorghum, more fruits were introduced as other cultural influences were introduced. Grapes and watermelon were found throughout predynastic Egyptian sites, as were the sycamore fig, dom palm and Christ's thorn. The carob, olive, apple and pomegranate were introduced to Egyptians during the New Kingdom. Later, during the Greco-Roman period peaches and pears were also introduced.
Europe
The ancient Greeks and Romans also had a strong tradition of pomology, and they cultivated a wide range of fruits, including apples, pears, figs, grapes, quinces, citron, strawberries, blackberries, elderberries, currants, damson plums, dates, melons, rose hips and pomegranates. Less common fruits were the more exotic azeroles and medlars. Cherries and apricots, both introduced in the 1st century BC, were popular. Peaches were introduced in the 1st century AD from Persia. Oranges and lemons were known but used more for medicinal purposes than in cookery. The Romans, in particular, were known for their advanced methods of fruit cultivation and storage, and they developed many of the techniques that are sti
Document 2:::
Multiplex sensor is a hand-held multiparametric optical sensor developed by Force-A. The sensor is a result of 15 years of research on plant autofluorescence conducted by the CNRS (National Center for Scientific Research) and University of Paris-Sud Orsay. It provides accurate and complete information on the physiological state of the crop, allowing real-time and non-destructive measurements of chlorophyll and polyphenols contents in leaves and fruits.
Technology
Multiplex assesses the chlorophyll and polyphenols indices by making use of two attributes of plant fluorescence: the effect of fluorescence re-absorption by chlorophyll and screening effect of polyphenols.
The sensor is an optical head which contains:
Optical sources (UV, blue, green and red)
Detectors (blue-green or yellow, red and far-red (NIR))
Applications
Alongside with other data, Multiplex is designed to provide input for decision support systems (DSS) for a range of crops, including:
Fertilization applications
Crop quality assessments (nitrogen status, maturity, freshness and disease detection)
As a standalone sensor, Multiplex is a tool for rapid collection of information on the chlorophyll and flavonoid contents of the plant, for use in ecophysiological research.
Document 3:::
A community orchard is a collection of fruit trees shared by communities and growing in publicly accessible areas such as public greenspaces, parks, schools, churchyards, allotments or, in the US, abandoned lots. Such orchards are a shared resource and not managed for personal or business profit. Income may be generated to sustain the orchard as a charity, community interest company, or other non-profit structure. What they have in common is that they are cared for by a community of people.
Community orchards are planted for many reasons. They increase the public's access to healthy, organic fruit - especially in areas where the population cannot afford healthy, fresh food. They teach young people where their food comes from. They allow ordinary people to develop organic fruit tree growing skills. And they can make an ordinary park or green space into a community centre, where residents volunteer together to care for and harvest the trees. Community orchards also are a place of celebration. Many groups organize harvest and blossom festivals, cider pressing events, canning workshops and more.
Types of community orchards
Membership orchards
Community orchards are structured in various ways. Some models, such as Copley Orchard in Vancouver, have a membership model. Members are asked to donate $20 a year to cover orchard costs. Membership comes with rights and responsibilities. Members have the right to enjoy the harvest - and the responsibility to care for the trees during stewardship days.
Allotment garden orchards
Other orchards are linked to allotment gardens. Strathcona Community Orchard in Vancouver, B.C., is an example of that. Members pay for the right to grow vegetables or flowers in one of the 200 plots on the site - membership is just $10.00 a year and the plot rental fee is an additional $5 a year. As part of their membership, however, they must attend a certain number of mandatory work party days which take place on the last Sunday of every month exce
Document 4:::
NIAB EMR is a horticultural and agricultural research institute at East Malling, Kent in England, with a specialism in fruit and clonally propagated crop production. In 2016, the institute became part of the NIAB Group.
History
A research station was established on the East Malling site in 1913 on the impetus of local fruit growers. The original buildings are still in use today. Some of the finest and most important research on perennial crops has been conducted on the site, resulting in East Malling’s worldwide reputation. Some of the more well-known developments have been achieved in the areas of plant raising, fruit plant culture (especially the development of rootstocks), fruit breeding, ornamental breeding, fruit storage and the biology and control of pests and diseases.
From 1990 a division of Horticulture Research International (HRI) was on the site. HRI closed in 2009.
In 2016, East Malling Research became part of the National Institute of Agricultural Botany (NIAB) group.
Apple rootstocks
In 1912, Ronald Hatton initiated the work of classification, testing and standardisation of apple tree rootstocks. With the help of Dr Wellington, Hatton sorted out the incorrect naming and mixtures then widespread in apple rootstocks distributed throughout Europe. These verified and distinct apple rootstocks are called the "Malling series". The most widely used was the M9 rootstock.
Structure
It is situated east of East Malling, and north of the Maidstone East Line. The western half of the site is in East Malling and Larkfield and the eastern half is in Ditton. It is just south of the A20, and between junctions 4 and 5 of the M20 motorway.
Function
Today the Research Centre also acts as a business enterprise centre supported by leading local businesses including QTS Analytical and Network Computing Limited. The conference centre trades as East Malling Ltd, being incorporated on 17 February 2004.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where is the fruiting body normally produced in relation to the food source?
A. in the soil
B. at the surface
C. at the root
D. underneath the surface
Answer:
|
|
sciq-11507
|
multiple_choice
|
A binary molecular compound is a molecular compound that is composed of what?
|
[
"two elements",
"four elements",
"four atoms",
"two atoms"
] |
A
|
Relevant Documents:
Document 0:::
Atomicity is the total number of atoms present in a molecule. For example, each molecule of oxygen (O2) is composed of two oxygen atoms. Therefore, the atomicity of oxygen is 2.
In older contexts, atomicity is sometimes equivalent to valency. Some authors also use the term to refer to the maximum number of valencies observed for an element.
Classifications
Based on atomicity, molecules can be classified as:
Monoatomic (composed of one atom). Examples include He (helium), Ne (neon), Ar (argon), and Kr (krypton). All noble gases are monoatomic.
Diatomic (composed of two atoms). Examples include H2 (hydrogen), N2 (nitrogen), O2 (oxygen), F2 (fluorine), and Cl2 (chlorine). Halogens are usually diatomic.
Triatomic (composed of three atoms). Examples include O3 (ozone).
Polyatomic (composed of three or more atoms). Examples include S8.
Atomicity may vary in different allotropes of the same element.
The exact atomicity of metals, as well as some other elements such as carbon, cannot be determined because they consist of a large and indefinite number of atoms bonded together. They are typically designated as having an atomicity of 1.
The atomicity of a homonuclear molecule can be derived by dividing the molecular weight by the atomic weight. For example, the molecular weight of oxygen is 31.999, while its atomic weight is 15.999; therefore, its atomicity is approximately 2 (31.999/15.999 ≈ 2).
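To make the arithmetic above concrete, here is a minimal Python sketch (added for illustration; the function name is an assumption, and the oxygen weights are standard reference values rather than figures from the source text):

```python
def atomicity(molecular_weight: float, atomic_weight: float) -> int:
    """Estimate the atomicity of a homonuclear molecule by dividing
    its molecular weight by the atomic weight of its element."""
    return round(molecular_weight / atomic_weight)

# Molecular oxygen, O2: 31.999 / 15.999 is approximately 2
print(atomicity(31.999, 15.999))  # -> 2
```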
Examples
The most common values of atomicity for the first 30 elements in the periodic table are as follows:
Document 1:::
A heteronuclear molecule is a molecule composed of atoms of more than one chemical element. For example, a molecule of water (H2O) is heteronuclear because it has atoms of two different elements, hydrogen (H) and oxygen (O).
Similarly, a heteronuclear ion is an ion that contains atoms of more than one chemical element. For example, the carbonate ion () is heteronuclear because it has atoms of carbon (C) and oxygen (O). The lightest heteronuclear ion is the helium hydride ion (HeH+). This is in contrast to a homonuclear ion, which contains all the same kind of atom, such as the dihydrogen cation, or atomic ions that only contain one atom such as the hydrogen anion (H−).
Document 2:::
A carbon–carbon bond is a covalent bond between two carbon atoms. The most common form is the single bond: a bond composed of two electrons, one from each of the two atoms. The carbon–carbon single bond is a sigma bond and is formed between one hybridized orbital from each of the carbon atoms. In ethane, the orbitals are sp3-hybridized orbitals, but single bonds formed between carbon atoms with other hybridizations do occur (e.g. sp2 to sp2). In fact, the carbon atoms in the single bond need not be of the same hybridization. Carbon atoms can also form double bonds in compounds called alkenes or triple bonds in compounds called alkynes. A double bond is formed with an sp2-hybridized orbital and a p-orbital that is not involved in the hybridization. A triple bond is formed with an sp-hybridized orbital and two p-orbitals from each atom. The use of the p-orbitals forms a pi bond.
Chains and branching
Carbon is one of the few elements that can form long chains of its own atoms, a property called catenation. This coupled with the strength of the carbon–carbon bond gives rise to an enormous number of molecular forms, many of which are important structural elements of life, so carbon compounds have their own field of study: organic chemistry.
Branching is also common in C−C skeletons. Carbon atoms in a molecule are categorized by the number of carbon neighbors they have:
A primary carbon has one carbon neighbor.
A secondary carbon has two carbon neighbors.
A tertiary carbon has three carbon neighbors.
A quaternary carbon has four carbon neighbors.
In "structurally complex organic molecules", it is the three-dimensional orientation of the carbon–carbon bonds at quaternary loci which dictates the shape of the molecule. Further, quaternary loci are found in many biologically active small molecules, such as cortisone and morphine.
Synthesis
Carbon–carbon bond-forming reactions are organic reactions in which a new carbon–carbon bond is formed. They are important in th
Document 3:::
In chemical nomenclature, the IUPAC nomenclature of organic chemistry is a method of naming organic chemical compounds as recommended by the International Union of Pure and Applied Chemistry (IUPAC). It is published in the Nomenclature of Organic Chemistry (informally called the Blue Book). Ideally, every possible organic compound should have a name from which an unambiguous structural formula can be created. There is also an IUPAC nomenclature of inorganic chemistry.
To avoid long and tedious names in normal communication, the official IUPAC naming recommendations are not always followed in practice, except when it is necessary to give an unambiguous and absolute definition to a compound. IUPAC names can sometimes be simpler than older names, as with ethanol, instead of ethyl alcohol. For relatively simple molecules they can be more easily understood than non-systematic names, which must be learnt or looked over. However, the common or trivial name is often substantially shorter and clearer, and so preferred. These non-systematic names are often derived from an original source of the compound. Also, very long names may be less clear than structural formulas.
Basic principles
In chemistry, a number of prefixes, suffixes and infixes are used to describe the type and position of the functional groups in the compound.
The steps for naming an organic compound are:
Identification of the parent hydride (parent hydrocarbon chain). This chain must obey the following rules, in order of precedence:
It should have the maximum number of substituents of the suffix functional group. By suffix, it is meant that the parent functional group should have a suffix, unlike halogen substituents. If more than one functional group is present, the one with highest group precedence should be used.
It should have the maximum number of multiple bonds.
It should have the maximum length.
It should have the maximum number of substituents or branches cited as prefixes
It should have the ma
Document 4:::
A bicyclic molecule () is a molecule that features two joined rings. Bicyclic structures occur widely, for example in many biologically important molecules like α-thujene and camphor. A bicyclic compound can be carbocyclic (all of the ring atoms are carbons), or heterocyclic (the rings' atoms consist of at least two elements), like DABCO. Moreover, the two rings can both be aliphatic (e.g. decalin and norbornane), or can be aromatic (e.g. naphthalene), or a combination of aliphatic and aromatic (e.g. tetralin).
Three modes of ring junction are possible for a bicyclic compound:
In spiro compounds, the two rings share only one single atom, the spiro atom, which is usually a quaternary carbon. An example of a spirocyclic compound is the photochromic switch spiropyran.
In fused/condensed bicyclic compounds, two rings share two adjacent atoms. In other words, the rings share one covalent bond, i.e. the bridgehead atoms are directly connected (e.g. α-thujene and decalin).
In bridged bicyclic compounds, the two rings share three or more atoms, separating the two bridgehead atoms by a bridge containing at least one atom. For example, norbornane, also known as bicyclo[2.2.1]heptane, can be viewed as a pair of cyclopentane rings each sharing three of their five carbon atoms. Camphor is a more elaborate example.
Nomenclature
Bicyclic molecules are described by IUPAC nomenclature. The root of the compound name depends on the total number of atoms in all rings together, possibly followed by a suffix denoting the functional group with the highest priority. Numbering of the carbon chain always begins at one bridgehead atom (where the rings meet) and follows the carbon chain along the longest path, to the next bridgehead atom. Then numbering is continued along the second longest path and so on. Fused and bridged bicyclic compounds get the prefix bicyclo, whereas spirocyclic compounds get the prefix spiro. In between the prefix and the suffix, a pair of brackets with numerals
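As a loose illustration of the von Baeyer naming pattern just described (a sketch only, not an implementation of the full IUPAC rules; the helper name and the caller-supplied suffix are assumptions), the bridge sizes can be assembled into a name like this:

```python
def bicyclo_name(bridge_sizes, suffix):
    """Assemble a von Baeyer-style name such as 'bicyclo[2.2.1]heptane'
    from the carbon counts of the three bridges between the bridgehead atoms.
    The hydrocarbon suffix (reflecting the total atom count) is supplied by
    the caller rather than derived here."""
    sizes = sorted(bridge_sizes, reverse=True)
    return "bicyclo[" + ".".join(str(s) for s in sizes) + "]" + suffix

# Norbornane: bridges of 2, 2 and 1 carbons plus 2 bridgehead carbons = 7 atoms
print(bicyclo_name([2, 2, 1], "heptane"))  # -> 'bicyclo[2.2.1]heptane'
```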
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A binary molecular compound is a molecular compound that is composed of what?
A. two elements
B. four elements
C. four atoms
D. two atoms
Answer:
|
|
ai2_arc-421
|
multiple_choice
|
When two unequal forces act in opposite directions on a moving object, the object will
|
[
"absorb the forces.",
"come to an immediate stop.",
"continue to move in the same direction.",
"move in the same direction as the larger force."
] |
D
|
Relevant Documents:
Document 0:::
As described by the third of Newton's laws of motion of classical mechanics, all forces occur in pairs such that if one object exerts a force on another object, then the second object exerts an equal and opposite reaction force on the first. The third law is also more generally stated as: "To every action there is always opposed an equal reaction: or the mutual actions of two bodies upon each other are always equal, and directed to contrary parts." The attribution of which of the two forces is the action and which is the reaction is arbitrary. Either of the two can be considered the action, while the other is its associated reaction.
Examples
Interaction with ground
When something is exerting force on the ground, the ground will push back with equal force in the opposite direction. In certain fields of applied physics, such as biomechanics, this force by the ground is called 'ground reaction force'; the force by the object on the ground is viewed as the 'action'.
When someone wants to jump, he or she exerts additional downward force on the ground ('action'). Simultaneously, the ground exerts upward force on the person ('reaction'). If this upward force is greater than the person's weight, this will result in upward acceleration. When these forces are perpendicular to the ground, they are also called a normal force.
Likewise, the spinning wheels of a vehicle attempt to slide backward across the ground. If the ground is not too slippery, this results in a pair of friction forces: the 'action' by the wheel on the ground in backward direction, and the 'reaction' by the ground on the wheel in forward direction. This forward force propels the vehicle.
Gravitational forces
The Earth, among other planets, orbits the Sun because the Sun exerts a gravitational pull that acts as a centripetal force, holding the Earth to it, which would otherwise go shooting off into space. If the Sun's pull is considered an action, then Earth simultaneously exerts a reaction as a gravi
Document 1:::
Linear motion, also called rectilinear motion, is one-dimensional motion along a straight line, and can therefore be described mathematically using only one spatial dimension. The linear motion can be of two types: uniform linear motion, with constant velocity (zero acceleration); and non-uniform linear motion, with variable velocity (non-zero acceleration). The motion of a particle (a point-like object) along a line can be described by its position , which varies with (time). An example of linear motion is an athlete running a 100-meter dash along a straight track.
Linear motion is the most basic of all motion. According to Newton's first law of motion, objects that do not experience any net force will continue to move in a straight line with a constant velocity until they are subjected to a net force. Under everyday circumstances, external forces such as gravity and friction can cause an object to change the direction of its motion, so that its motion cannot be described as linear.
One may compare linear motion to general motion. In general motion, a particle's position and velocity are described by vectors, which have a magnitude and direction. In linear motion, the directions of all the vectors describing the system are equal and constant which means the objects move along the same axis and do not change direction. The analysis of such systems may therefore be simplified by neglecting the direction components of the vectors involved and dealing only with the magnitude.
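A small numeric sketch of the two cases mentioned above (added for illustration; the function names and values are assumptions, not from the source):

```python
def position_uniform(x0: float, v: float, t: float) -> float:
    """Uniform linear motion: constant velocity, x = x0 + v*t."""
    return x0 + v * t

def position_accelerated(x0: float, v0: float, a: float, t: float) -> float:
    """Non-uniform linear motion with constant acceleration:
    x = x0 + v0*t + (1/2)*a*t**2."""
    return x0 + v0 * t + 0.5 * a * t ** 2

# A sprinter covering 100 m at a steady 10 m/s, versus starting from rest at 2 m/s^2
print(position_uniform(0.0, 10.0, 10.0))          # -> 100.0 (metres after 10 s)
print(position_accelerated(0.0, 0.0, 2.0, 10.0))  # -> 100.0 (metres after 10 s)
```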
Background
Displacement
The motion in which all the particles of a body move through the same distance in the same time is called translatory motion. There are two types of translatory motions: rectilinear motion; curvilinear motion. Since linear motion is a motion in a single dimension, the distance traveled by an object in particular direction is the same as displacement. The SI unit of displacement is the metre. If is the initial position of an object and is the final position, then mat
Document 2:::
In physics, and in particular in biomechanics, the ground reaction force (GRF) is the force exerted by the ground on a body in contact with it.
For example, a person standing motionless on the ground exerts a contact force on it (equal to the person's weight) and at the same time an equal and opposite ground reaction force is exerted by the ground on the person.
In the above example, the ground reaction force coincides with the notion of a normal force. However, in a more general case, the GRF will also have a component parallel to the ground, for example when the person is walking – a motion that requires the exchange of horizontal (frictional) forces with the ground.
The use of the word reaction derives from Newton's third law, which essentially states that if a force, called action, acts upon a body, then an equal and opposite force, called reaction, must act upon another body. The force exerted by the ground is conventionally referred to as the reaction, although, since the distinction between action and reaction is completely arbitrary, the expression ground action would be, in principle, equally acceptable.
The component of the GRF parallel to the surface is the frictional force. When slippage occurs the ratio of the magnitude of the frictional force to the normal force yields the coefficient of static friction.
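The ratio mentioned above can be written out directly; this tiny sketch (illustrative only, with assumed numbers) computes the coefficient of static friction at the onset of slipping:

```python
def static_friction_coefficient(frictional_force: float, normal_force: float) -> float:
    """mu_s = F_friction / N, evaluated at the point where slippage begins."""
    return frictional_force / normal_force

# Example: slipping starts when the horizontal force reaches 350 N
# under a normal force of 700 N
print(static_friction_coefficient(350.0, 700.0))  # -> 0.5
```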
GRF is often observed to evaluate force production in various groups within the community. One of these groups studied often are athletes to help evaluate a subject's ability to exert force and power. This can help create baseline parameters when creating strength and conditioning regimens from a rehabilitation and coaching standpoint. Plyometric jumps such as a drop-jump is an activity often used to build greater power and force which can lead to overall better ability on the playing field. When landing from a safe height in a bilateral comparisons on GRF in relation to landing with the dominant foot first followed by the non-dominant limb, litera
Document 3:::
In physics, the restoring force is a force that acts to bring a body to its equilibrium position. The restoring force is a function only of position of the mass or particle, and it is always directed back toward the equilibrium position of the system. The restoring force is often referred to in simple harmonic motion. The force responsible for restoring original size and shape is called the restoring force.
An example is the action of a spring. An idealized spring exerts a force proportional to the amount of deformation of the spring from its equilibrium length, exerted in a direction opposing the deformation. Pulling the spring to a greater length causes it to exert a force that brings the spring back toward its equilibrium length. The amount of force can be determined by multiplying the spring constant, a characteristic of the spring, by the amount of stretch; this relationship is known as Hooke's law.
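A minimal sketch of the spring force just described, assuming Hooke's law F = -k·x with an illustrative spring constant (not a value taken from the source):

```python
def restoring_force(k: float, displacement: float) -> float:
    """Hooke's law: the spring exerts F = -k * x, directed against the
    displacement from its equilibrium length."""
    return -k * displacement

# A spring with k = 200 N/m stretched 0.05 m beyond its natural length
print(restoring_force(200.0, 0.05))  # -> -10.0 N, pulling back toward equilibrium
```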
Another example is of a pendulum. When a pendulum is not swinging all the forces acting on it are in equilibrium. The force due to gravity and the mass of the object at the end of the pendulum is equal to the tension in the string holding the object up. When a pendulum is put in motion, the place of equilibrium is at the bottom of the swing, the location where the pendulum rests. When the pendulum is at the top of its swing the force returning the pendulum to this midpoint is gravity. As a result, gravity may be seen as a restoring force.
See also
Response amplitude operator
Document 4:::
The parallelogram of forces is a method for solving (or visualizing) the results of applying two forces to an object.
When more than two forces are involved, the geometry is no longer parallelogrammatic, but the same principles apply. Forces, being vectors, are observed to obey the laws of vector addition, and so the overall (resultant) force due to the application of a number of forces can be found geometrically by drawing vector arrows for each force. For example, see Figure 1. This construction has the same result as moving F2 so its tail coincides with the head of F1, and taking the net force as the vector joining the tail of F1 to the head of F2. This procedure can be repeated to add F3 to the resultant F1 + F2, and so forth.
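The component-wise addition implied by the construction above can be sketched numerically; the following example (illustrative only, not from the source) adds two planar forces and reports the magnitude of the resultant:

```python
import math

def resultant(f1, f2):
    """Add two 2-D force vectors, given as (Fx, Fy) tuples, component-wise."""
    return (f1[0] + f2[0], f1[1] + f2[1])

# Two forces applied to the same object: 3 N along x and 4 N along y
fr = resultant((3.0, 0.0), (0.0, 4.0))
print(fr)               # -> (3.0, 4.0)
print(math.hypot(*fr))  # -> 5.0, the magnitude of the resultant force
```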
Newton's proof
Preliminary: the parallelogram of velocity
Suppose a particle moves at a uniform rate along a line from A to B (Figure 2) in a given time (say, one second), while in the same time, the line AB moves uniformly from its position at AB to a position at DC, remaining parallel to its original orientation throughout. Accounting for both motions, the particle traces the line AC. Because a displacement in a given time is a measure of velocity, the length of AB is a measure of the particle's velocity along AB, the length of AD is a measure of the line's velocity along AD, and the length of AC is a measure of the particle's velocity along AC. The particle's motion is the same as if it had moved with a single velocity along AC.
Newton's proof of the parallelogram of force
Suppose two forces act on a particle at the origin (the "tails" of the vectors) of Figure 1. Let the lengths of the vectors F1 and F2 represent the velocities the two forces could produce in the particle by acting for a given time, and let the direction of each represent the direction in which they act. Each force acts independently and will produce its particular velocity whether the other force acts or not. At the end of the given time, the particle has both v
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When two unequal forces act in opposite directions on a moving object, the object will
A. absorb the forces.
B. come to an immediate stop.
C. continue to move in the same direction.
D. move in the same direction as the larger force.
Answer:
|
|
sciq-4252
|
multiple_choice
|
Each line in a structural formula represents a pair of shared what?
|
[
"electrons",
"atoms",
"ions",
"waves"
] |
A
|
Relevant Documents:
Document 0:::
Lewis structures, also called Lewis dot formulas, Lewis dot structures, electron dot structures, or Lewis electron dot structures (LEDs), are diagrams that show the bonding between atoms of a molecule, as well as the lone pairs of electrons that may exist in the molecule. A Lewis structure can be drawn for any covalently bonded molecule, as well as coordination compounds. The Lewis structure was named after Gilbert N. Lewis, who introduced it in his 1916 article The Atom and the Molecule. Lewis structures extend the concept of the electron dot diagram by adding lines between atoms to represent shared pairs in a chemical bond.
Lewis structures show each atom and its position in the structure of the molecule using its chemical symbol. Lines are drawn between atoms that are bonded to one another (pairs of dots can be used instead of lines). Excess electrons that form lone pairs are represented as pairs of dots, and are placed next to the atoms.
Although main group elements of the second period and beyond usually react by gaining, losing, or sharing electrons until they have achieved a valence shell electron configuration with a full octet of (8) electrons, hydrogen (H) can only form bonds which share just two electrons.
Construction and electron counting
The total number of electrons represented in a Lewis structure is equal to the sum of the numbers of valence electrons on each individual atom. Non-valence electrons are not represented in Lewis structures.
Once the total number of valence electrons has been determined, they are placed into the structure according to these steps:
Initially, one line (representing a single bond) is drawn between each pair of connected atoms.
Each bond consists of a pair of electrons, so if t is the total number of electrons to be placed and n is the number of single bonds just drawn, t−2n electrons remain to be placed. These are temporarily drawn as dots, one per electron, to a maximum of eight per atom (two in the case of hydrogen)
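The bookkeeping in the steps above (t total valence electrons, minus two per single bond drawn) can be sketched as follows; this is an illustration only, and the example molecule and function name are assumptions rather than part of the source text:

```python
def electrons_remaining(valence_electrons_per_atom, n_single_bonds):
    """Lewis-structure bookkeeping: with t total valence electrons and n
    single bonds drawn, t - 2*n electrons remain to be placed as dots."""
    t = sum(valence_electrons_per_atom)
    return t - 2 * n_single_bonds

# Water, H2O: oxygen contributes 6 valence electrons, each hydrogen 1;
# two O-H single bonds are drawn first.
print(electrons_remaining([6, 1, 1], 2))  # -> 4, i.e. two lone pairs on oxygen
```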
Document 1:::
Steudel R 2020, Chemistry of the Non-metals: Syntheses - Structures - Bonding - Applications, in collaboration with D Scheschkewitz, Berlin, Walter de Gruyter, . ▲
An updated translation of the 5th German edition of 2013, incorporating the literature up to Spring 2019. Twenty-three nonmetals, including B, Si, Ge, As, Se, Te, and At but not Sb (nor Po). The nonmetals are identified on the basis of their electrical conductivity at absolute zero putatively being close to zero, rather than finite as in the case of metals. That does not work for As however, which has the electronic structure of a semimetal (like Sb).
Halka M & Nordstrom B 2010, "Nonmetals", Facts on File, New York,
A reading level 9+ book covering H, C, N, O, P, S, Se. Complementary books by the same authors examine (a) the post-transition metals (Al, Ga, In, Tl, Sn, Pb and Bi) and metalloids (B, Si, Ge, As, Sb, Te and Po); and (b) the halogens and noble gases.
Woolins JD 1988, Non-Metal Rings, Cages and Clusters, John Wiley & Sons, Chichester, .
A more advanced text that covers H; B; C, Si, Ge; N, P, As, Sb; O, S, Se and Te.
Steudel R 1977, Chemistry of the Non-metals: With an Introduction to Atomic Structure and Chemical Bonding, English edition by FC Nachod & JJ Zuckerman, Berlin, Walter de Gruyter, . ▲
Twenty-four nonmetals, including B, Si, Ge, As, Se, Te, Po and At.
Powell P & Timms PL 1974, The Chemistry of the Non-metals, Chapman & Hall, London, . ▲
Twenty-two nonmetals including B, Si, Ge, As and Te. Tin and antimony are shown as being intermediate between metals and nonmetals; they are later shown as either metals or nonmetals. Astatine is counted as a metal.
Document 2:::
The cubical atom was an early atomic model in which electrons were positioned at the eight corners of a cube in a non-polar atom or molecule. This theory was developed in 1902 by Gilbert N. Lewis and published in 1916 in the article "The Atom and the Molecule" and used to account for the phenomenon of valency.
Lewis' theory was based on Abegg's rule. It was further developed in 1919 by Irving Langmuir as the cubical octet atom. The figure below shows structural representations for elements of the second row of the periodic table.
Although the cubical model of the atom was soon abandoned in favor of the quantum mechanical model based on the Schrödinger equation, and is therefore now principally of historical interest, it represented an important step towards the understanding of the chemical bond. The 1916 article by Lewis also introduced the concept of the electron pair in the covalent bond, the octet rule, and the now-called Lewis structure.
Bonding in the cubical atom model
Single covalent bonds are formed when two atoms share an edge, as in structure C below. This results in the sharing of two electrons. Ionic bonds are formed by the transfer of an electron from one cube to another without sharing an edge (structure A). An intermediate state where only one corner is shared (structure B) was also postulated by Lewis.
Double bonds are formed by sharing a face between two cubic atoms. This results in sharing four electrons:
Triple bonds could not be accounted for by the cubical atom model, because there is no way of having two cubes share three parallel edges. Lewis suggested that the electron pairs in atomic bonds have a special attraction, which result in a tetrahedral structure, as in the figure below (the new location of the electrons is represented by the dotted circles in the middle of the thick edges). This allows the formation of a single bond by sharing a corner, a double bond by sharing an edge, and a triple bond by sharing a face. It also accounts
Document 3:::
The SYBYL line notation or SLN is a specification for unambiguously describing the structure of chemical molecules using short ASCII strings. SLN differs from SMILES in several significant ways. SLN can specify molecules, molecular queries, and reactions in a single line notation whereas SMILES handles these through language extensions. SLN has support for relative stereochemistry, it can distinguish mixtures of enantiomers from pure molecules with pure but unresolved stereochemistry. In SMILES aromaticity is considered to be a property of both atoms and bonds whereas in SLN it is a property of bonds.
Description
Like SMILES, SLN is a linear language that describes molecules. This provides a lot of similarity with SMILES despite SLN's many differences from SMILES, and as a result this description will heavily compare SLN to SMILES and its extensions.
Attributes
Attributes, bracketed strings with additional data like [key1=value1, key2...], are a core feature of SLN. Attributes can be applied to atoms and bonds. Attributes not defined officially are available to users for private extensions.
When searching for molecules, comparison operators such as fcharge>-0.125 can be used in place of the usual equal sign. A ! preceding a key/value group inverts the result of the comparison.
Entire molecules or reactions can also have attributes. The square brackets are changed to a pair of <> signs.
Atoms
Anything that starts with an uppercase letter identifies an atom in SLN. Hydrogens are not automatically added, but the single bonds with hydrogen can be abbreviated for organic compounds, resulting in CH4 instead of C(H)(H)(H)H for methane. The author argues that explicit hydrogens allow for more robust parsing.
Attributes defined for atoms include I= for isotope mass number, charge= for formal charge, fcharge for partial charge, s= for stereochemistry, and spin= for radicals (s, d, t respectively for singlet, doublet, triplet). A formal charge of charge=2 can be abbrevi
Document 4:::
A chemical bonding model is a theoretical model used to explain atomic bonding structure, molecular geometry, properties, and reactivity of physical matter. This can refer to:
VSEPR theory, a model of molecular geometry.
Valence bond theory, which describes molecular electronic structure with localized bonds and lone pairs.
Molecular orbital theory, which describes molecular electronic structure with delocalized molecular orbitals.
Crystal field theory, an electrostatic model for transition metal complexes.
Ligand field theory, the application of molecular orbital theory to transition metal complexes.
Chemical bonding
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Each line in a structural formula represents a pair of shared what?
A. electrons
B. atoms
C. ions
D. waves
Answer:
|
|
sciq-2783
|
multiple_choice
|
What type of air may get stuck on the windward side of a mountain range?
|
[
"live air",
"brisk air",
"maritime air",
"steady air"
] |
C
|
Relevant Documents:
Document 0:::
The Mountain Wave Project (MWP) pursues global scientific research of gravity waves and associated turbulence. MWP seeks to develop new scientific insights and knowledge through high altitude and record seeking glider flights with the goal of increasing overall flight safety and improving pilot training.
Corporate history
Motivation
Wind movement over terrain and ground obstacles can create wavelike wind formations which can reach up to the stratosphere. In 1998 the pilots René Heise and Klaus Ohlmann founded the MWP, a project for global classification, research, and analysis of orographically created wind structures (e.g. Chinook, Foehn, Mistral, Zonda). The MWP is an independent non-profit-project of the Scientific and Meteorological Section of the Organisation Scientifique et Technique du Vol à Voile (OSTIV) and is supported by the Fédération Aéronautique Internationale (FAI).
The MWP was originally focused on achieving a better understanding of the complex thermal and dynamic air movements in the atmosphere, and using that knowledge to achieve ever greater long distance soaring flights. As MWP gained greater awareness of the power inherent to mountain wave-like structures in the atmosphere, and their strong vertical airflows, it became obvious that they presented great dangers to civil aviation in multiple ways. Therefore, the focus of the MWP shifted to a more scientific approach to the airflow phenomena, with the goal of discovering new ways to increase overall aviation safety. Through the support of other scientists and cooperation partners the core group became more powerful and gained greater depth of knowledge. The integration of Joerg Hacker from the Airborne Research Australia (ARA) into the core group significantly enhanced the overall depth of knowledge of the group.
Airborne measurements
In order to learn more about the relevant physical process in the atmosphere, the MWP Team launched two expeditions in the Argentinean Andes in 1999 and 2006. F
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 3:::
Atmospheric circulation of a planet is largely specific to the planet in question and the study of atmospheric circulation of exoplanets is a nascent field as direct observations of exoplanet atmospheres are still quite sparse. However, by considering the fundamental principles of fluid dynamics and imposing various limiting assumptions, a theoretical understanding of atmospheric motions can be developed. This theoretical framework can also be applied to planets within the Solar System and compared against direct observations of these planets, which have been studied more extensively than exoplanets, to validate the theory and understand its limitations as well.
The theoretical framework first considers the Navier–Stokes equations, the governing equations of fluid motion. Then, limiting assumptions are imposed to produce simplified models of fluid motion specific to large scale motion atmospheric dynamics. These equations can then be studied for various conditions (i.e. fast vs. slow planetary rotation rate, stably stratified vs. unstably stratified atmosphere) to see how a planet's characteristics would impact its atmospheric circulation. For example, a planet may fall into one of two regimes based on its rotation rate: geostrophic balance or cyclostrophic balance.
Atmospheric motions
Coriolis force
When considering atmospheric circulation we tend to take the planetary body as the frame of reference. In fact, this is a non-inertial frame of reference which has acceleration due to the planet's rotation about its axis. Coriolis force is the force that acts on objects moving within the planetary frame of reference, as a result of the planet's rotation. Mathematically, the acceleration due to Coriolis force can be written as:
a_C = −2 Ω × u
where
u is the flow velocity
Ω is the planet's angular velocity vector
This force acts perpendicular to the flow and velocity and the planet's angular velocity vector, and comes into play when considering the atmospheric motion of a rotat
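As an illustrative aside (my own sketch, not part of the excerpt above), the Coriolis acceleration can be evaluated numerically as a vector cross product; the wind speed and latitude below are arbitrary example values.

import numpy as np

# Sketch: Coriolis acceleration a_C = -2 * (Omega x u) in a local east-north-up frame.
# Omega is the planet's angular velocity vector; the magnitude used here is Earth's.
OMEGA_MAG = 7.2921e-5                       # rotation rate in rad/s (Earth)
lat = np.deg2rad(45.0)                      # assumed mid-latitude location
Omega = OMEGA_MAG * np.array([0.0, np.cos(lat), np.sin(lat)])
u = np.array([10.0, 0.0, 0.0])              # assumed 10 m/s eastward flow
a_coriolis = -2.0 * np.cross(Omega, u)
print(a_coriolis)                           # small deflecting acceleration, to the right of the flow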
Document 4:::
In meteorology, wind speed, or wind flow speed, is a fundamental atmospheric quantity caused by air moving from high to low pressure, usually due to changes in temperature. Wind speed is now commonly measured with an anemometer.
Wind speed affects weather forecasting, aviation and maritime operations, construction projects, growth and metabolism rate of many plant species, and has countless other implications. Wind direction is usually almost parallel to isobars (and not perpendicular, as one might expect), due to Earth's rotation.
Units
The metre per second (m/s) is the SI unit for velocity and the unit recommended by the World Meteorological Organization for reporting wind speeds, and is amongst others used in weather forecasts in the Nordic countries. Since 2010 the International Civil Aviation Organization (ICAO) also recommends meters per second for reporting wind speed when approaching runways, replacing their former recommendation of using kilometres per hour (km/h).
For historical reasons, other units such as miles per hour (mph), knots (kn) or feet per second (ft/s) are also sometimes used to measure wind speeds. Historically, wind speeds have also been classified using the Beaufort scale, which is based on visual observations of specifically defined wind effects at sea or on land.
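As a quick illustration of the unit relationships listed above (a sketch added here, not taken from the source text), conversions between the common wind-speed units are simple multiplicative factors.

def convert_wind_speed(metres_per_second: float) -> dict:
    # 1 m/s = 3.6 km/h; 1 knot = 0.514444 m/s; 1 mph = 0.44704 m/s
    return {
        "m/s": metres_per_second,
        "km/h": metres_per_second * 3.6,
        "knots": metres_per_second / 0.514444,
        "mph": metres_per_second / 0.44704,
    }

print(convert_wind_speed(10.0))   # about 36 km/h, 19.4 knots, 22.4 mph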
Factors affecting wind speed
Wind speed is affected by a number of factors and situations, operating on varying scales (from micro to macro scales). These include the pressure gradient, Rossby waves and jet streams, and local weather conditions. There are also links to be found between wind speed and wind direction, notably with the pressure gradient and terrain conditions.
Pressure gradient is a term to describe the difference in air pressure between two points in the atmosphere or on the surface of the Earth. It is vital to wind speed, because the greater the difference in pressure, the faster the wind flows (from the high to low pressure) to balance out the variation. Th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of air may get stuck on the windward side of a mountain range?
A. live air
B. brisk air
C. maritime air
D. steady air
Answer:
|
|
sciq-3023
|
multiple_choice
|
The electrons in a water molecule are more concentrated around the more highly charged oxygen nucleus than around this?
|
[
"hydrogen nuclei",
"carbon nuclei",
"peroxide nuclei",
"helium nuclei"
] |
A
|
Relevant Documents:
Document 0:::
Secondary electrons are electrons generated as ionization products. They are called 'secondary' because they are generated by other radiation (the primary radiation). This radiation can be in the form of ions, electrons, or photons with sufficiently high energy, i.e. exceeding the ionization potential. Photoelectrons can be considered an example of secondary electrons where the primary radiation are photons; in some discussions photoelectrons with higher energy (>50 eV) are still considered "primary" while the electrons freed by the photoelectrons are "secondary".
Applications
Secondary electrons are also the main means of viewing images in the scanning electron microscope (SEM). The range of secondary electrons depends on the energy. Plotting the inelastic mean free path as a function of energy often shows characteristics of the "universal curve" familiar to electron spectroscopists and surface analysts. This distance is on the order of a few nanometers in metals and tens of nanometers in insulators. This small distance allows such fine resolution to be achieved in the SEM.
For SiO2, for a primary electron energy of 100 eV, the secondary electron range is up to 20 nm from the point of incidence.
See also
Delta ray
Everhart-Thornley detector
Document 1:::
The protonosphere is a layer of the Earth's atmosphere (or any planet with a similar atmosphere) where the dominant components are atomic hydrogen and ionic hydrogen (protons). It is the outer part of the ionosphere, and extends to the interplanetary medium. Hydrogen dominates in the outermost layers because it is the lightest gas, and in the heterosphere, mixing is not strong enough to overcome differences in constituent gas densities. Charged particles are created by incoming ionizing radiation, mostly from solar radiation.
Document 2:::
The subatomic scale is the domain of physical size that encompasses objects smaller than an atom. It is the scale at which the atomic constituents, such as the nucleus containing protons and neutrons, and the electrons in their orbitals, become apparent.
The subatomic scale includes the many thousands of times smaller subnuclear scale, which is the scale of physical size at which constituents of the protons and neutrons - particularly quarks - become apparent.
See also
Astronomical scale the opposite end of the spectrum
Subatomic particles
Document 3:::
The rms charge radius is a measure of the size of an atomic nucleus, particularly the proton distribution. The proton radius is approximately one femtometre = 10−15 m. It can be measured by the scattering of electrons by the nucleus. Relative changes in the mean squared nuclear charge distribution can be precisely measured with atomic spectroscopy.
Definition
The problem of defining a radius for the atomic nucleus has some similarity to that of defining a radius for the entire atom; neither have well defined boundaries. However, basic liquid drop models of the nucleus imagine a fairly uniform density of nucleons, theoretically giving a more recognizable surface to a nucleus than an atom, the latter being composed of highly diffuse electron clouds with density gradually reducing away from the centre. For individual protons and neutrons or small nuclei, the concepts of size and boundary can be less clear. A single nucleon needs to be regarded as a "color confined" bag of three valence quarks, binding gluons and so called "sea" of quark-antiquark pairs. Additionally, the nucleon is surrounded by its Yukawa pion field responsible for the strong nuclear force. It could be difficult to decide whether to include the surrounding Yukawa meson field as part of the proton or nucleon size or to regard it as a separate entity.
Fundamentally important are realizable experimental procedures to measure some aspect of size, whatever that may mean in the quantum realm of atoms and nuclei. Foremost, the nucleus can be modeled as a sphere of positive charge for the interpretation of electron scattering experiments: the electrons "see" a range of cross-sections, for which a mean can be taken. The qualification of "rms" (for "root mean square") arises because it is the nuclear cross-section, proportional to the square of the radius, which is determining for electron scattering.
This definition of charge radius is often applied to composite hadrons such as a proton, neutron, pion, or kaon,
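To make the "rms" qualification concrete, the root-mean-square charge radius of a charge distribution rho(r) can be written as follows (a standard textbook definition added here for illustration, not quoted from the excerpt):

\sqrt{\langle r^{2} \rangle} = \left( \frac{\int r^{2}\, \rho(\mathbf{r})\, d^{3}r}{\int \rho(\mathbf{r})\, d^{3}r} \right)^{1/2}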
Document 4:::
Electric potential energy is a potential energy (measured in joules) that results from conservative Coulomb forces and is associated with the configuration of a particular set of point charges within a defined system. An object may be said to have electric potential energy by virtue of either its own electric charge or its relative position to other electrically charged objects.
The term "electric potential energy" is used to describe the potential energy in systems with time-variant electric fields, while the term "electrostatic potential energy" is used to describe the potential energy in systems with time-invariant electric fields.
Definition
The electric potential energy of a system of point charges is defined as the work required to assemble this system of charges by bringing them close together, as in the system from an infinite distance. Alternatively, the electric potential energy of any given charge or system of charges is termed as the total work done by an external agent in bringing the charge or the system of charges from infinity to the present configuration without undergoing any acceleration.
The electrostatic potential energy can also be defined from the electric potential as follows:
UE = q V, where V is the electric potential at the location of the charge q.
Units
The SI unit of electric potential energy is joule (named after the English physicist James Prescott Joule). In the CGS system the erg is the unit of energy, being equal to 10−7 Joules. Also electronvolts may be used, 1 eV = 1.602×10−19 Joules.
Electrostatic potential energy of one point charge
One point charge q in the presence of another point charge Q
The electrostatic potential energy, UE, of one point charge q at position r in the presence of a point charge Q, taking an infinite separation between the charges as the reference position, is:
UE(r) = ke qQ / r
where ke = 1/(4πε0) is the Coulomb constant, r is the distance between the point charges q and Q, and q and Q are the charges (not the absolute values of the charges—i.e., an electron would have a negative value of charge when
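As a small numerical illustration of the Coulomb expression above (my own sketch; the charges and separation are arbitrary example values), the pair potential energy can be computed directly.

# Sketch: U_E = k_e * q * Q / r, with infinite separation as the zero of energy
K_E = 8.9875517923e9            # Coulomb constant in N·m²/C²
E_CHARGE = 1.602176634e-19      # elementary charge in C

def pair_potential_energy(q: float, Q: float, r: float) -> float:
    return K_E * q * Q / r

# Electron and proton separated by roughly one Bohr radius (0.529e-10 m)
u_e = pair_potential_energy(-E_CHARGE, +E_CHARGE, 0.529e-10)
print(u_e)                      # about -4.36e-18 J, i.e. roughly -27.2 eV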
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The electrons in a water molecule are more concentrated around the more highly charged oxygen nucleus than around this?
A. hydrogen nuclei
B. carbon nuclei
C. peroxide nuclei
D. helium nuclei
Answer:
|
|
sciq-3517
|
multiple_choice
|
How do organisms grow and repair themselves?
|
[
"cell death",
"symbosis",
"mutation",
"cell division"
] |
D
|
Relevant Documents:
Document 0:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind: neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases, which has aided overall in the understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of the combination of living organisms and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
Document 1:::
In biology, cell theory is a scientific theory first formulated in the mid-nineteenth century, that organisms are made up of cells, that they are the basic structural/organizational unit of all organisms, and that all cells come from pre-existing cells. Cells are the basic unit of structure in all organisms and also the basic unit of reproduction.
The theory was once universally accepted, but now some biologists consider non-cellular entities such as viruses to be living organisms, and thus disagree with the first tenet. As of 2021: "expert opinion remains divided roughly a third each between yes, no and don’t know". As there is no universally accepted definition of life, discussion still continues.
History
With continual improvements to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and it began the scientific study of cells, known as cell biology. When observing a piece of cork under the microscope, he was able to see pores. This was striking at the time, as it was believed no one had seen these before. To further support the theory, Matthias Schleiden and Theodor Schwann studied cells of both animals and plants. What they discovered were significant differences between the two types of cells. This put forth the idea that cells were fundamental not only to plants, but to animals as well.
Microscopes
The discovery of the cell was made possible through the invention of the microscope. In the first century BC, Romans were able to make glass. They discovered that objects appeared to be larger under the glass. The expanded use of lenses in eyeglasses in the 13th century probably led to wider spread use of simple microscopes (magnifying glasses) with limited magnification. Compound microscopes, which combine an objective lens with an eyepiece to view a real image achieving much higher magnification, first appeared in Europe around 1620. In 1665, Robert Hooke used a microscope
Document 2:::
Biological processes are those processes that are vital for an organism to live, and that shape its capacities for interacting with its environment. Biological processes are made of many chemical reactions or other events that are involved in the persistence and transformation of life forms. Metabolism and homeostasis are examples.
Biological processes within an organism can also work as bioindicators. Scientists are able to look at an individual's biological processes to monitor the effects of environmental changes.
Regulation of biological processes occurs when any process is modulated in its frequency, rate or extent. Biological processes are regulated by many means; examples include the control of gene expression, protein modification or interaction with a protein or substrate molecule.
Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature
Organization: being structurally composed of one or more cells – the basic units of life
Metabolism: transformation of energy by converting chemicals and energy into cellular components (anabolism) and decomposing organic matter (catabolism). Living things require energy to maintain internal organization (homeostasis) and to produce the other phenomena associated with life.
Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size in all of its parts, rather than simply accumulating matter.
Response to stimuli: a response can take many forms, from the contraction of a unicellular organism to external chemicals, to complex reactions involving all the senses of multicellular organisms. A response is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism), and chemotaxis.
Reproduction: the ability to produce new individual organisms, either asexually from a single parent organism or sexually from two parent organisms.
Interaction between organisms: the processes
Document 3:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and of movement.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and it began the scientific study of cells, known as cell biology. When observing a piece of cork under the microscope, he was able to see pores. This was shocking at the time as i
Document 4:::
A cell type is a classification used to identify cells that share morphological or phenotypical features. A multicellular organism may contain cells of a number of widely differing and specialized cell types, such as muscle cells and skin cells, that differ both in appearance and function yet have identical genomic sequences. Cells may have the same genotype, but belong to different cell types due to the differential regulation of the genes they contain. Classification of a specific cell type is often done through the use of microscopy (such as those from the cluster of differentiation family that are commonly used for this purpose in immunology). Recent developments in single cell RNA sequencing facilitated classification of cell types based on shared gene expression patterns. This has led to the discovery of many new cell types in e.g. mouse cortex, hippocampus, dorsal root ganglion and spinal cord.
Animals have evolved a greater diversity of cell types in a multicellular body (100–150 different cell types), compared with 10–20 in plants, fungi, and protists. The exact number of cell types is, however, undefined, and the Cell Ontology, as of 2021, lists over 2,300 different cell types.
Multicellular organisms
All higher multicellular organisms contain cells specialised for different functions. Most distinct cell types arise from a single totipotent cell that differentiates into hundreds of different cell types during the course of development. Differentiation of cells is driven by different environmental cues (such as cell–cell interaction) and intrinsic differences (such as those caused by the uneven distribution of molecules during division). Multicellular organisms are composed of cells that fall into two fundamental types: germ cells and somatic cells. During development, somatic cells will become more specialized and form the three primary germ layers: ectoderm, mesoderm, and endoderm. After formation of the three germ layers, cells will continue to special
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How do organisms grow and repair themselves?
A. cell death
B. symbiosis
C. mutation
D. cell division
Answer:
|
|
sciq-5860
|
multiple_choice
|
Many parasites have complex life cycles involving multiple ?
|
[
"hosts",
"features",
"viruses",
"diseases"
] |
A
|
Relevant Documents:
Document 0:::
Archaeoparasitology, a multi-disciplinary field within paleopathology, is the study of parasites in archaeological contexts. It includes studies of the protozoan and metazoan parasites of humans in the past, as well as parasites which may have affected past human societies, such as those infesting domesticated animals.
Reinhard suggested that the term "archaeoparasitology" be applied to "... all parasitological remains excavated from archaeological contexts ... derived from human activity" and that "the term 'paleoparasitology' be applied to studies of nonhuman, paleontological material." (p. 233) Paleoparasitology includes all studies of ancient parasites outside of archaeological contexts, such as those found in amber, and even dinosaur parasites.
The first archaeoparasitology report described calcified eggs of Bilharzia haematobia (now Schistosoma haematobium) from the kidneys of an ancient Egyptian mummy. Since then, many fundamental archaeological questions have been answered by integrating our knowledge of the hosts, life cycles and basic biology of parasites, with the archaeological, anthropological and historical contexts in which they are found.
Parasitology basics
Parasites are organisms which live in close association with another organism, called the host, in which the parasite benefits from the association, to the detriment of the host. Many other kinds of associations may exist between two closely allied organisms, such as commensalism or mutualism.
Endoparasites (such as protozoans and helminths), tend to be found inside the host, while ectoparasites (such as ticks, lice and fleas) live on the outside of the host body. Parasite life cycles often require that different developmental stages pass sequentially through multiple host species in order to successfully mature and reproduce. Some parasites are very host-specific, meaning that only one or a few species of hosts are capable of perpetuating their life cycle. Others are not host-spec
Document 1:::
A heteroecious parasite is one that requires at least two hosts. The primary host is the host in which the parasite spends its adult life; the other is the secondary host. Both hosts are required for the parasite to complete its life cycle. This can be contrasted with an autoecious parasite which can complete its life cycle on a single host species. Many rust fungi have heteroecious life cycles:
In parasitology, heteroxeny, or heteroxenous development, is a synonymous term that characterizes a parasite whose development involves several hosts.
Fungal examples
Gymnosporangium (Cedar-apple rust): the juniper is the primary (telial) host and the apple, pear or hawthorn is the secondary (aecial) host.
Cronartium ribicola (White pine blister rust): the primary host are white pines, and currants the secondary.
Hemileia vastatrix (Coffee rust): the primary host is coffee plant, and the alternate host is unknown.
Puccinia graminis (Stem rust): the primary hosts include Kentucky bluegrass, barley, and wheat; barberry is the alternate host.
Puccinia coronata var. avenae (Crown rust of oats): Oats are the primary host; Rhamnus spp. (Buckthorns) are the alternate hosts.
Phakopsora meibomiae and P. pachyrhizi (Soybean Rust): the primary host is soybean and various legumes. The alternate host is unknown.
Puccinia porri (Leek rust): autoecious
History
The phenomenon of heteroecy was first discovered by A.S. Ørsted in 1863.
Document 2:::
In experimental physics, and particularly in high energy and nuclear physics, a parasite experiment or parasitic experiment is an experiment performed using a big particle accelerator or other large facility, without interfering with the scheduled experiments of that facility. This allows the experimenters to proceed without the usual competitive time scheduling procedure. These experiments may be instrument tests or experiments whose scientific interest has not been clearly established.
Further reading
Experimental particle physics
Document 3:::
Parasite Rex: Inside the Bizarre World of Nature's Most Dangerous Creatures is a nonfiction book by Carl Zimmer that was published by Free Press in 2000. The book discusses the history of parasites on Earth and how the field and study of parasitology formed, along with a look at the most dangerous parasites ever found in nature. A special paperback edition was released in March 2011 for the tenth anniversary of the book's publishing, including a new epilogue written by Zimmer. Signed bookplates were also given to fans that sent in a photo of themselves with a copy of the special edition.
The cover of Parasite Rex includes a scanning electron microscope image of a tick as the focus, along with illustrations in the centerfold of parasites and topics discussed in the book.
Content
The book begins by discussing the history of parasites in human knowledge, from the earliest writings about them in ancient cultures, up through modern times. The focus comes to rest extensively on the views and experiments conducted by scientists in the 17th, 18th, and 19th centuries, such as those done by Antonie van Leeuwenhoek, Japetus Steenstrup, Friedrich Küchenmeister, and Ray Lankester. Among them, Leeuwenhoek was the first to ever physically view cells through a microscope, Steenstrup was the first to explain and confirm the multiple stages and life cycles of parasites that are different from most other living organisms, and Küchenmeister, through his religious beliefs and his views on every creature having a place in the natural order, denied the ideas of his time and proved that all parasites are a part of active evolutionary niches and not biological dead ends by conducting morally ambiguous experiments on prisoners. Lankester is given a specific focus and repeated discussion throughout the book due to his belief that parasites are examples of degenerative evolution, especially in regards to Sacculina, and Zimmer's repeated refutation of this idea.
Several chapters are taken to
Document 4:::
Parasitology is the study of parasites, their hosts, and the relationship between them. As a biological discipline, the scope of parasitology is not determined by the organism or environment in question but by their way of life. This means it forms a synthesis of other disciplines, and draws on techniques from fields such as cell biology, bioinformatics, biochemistry, molecular biology, immunology, genetics, evolution and ecology.
Fields
The study of these diverse organisms means that the subject is often broken up into simpler, more focused units, which use common techniques, even if they are not studying the same organisms or diseases. Much research in parasitology falls somewhere between two or more of these definitions. In general, the study of prokaryotes falls under the field of bacteriology rather than parasitology.
Medical
The parasitologist F. E. G. Cox noted that "Humans are hosts to nearly 300 species of parasitic worms and over 70 species of protozoa, some derived from our primate ancestors and some acquired from the animals we have domesticated or come in contact with during our relatively short history on Earth".
One of the largest fields in parasitology, medical parasitology is the subject that deals with the parasites that infect humans, the diseases caused by them, clinical picture and the response generated by humans against them. It is also concerned with the various methods of their diagnosis, treatment and finally their prevention & control.
A parasite is an organism that live on or within another organism called the host.
These include organisms such as:
Plasmodium spp., the protozoan parasite which causes malaria. The four species infective to humans are P. falciparum, P. malariae, P. vivax and P. ovale.
Leishmania, unicellular organisms which cause leishmaniasis
Entamoeba and Giardia, which cause intestinal infections (dysentery and diarrhoea)
Multicellular organisms and intestinal worms (helminths) such as Schistosoma spp., Wuchereri
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Many parasites have complex life cycles involving multiple ?
A. hosts
B. features
C. viruses
D. diseases
Answer:
|
|
sciq-702
|
multiple_choice
|
In harmonic motion there is always what force, which acts in the opposite direction of the velocity?
|
[
"magnetic force",
"locomotion force",
"inorganic force",
"restorative force"
] |
D
|
Relevant Documents:
Document 0:::
<noinclude>
Physics education research (PER) is a form of discipline-based education research specifically related to the study of the teaching and learning of physics, often with the aim of improving the effectiveness of student learning. PER draws from other disciplines, such as sociology, cognitive science, education and linguistics, and complements them by reflecting the disciplinary knowledge and practices of physics. Approximately eighty-five institutions in the United States conduct research in science and physics education.
Goals
One primary goal of PER is to develop pedagogical techniques and strategies that will help students learn physics more effectively and help instructors to implement these techniques. Because even basic ideas in physics can be confusing, and because scientific misconceptions can form from teaching through analogies, lecturing often does not erase common misconceptions about physics that students acquire before they are taught physics. Research often focuses on learning more about common misconceptions that students bring to the physics classroom so that techniques can be devised to help students overcome these misconceptions.
In most introductory physics courses, mechanics is usually the first area of physics that is taught. Newton's laws of motion about interactions between forces and objects are central to the study of mechanics. Many students hold the Aristotelian misconception that a net force is required to keep a body moving; instead, motion is modeled in modern physics with Newton's first law of inertia, stating that a body will keep its state of rest or movement unless a net force acts on the body. Like students who hold this misconception, Newton arrived at his three laws of motion through empirical analysis, although he did it with an extensive study of data that included astronomical observations. Students can erase such a misconception in a nearly frictionless environment, where they find that
Document 1:::
Linear motion, also called rectilinear motion, is one-dimensional motion along a straight line, and can therefore be described mathematically using only one spatial dimension. The linear motion can be of two types: uniform linear motion, with constant velocity (zero acceleration); and non-uniform linear motion, with variable velocity (non-zero acceleration). The motion of a particle (a point-like object) along a line can be described by its position , which varies with (time). An example of linear motion is an athlete running a 100-meter dash along a straight track.
Linear motion is the most basic of all motion. According to Newton's first law of motion, objects that do not experience any net force will continue to move in a straight line with a constant velocity until they are subjected to a net force. Under everyday circumstances, external forces such as gravity and friction can cause an object to change the direction of its motion, so that its motion cannot be described as linear.
One may compare linear motion to general motion. In general motion, a particle's position and velocity are described by vectors, which have a magnitude and direction. In linear motion, the directions of all the vectors describing the system are equal and constant which means the objects move along the same axis and do not change direction. The analysis of such systems may therefore be simplified by neglecting the direction components of the vectors involved and dealing only with the magnitude.
Background
Displacement
The motion in which all the particles of a body move through the same distance in the same time is called translatory motion. There are two types of translatory motions: rectilinear motion; curvilinear motion. Since linear motion is a motion in a single dimension, the distance traveled by an object in a particular direction is the same as displacement. The SI unit of displacement is the metre. If x1 is the initial position of an object and x2 is the final position, then mat
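A minimal sketch (not from the source text) of the quantities just introduced, using the constant-acceleration description of straight-line motion; the numbers are arbitrary example values.

def displacement(x1: float, x2: float) -> float:
    # Displacement is the change in position along the line of motion
    return x2 - x1

def position_after(x0: float, v0: float, a: float, t: float) -> float:
    # x(t) = x0 + v0*t + 0.5*a*t**2 for constant acceleration a
    return x0 + v0 * t + 0.5 * a * t * t

# A runner accelerating at 2 m/s^2 from rest for 5 s
x_final = position_after(0.0, 0.0, 2.0, 5.0)
print(displacement(0.0, x_final))   # 25.0 metres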
Document 2:::
Dynamics is the branch of classical mechanics that is concerned with the study of forces and their effects on motion. Isaac Newton was the first to formulate the fundamental physical laws that govern dynamics in classical non-relativistic physics, especially his second law of motion.
Principles
Generally speaking, researchers involved in dynamics study how a physical system might develop or alter over time and study the causes of those changes. In addition, Newton established the fundamental physical laws which govern dynamics in physics. By studying his system of mechanics, dynamics can be understood. In particular, dynamics is mostly related to Newton's second law of motion. However, all three laws of motion are taken into account because these are interrelated in any given observation or experiment.
Linear and rotational dynamics
The study of dynamics falls under two categories: linear and rotational. Linear dynamics pertains to objects moving in a line and involves such quantities as force, mass/inertia, displacement (in units of distance), velocity (distance per unit time), acceleration (distance per unit of time squared) and momentum (mass times unit of velocity). Rotational dynamics pertains to objects that are rotating or moving in a curved path and involves such quantities as torque, moment of inertia/rotational inertia, angular displacement (in radians or less often, degrees), angular velocity (radians per unit time), angular acceleration (radians per unit of time squared) and angular momentum (moment of inertia times unit of angular velocity). Very often, objects exhibit linear and rotational motion.
For classical electromagnetism, Maxwell's equations describe the kinematics. The dynamics of classical systems involving both mechanics and electromagnetism are described by the combination of Newton's laws, Maxwell's equations, and the Lorentz force.
Force
From Newton, force can be defined as an exertion or pressure which can cause an object to ac
Document 3:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
Document 4:::
The Quarterly Journal of Mechanics and Applied Mathematics is a quarterly, peer-reviewed scientific journal covering research on classical mechanics and applied mathematics. The editors-in-chief are P. W. Duck, P. A. Martin and N. V. Movchan. The journal was established in 1948 to meet a need for a separate English journal that publishes articles focusing on classical mechanics only, in particular, including fluid mechanics and solid mechanics, that were usually published in journals like Proceedings of the Royal Society and Philosophical Transactions of the Royal Society.
Abstracting and indexing
The journal is abstracted and indexed in,
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In harmonic motion there is always what force, which acts in the opposite direction of the velocity?
A. magnetic force
B. locomotion force
C. inorganic force
D. restorative force
Answer:
|
|
sciq-6862
|
multiple_choice
|
How many alleles control a characteristic?
|
[
"4",
"8",
"2",
"3"
] |
C
|
Relevant Documents:
Document 0:::
The Generalist Genes hypothesis of learning abilities and disabilities was originally coined in an article by Plomin & Kovas (2005).
The Generalist Genes hypothesis suggests that most genes associated with common learning disabilities and abilities are generalist in three ways.
Firstly, the same genes that influence common learning abilities (e.g., high reading aptitude) are also responsible for common learning disabilities (e.g., reading disability): they are strongly genetically correlated.
Secondly, many of the genes associated with one aspect of a learning disability (e.g., vocabulary problems) also influence other aspects of this learning disability (e.g., grammar problems).
Thirdly, genes that influence one learning disability (e.g., reading disability) are largely the same as those that influence other learning disabilities (e.g., mathematics disability).
The Generalist Genes hypothesis has important implications for education, cognitive sciences and molecular genetics.
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Research on the heritability of IQ inquires into the degree of variation in IQ within a population that is due to genetic variation between individuals in that population. There has been significant controversy in the academic community about the heritability of IQ since research on the issue began in the late nineteenth century. Intelligence in the normal range is a polygenic trait, meaning that it is influenced by more than one gene; in the case of intelligence, by at least 500 genes. Further, explaining the similarity in IQ of closely related persons requires careful study because environmental factors may be correlated with genetic factors.
Early twin studies of adult individuals have found a heritability of IQ between 57% and 73%, with some recent studies showing heritability for IQ as high as 80%. IQ goes from being weakly correlated with genetics for children, to being strongly correlated with genetics for late teens and adults. The heritability of IQ increases with the child's age and reaches a plateau at 14-16 years old, continuing at that level well into adulthood. However, poor prenatal environment, malnutrition and disease are known to have lifelong deleterious effects.
Although IQ differences between individuals have been shown to have a large hereditary component, it does not follow that disparities in IQ between groups have a genetic basis. The scientific consensus is that genetics does not explain average differences in IQ test performance between racial groups.
Heritability and caveats
Heritability is a statistic used in the fields of breeding and genetics that estimates the degree of variation in a phenotypic trait in a population that is due to genetic variation between individuals in that population. The concept of heritability can be expressed in the form of the following question: "What is the proportion of the variation in a given trait within a population that is not explained by the environment or random chance?"
Estimates of heritabi
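For orientation (a textbook definition added here, not quoted from the excerpt), broad-sense heritability is usually written as the ratio of genetic to total phenotypic variance, under the simplifying assumption that genetic and environmental effects neither interact nor correlate:

H^{2} = \frac{\mathrm{Var}(G)}{\mathrm{Var}(P)}, \qquad \mathrm{Var}(P) = \mathrm{Var}(G) + \mathrm{Var}(E)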
Document 3:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591.
On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 4:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
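A small sketch (my own illustration, using a hypothetical three-skill domain) of how a family of feasible knowledge states can be represented and checked for the union-closure property that characterizes a knowledge space.

from itertools import combinations

# Hypothetical domain Q of three skills and a candidate family of feasible states
Q = frozenset({"a", "b", "c"})
states = {frozenset(), frozenset({"a"}), frozenset({"a", "b"}),
          frozenset({"a", "c"}), Q}

def is_knowledge_space(domain: frozenset, family: set) -> bool:
    # A knowledge space contains the empty state and the full domain,
    # and the union of any two feasible states is again feasible.
    if frozenset() not in family or domain not in family:
        return False
    return all((s | t) in family for s, t in combinations(family, 2))

print(is_knowledge_space(Q, states))   # True for this example family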
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How many alleles control a characteristic?
A. 4
B. 8
C. 2
D. 3
Answer:
|
|
sciq-4705
|
multiple_choice
|
The other ammonium ions are changed into nitrogen gas by what?
|
[
"denitrifying bacteria",
"fungi",
"fluctuations bacteria",
"accompanying bacteria"
] |
A
|
Relevant Documents:
Document 0:::
In chemistry, ammonolysis (/am·mo·nol·y·sis/) is the process of splitting ammonia into NH2− + H+. Ammonolysis reactions can be conducted with organic compounds to produce amines (molecules containing a nitrogen atom with a lone pair, :N), or with inorganic compounds to produce nitrides. This reaction is analogous to hydrolysis in which water molecules are split. Similar to water, liquid ammonia also undergoes auto-ionization, 2 NH3 ⇌ NH4+ + NH2−, where the equilibrium constant is k = 1.9 × 10−38.
Organic compounds such as alkyl halides, hydroxyls (hydroxyl nitriles and carbohydrates), carbonyl (aldehydes/ketones/esters/alcohols), and sulfur (sulfonyl derivatives) can all undergo ammonolysis in liquid ammonia.
Organic synthesis
Mechanism: ammonolysis of esters
This mechanism is similar to the hydrolysis of esters: the ammonia attacks the electrophilic carbonyl carbon, forming a tetrahedral intermediate. Re-formation of the C=O double bond ejects the alkoxide leaving group. The alkoxide then deprotonates the nitrogen, giving an alcohol and an amide as products.
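For orientation, the net outcome of the mechanism just described can be summarized by the generic overall equation below (added here for illustration; it is not part of the excerpt):

RCOOR' + NH3 → RCONH2 + R'OH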
Of haloalkanes
On heating a haloalkane and concentrated ammonia in a sealed tube with ethanol, a series of amines are formed along with their salts. The tertiary amine is usually the major product.
NH3 → RNH2 → R2NH → R3N → R4N+ (each step proceeding by reaction with the haloalkane RX)
This is known as Hofmann's ammonolysis.
Of alcohols
Alcohols can also undergo ammonolysis when in the presence of ammonia. An example is the conversion of phenol to aniline, catalyzed by stannic chloride.
ROH + NH3 → RNH2 + H2O (with SnCl4 as catalyst)
Document 1:::
Ammonia solution, also known as ammonia water, ammonium hydroxide, ammoniacal liquor, ammonia liquor, aqua ammonia, aqueous ammonia, or (inaccurately) ammonia, is a solution of ammonia in water. It can be denoted by the symbols NH3(aq). Although the name ammonium hydroxide suggests an alkali with the composition [NH4+][OH−], it is actually impossible to isolate samples of NH4OH. The ions NH4+ and OH− do not account for a significant fraction of the total amount of ammonia except in extremely dilute solutions.
Basicity of ammonia in water
In aqueous solution, ammonia deprotonates a small fraction of the water to give ammonium and hydroxide according to the following equilibrium:
NH3 + H2O ⇌ NH4+ + OH−.
In a 1 M ammonia solution, about 0.42% of the ammonia is converted to ammonium, equivalent to pH = 11.63
because [NH4+] = 0.0042 M, [OH−] = 0.0042 M, [NH3] = 0.9958 M, and pH = 14 + log10[OH−] = 11.62. The base ionization constant is
Kb = [NH4+][OH−]/[NH3] = 1.77 × 10−5.
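A short numerical check (my own sketch, using the Kb value quoted above and neglecting the autoionization of water) reproduces the figures in this paragraph.

import math

KB = 1.77e-5        # base ionization constant of ammonia, as quoted above
C0 = 1.0            # total ammonia concentration in mol/L

# Solve x^2 / (C0 - x) = KB for x = [NH4+] = [OH-]
x = (-KB + math.sqrt(KB * KB + 4.0 * KB * C0)) / 2.0
print(round(100.0 * x / C0, 2))         # about 0.42 percent converted to ammonium
print(round(14.0 + math.log10(x), 2))   # pH close to 11.6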
Saturated solutions
Like other gases, ammonia exhibits decreasing solubility in solvent liquids as the temperature of the solvent increases. Ammonia solutions decrease in density as the concentration of dissolved ammonia increases. At , the density of a saturated solution is 0.88 g/ml and contains 35.6% ammonia by mass, 308 grams of ammonia per litre of solution, and has a molarity of approximately 18 mol/L. At higher temperatures, the molarity of the saturated solution decreases and the density increases. Upon warming of saturated solutions, ammonia gas is released.
Applications
In contrast to anhydrous ammonia, aqueous ammonia finds few non-niche uses outside of cleaning agents.
Household cleaner
Diluted (1–3%) ammonia is also an ingredient of numerous cleaning agents, including many window cleaning formulas. Because aqueous ammonia is a gas dissolved in water, as the water evaporates from a window, the gas evaporates also, leaving the window streak-free.
In addition to use as an ingredient in cleansers with other cleansing ingredients,
Document 2:::
Dissimilatory nitrate reduction to ammonium (DNRA), also known as nitrate/nitrite ammonification, is the result of anaerobic respiration by chemoorganoheterotrophic microbes using nitrate (NO3−) as an electron acceptor for respiration. In anaerobic conditions microbes which undertake DNRA oxidise organic matter and use nitrate (rather than oxygen) as an electron acceptor, reducing it to nitrite, then ammonium (NO3−→NO2−→NH4+).
Dissimilatory nitrate reduction to ammonium is more common in prokaryotes but may also occur in eukaryotic microorganisms. DNRA is a component of the terrestrial and oceanic nitrogen cycle. Unlike denitrification, it acts to conserve bioavailable nitrogen in the system, producing soluble ammonium rather than unreactive dinitrogen gas.
Background and process
Cellular process
Dissimilatory nitrate reduction to ammonium is a two step process, reducing NO3− to NO2− then NO2− to NH4+, though the reaction may begin with NO2− directly. Each step is mediated by a different enzyme, the first step of dissimilatory nitrate reduction to ammonium is usually mediated by a periplasmic nitrate reductase. The second step (respiratory NO2− reduction to NH4+) is mediated by cytochrome c nitrite reductase, occurring at the periplasmic membrane surface. Despite DNRA not producing N2O as an intermediate during nitrate reduction (as denitrification does) N2O may still be released as a byproduct, thus DNRA may also act as a sink of fixed, bioavailable nitrogen. DNRA's production of N2O may be enhanced at higher pH levels.
Denitrification
Dissimilatory nitrate reduction to ammonium is similar to the process of denitrification, though NO2− is reduced farther to NH4+ rather than to N2, transferring eight electrons. Both denitrifiers and nitrate ammonifiers are competing for NO3− in the environment. Despite the redox potential of dissimilatory nitrate reduction to ammonium being lower than denitrification and producing less Gibbs free energy, energy yield of denitr
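For reference, the two reduction steps and their electron count can be written as the standard half-reactions below (added here for illustration; they are not quoted from the excerpt):

NO3− + 2 H+ + 2 e− → NO2− + H2O
NO2− + 8 H+ + 6 e− → NH4+ + 2 H2O
Overall: NO3− + 10 H+ + 8 e− → NH4+ + 3 H2O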
Document 3:::
Reactive nitrogen ("Nr"), also known as fixed nitrogen, refers to all forms of nitrogen present in the environment except for molecular nitrogen (N2). While nitrogen is an essential element for life on Earth, molecular nitrogen is comparatively unreactive, and must be converted to other chemical forms via nitrogen fixation before it can be used for growth. Common Nr species include nitrogen oxides (NOx), ammonia (NH3), nitrous oxide (N2O), as well as the anion nitrate (NO3−).
Biologically, nitrogen is "fixed" mainly by the microbes (e.g., Bacteria and Archaea) of the soil, which fix N2 into mainly NH3 but also other reduced species. Legumes, a type of plant in the Fabaceae family, are symbionts to some of these microbes that fix N2. NH3 is a building block of amino acids and proteins, among other things essential for life. However, just over half of all reactive nitrogen entering the biosphere is attributable to anthropogenic activity such as industrial fertilizer production. While reactive nitrogen is eventually converted back into molecular nitrogen via denitrification, an excess of reactive nitrogen can lead to problems such as eutrophication in marine ecosystems.
Reactive nitrogen compounds
In the environmental context, reactive nitrogen compounds include the following classes:
oxide gases: nitric oxide, nitrogen dioxide, nitrous oxide. Containing oxidized nitrogen, mainly the result of industrial processes and internal combustion engines.
anions: nitrate, nitrite. Nitrate is a common component of fertilizers, e.g. ammonium nitrate.
amine derivatives: ammonia and ammonium salts, urea. Containing reduced nitrogen, these compounds are components of fertilizers.
All of these compounds enter into the nitrogen cycle.
As a consequence, an excess of Nr can affect the environment relatively quickly. This also means that nitrogen-related problems need to be looked at in an integrated manner.
See also
Human impact on the nitrogen cycle
Document 4:::
Nitrous acid (molecular formula HNO2) is a weak and monoprotic acid known only in solution, in the gas phase, and in the form of nitrite (NO2−) salts. Nitrous acid is used to make diazonium salts from amines. The resulting diazonium salts are reagents in azo coupling reactions to give azo dyes.
Structure
In the gas phase, the planar nitrous acid molecule can adopt both a syn and an anti form. The anti form predominates at room temperature, and IR measurements indicate it is more stable by around 2.3 kJ/mol.
Preparation
Nitrous acid is usually generated by acidification of aqueous solutions of sodium nitrite with a mineral acid. The acidification is usually conducted at ice temperatures, and the HNO2 is consumed in situ. Free nitrous acid is unstable and decomposes rapidly.
Nitrous acid can also be produced by dissolving dinitrogen trioxide in water according to the equation
N2O3 + H2O → 2 HNO2
Reactions
Nitrous acid is the main chemphore in the Liebermann reagent, used to spot-test for alkaloids.
Decomposition
Gaseous nitrous acid, which is rarely encountered, decomposes into nitrogen dioxide, nitric oxide, and water:
2 HNO2 → NO2 + NO + H2O
Nitrogen dioxide disproportionates into nitric acid and nitrous acid in aqueous solution:
2 NO2 + H2O → HNO3 + HNO2
In warm or concentrated solutions, the overall reaction amounts to production of nitric acid, water, and nitric oxide:
3 HNO2 → HNO3 + 2 NO + H2O
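This overall equation can be recovered by simple bookkeeping (an added note): take the decomposition step twice, add the disproportionation step once, and cancel the NO2, H2O, and HNO2 that appear on both sides:

2 × (2 HNO2 → NO2 + NO + H2O)
+ (2 NO2 + H2O → HNO3 + HNO2)
gives 3 HNO2 → HNO3 + 2 NO + H2O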
The nitric oxide can subsequently be re-oxidized by air to nitric acid, making the overall reaction:
2 HNO2 + O2 → 2 HNO3
Reduction
With I− and Fe2+ ions, NO is formed:
2 HNO2 + 2 KI + H2SO4 → I2 + 2 NO + 2 H2O + K2SO4
2 HNO2 + 2 FeSO4 + H2SO4 → Fe2(SO4)3 + 2 NO + 2 H2O
With Sn2+ ions, N2O is formed:
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The other ammonium ions are changed into nitrogen gas by what?
A. denitrifying bacteria
B. fungi
C. fluctuations bacteria
D. accompanying bacteria
Answer:
|
|
sciq-5280
|
multiple_choice
|
What do rearrangements of chromosomes contribute to the emergence of?
|
[
"extinction",
"new species",
"clones",
"fewer species"
] |
B
|
Relevant Documents:
Document 0:::
The Bateson Lecture is an annual genetics lecture held as a part of the John Innes Symposium since 1972, in honour of the first Director of the John Innes Centre, William Bateson.
Past Lecturers
Source: John Innes Centre
1951 Sir Ronald Fisher - "Statistical methods in Genetics"
1953 Julian Huxley - "Polymorphic variation: a problem in genetical natural history"
1955 Sidney C. Harland - "Plant breeding: present position and future perspective"
1957 J.B.S. Haldane - "The theory of evolution before and after Bateson"
1959 Kenneth Mather - "Genetics Pure and Applied"
1972 William Hayes - "Molecular genetics in retrospect"
1974 Guido Pontecorvo - "Alternatives to sex: genetics by means of somatic cells"
1976 Max F. Perutz - "Mechanism of respiratory haemoglobin"
1979 J. Heslop-Harrison - "The forgotten generation: some thoughts on the genetics and physiology of Angiosperm Gametophytes "
1982 Sydney Brenner - "Molecular genetics in prospect"
1984 W.W. Franke - "The cytoskeleton - the insoluble architectural framework of the cell"
1986 Arthur Kornberg - "Enzyme systems initiating replication at the origin of the E. coli chromosome"
1988 Gottfried Schatz - "Interaction between mitochondria and the nucleus"
1990 Christiane Nusslein-Volhard - "Axis determination in the Drosophila embryo"
1992 Frank Stahl - "Genetic recombination: thinking about it in phage and fungi"
1994 Ira Herskowitz - "Violins and orchestras: what a unicellular organism can do"
1996 R.J.P. Williams - "An Introduction to Protein Machines"
1999 Eugene Nester - "DNA and Protein Transfer from Bacteria to Eukaryotes - the Agrobacterium story"
2001 David Botstein - "Extracting biological information from DNA Microarray Data"
2002 Elliot Meyerowitz
2003 Thomas Steitz - "The Macromolecular machines of gene expression"
2008 Sean Carroll - "Endless flies most beautiful: the role of cis-regulatory sequences in the evolution of animal form"
2009 Sir Paul Nurse - "Genetic transmission through
Document 1:::
In evolutionary biology, megatrajectories are the major evolutionary milestones and directions in the evolution of life.
Posited by A. H. Knoll and Richard K. Bambach in their 2000 collaboration, "Directionality in the History of Life," Knoll and Bambach argue that, in consideration of the problem of progress in evolutionary history, a middle road that encompasses both contingent and convergent features of biological evolution may be attainable through the idea of the megatrajectory:
We believe that six broad megatrajectories capture the essence of vectoral change in the history of life. The megatrajectories form a logical sequence dictated by the necessity for complexity level N to exist before N+1 can evolve... In the view offered here, each megatrajectory adds new and qualitatively distinct dimensions to the way life utilizes ecospace.
According to Knoll and Bambach, the six megatrajectories outlined by biological evolution thus far are:
the origin of life to the "Last Common Ancestor"
prokaryote diversification
unicellular eukaryote diversification
multicellular organisms
land organisms
appearance of intelligence and technology
Milan M. Ćirković and Robert Bradbury have taken the megatrajectory concept one step further by theorizing that a seventh megatrajectory exists: postbiological evolution triggered by the emergence of artificial intelligence at least equivalent to the biologically-evolved one, as well as the invention of several key technologies of a similar level of complexity and environmental impact, such as molecular nanoassembling or stellar uplifting.
See also
Intelligence principle
Document 2:::
Paleopolyploidy is the result of genome duplications which occurred at least several million years ago (MYA). Such an event could either double the genome of a single species (autopolyploidy) or combine those of two species (allopolyploidy). Because of functional redundancy, genes are rapidly silenced or lost from the duplicated genomes. Most paleopolyploids, through evolutionary time, have lost their polyploid status through a process called diploidization, and are currently considered diploids, e.g., baker's yeast, Arabidopsis thaliana, and perhaps humans.
Paleopolyploidy is extensively studied in plant lineages. It has been found that almost all flowering plants have undergone at least one round of genome duplication at some point during their evolutionary history. Ancient genome duplications are also found in the early ancestor of vertebrates (which includes the human lineage) near the origin of the bony fishes, and another in the stem lineage of teleost fishes. Evidence suggests that baker's yeast (Saccharomyces cerevisiae), which has a compact genome, experienced polyploidization during its evolutionary history.
The term mesopolyploid is sometimes used for species that have undergone whole genome multiplication events (whole genome duplication, whole genome triplification, etc.) in more recent history, such as within the last 17 million years.
Eukaryotes
Ancient genome duplications are widespread throughout eukaryotic lineages, particularly in plants. Studies suggest that the common ancestor of Poaceae, the grass family which includes important crop species such as maize, rice, wheat, and sugar cane, shared a whole genome duplication about . In more ancient monocot lineages one or likely multiple rounds of additional whole genome duplications had occurred, which were however not shared with the ancestral eudicots. Further independent more recent whole genome duplications have occurred in the lineages leading to maize, sugar cane and wheat, but not ric
Document 3:::
In biology, polymorphism is the occurrence of two or more clearly different forms or phenotypes in a population of a species. Different types of polymorphism have been identified and are listed separately.
General
Chromosomal polymorphism
In 1973, M. J. D. White, then at the end of a long career investigating karyotypes, gave an interesting summary of the distribution of chromosome polymorphism.
"It is extremely difficult to get an adequate idea as to what fraction of the species of eukaryote organisms actually are polymorphic for structural rearrangements of the chromosomes. In Dipterous flies with polytene chromosomes... the figure is somewhere between 60 and 80 percent... In grasshoppers pericentric inversion polymorphism is shown by only a small number of species. But in this group polymorphism for super-numerary chromosomes and chromosome regions is very strongly developed in many species."
"It is clear that the nature of natural populations is a very complicated subject, and it now appears probable that adaptation of the various genotypes to different ecological niches and frequency-dependent selection are at least as important, and probably more important in many cases, than simple heterosis (in the sense of increased viability or fecundity of the heterozygote)".
This suggests, once again, that polymorphism is a common and important aspect of adaptive evolution in natural populations.
Sexual dimorphism
Humans
Human blood groups
All the common blood types, such as the ABO blood group system, are genetic polymorphisms. Here we see a system where there are more than two morphs: the phenotypes A, B, AB and O are present in all human populations, but vary in proportion in different parts of the world. The phenotypes are controlled by multiple alleles at one locus. These polymorphisms are seemingly never eliminated by natural selection; the reason came from a study of disease statistics.
Statistical research has shown that an individual of a given phenot
Document 4:::
Plant evolution is the subset of evolutionary phenomena that concern plants. Evolutionary phenomena are characteristics of populations that are described by averages, medians, distributions, and other statistical methods. This distinguishes plant evolution from plant development, a branch of developmental biology which concerns the changes that individuals go through in their lives. The study of plant evolution attempts to explain how the present diversity of plants arose over geologic time. It includes the study of genetic change and the consequent variation that often results in speciation, one of the most important types of radiation into taxonomic groups called clades. A description of radiation is called a phylogeny and is often represented by type of diagram called a phylogenetic tree.
Evolutionary trends
Differences between plant and animal physiology and reproduction cause minor differences in how they evolve.
One major difference is the totipotent nature of plant cells, allowing them to reproduce asexually much more easily than most animals. They are also capable of polyploidy – where more than two chromosome sets are inherited from the parents. This allows relatively fast bursts of evolution to occur, for example by the effect of gene duplication. The long periods of dormancy that seed plants can employ also makes them less vulnerable to extinction, as they can "sit out" the tough periods and wait until more clement times to leap back to life.
The effect of these differences is most profoundly seen during extinction events. These events, which wiped out between 6 and 62% of terrestrial animal families, had "negligible" effect on plant families. However, the ecosystem structure is significantly rearranged, with the abundances and distributions of different groups of plants changing profoundly. These effects are perhaps due to the higher diversity within families, as extinction – which was common at the species level – was very selective. For example, win
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do rearrangements of chromosomes contribute to the emergence of?
A. extinction
B. new species
C. clones
D. fewer species
Answer:
|
|
sciq-9030
|
multiple_choice
|
Plant-like protists are autotrophs capable of what process?
|
[
"sexual reproduction",
"photosynthesis",
"regeneration",
"microevolution"
] |
B
|
Relevant Documents:
Document 0:::
Micropropagation or tissue culture is the practice of rapidly multiplying plant stock material to produce many progeny plants, using modern plant tissue culture methods.
Micropropagation is used to multiply a wide variety of plants, such as those that have been genetically modified or bred through conventional plant breeding methods. It is also used to provide a sufficient number of plantlets for planting from seedless plants, plants that do not respond well to vegetative reproduction or where micropropagation is the cheaper means of propagating (e.g. Orchids). Cornell University botanist Frederick Campion Steward discovered and pioneered micropropagation and plant tissue culture in the late 1950s and early 1960s.
Steps
In short, steps of micropropagation can be divided into four stages:
Selection of mother plant
Multiplication
Rooting and acclimatizing
Transfer new plant to soil
Selection of mother plant
Micropropagation begins with the selection of plant material to be propagated. The plant tissues are removed from an intact plant in a sterile condition. Clean stock materials that are free of viruses and fungi are important in the production of the healthiest plants. Once the plant material is chosen for culture, the collection of explant(s) begins and is dependent on the type of tissue to be used; including stem tips, anthers, petals, pollen and other plant tissues. The explant material is then surface sterilized, usually in multiple courses of bleach and alcohol washes, and finally rinsed in sterilized water. This small portion of plant tissue, sometimes only a single cell, is placed on a growth medium, typically containing Macro and micro nutrients, water, sucrose as an energy source and one or more plant growth regulators (plant hormones). Usually the medium is thickened with a gelling agent, such as agar, to create a gel which supports the explant during growth. Some plants are easily grown on simple media, but others require more complicated media f
Document 1:::
A plantoid is a robot or synthetic organism designed to look, act and grow like a plant. The concept was first scientifically published in 2010 (although models of comparable systems controlled by neural networks date back to 2003) and has so far remained largely theoretical. Plantoids imitate plants through appearances and mimicking behaviors and internal processes (which function to keep the plant alive or to ensure its survival). A prototype for the European Commission is now in development by a consortium of the following scientists: Dario Floreano, Barbara Mazzolai, Josep Samitier, Stefano Mancuso.
A plantoid incorporates an inherently distributed architecture consisting of autonomous and specialized modules. Modules can be modeled on plant parts such as the root cap and communicate to form a simple swarm intelligence. This kind of system may display great robustness and resilience. It is conjectured to be capable of energy harvesting and management, collective environmental awareness and many other functions.
In science fiction, while human-like robots (androids) are fairly frequent and animal-like biomorphic robots turn up occasionally, plantoids are quite rare. Exceptions occur in the novel Hearts, Hands and Voices (1992, US: The Broken Land) by Ian McDonald and the TV series Jikuu Senshi Spielban.
Systems and Processes
Like plants, a plantoid positions its roots and appendages (projecting parts of the plantoid) towards beneficial conditions that stimulate growth (i.e. sunlight, ideal temperatures, areas with higher water concentration) and away from factors that bar growth. This occurs through a combination of information from its sensors and the plantoid reacting accordingly.
Sensors
The use of soft tactile sensors (devices that gather information based on the surrounding physical environment) allows the plantoid to navigate its way through its environment. These sensors relay information to the plantoid and produce signals, similar to how a computer ca
Document 2:::
Phytomorphology is the study of the physical form and external structure of plants. This is usually considered distinct from plant anatomy, which is the study of the internal structure of plants, especially at the microscopic level. Plant morphology is useful in the visual identification of plants. Recent studies in molecular biology started to investigate the molecular processes involved in determining the conservation and diversification of plant morphologies. In these studies transcriptome conservation patterns were found to mark crucial ontogenetic transitions during the plant life cycle which may result in evolutionary constraints limiting diversification.
Scope
Plant morphology "represents a study of the development, form, and structure of plants, and, by implication, an attempt to interpret these on the basis of similarity of plan and origin". There are four major areas of investigation in plant morphology, and each overlaps with another field of the biological sciences.
First of all, morphology is comparative, meaning that the morphologist examines structures in many different plants of the same or different species, then draws comparisons and formulates ideas about similarities. When structures in different species are believed to exist and develop as a result of common, inherited genetic pathways, those structures are termed homologous. For example, the leaves of pine, oak, and cabbage all look very different, but share certain basic structures and arrangement of parts. The homology of leaves is an easy conclusion to make. The plant morphologist goes further, and discovers that the spines of cactus also share the same basic structure and development as leaves in other plants, and therefore cactus spines are homologous to leaves as well. This aspect of plant morphology overlaps with the study of plant evolution and paleobotany.
Secondly, plant morphology observes both the vegetative (somatic) structures of plants, as well as the reproductive str
Document 3:::
Biological processes are those processes that are vital for an organism to live, and that shape its capacities for interacting with its environment. Biological processes are made of many chemical reactions or other events that are involved in the persistence and transformation of life forms. Metabolism and homeostasis are examples.
Biological processes within an organism can also work as bioindicators. Scientists are able to look at an individual's biological processes to monitor the effects of environmental changes.
Regulation of biological processes occurs when any process is modulated in its frequency, rate or extent. Biological processes are regulated by many means; examples include the control of gene expression, protein modification or interaction with a protein or substrate molecule.
Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature
Organization: being structurally composed of one or more cells – the basic units of life
Metabolism: transformation of energy by converting chemicals and energy into cellular components (anabolism) and decomposing organic matter (catabolism). Living things require energy to maintain internal organization (homeostasis) and to produce the other phenomena associated with life.
Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size in all of its parts, rather than simply accumulating matter.
Response to stimuli: a response can take many forms, from the contraction of a unicellular organism to external chemicals, to complex reactions involving all the senses of multicellular organisms. A response is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism), and chemotaxis.
Reproduction: the ability to produce new individual organisms, either asexually from a single parent organism or sexually from two parent organisms.
Interaction between organisms: the processes
Document 4:::
Ecophysiology (from Greek , oikos, "house(hold)"; , physis, "nature, origin"; and , -logia), environmental physiology or physiological ecology is a biological discipline that studies the response of an organism's physiology to environmental conditions. It is closely related to comparative physiology and evolutionary physiology. Ernst Haeckel's coinage bionomy is sometimes employed as a synonym.
Plants
Plant ecophysiology is concerned largely with two topics: mechanisms (how plants sense and respond to environmental change) and scaling or integration (how the responses to highly variable conditions—for example, gradients from full sunlight to 95% shade within tree canopies—are coordinated with one another), and how their collective effect on plant growth and gas exchange can be understood on this basis.
In many cases, animals are able to escape unfavourable and changing environmental factors such as heat, cold, drought or floods, while plants are unable to move away and therefore must endure the adverse conditions or perish (animals go places, plants grow places). Plants are therefore phenotypically plastic and have an impressive array of genes that aid in acclimating to changing conditions. It is hypothesized that this large number of genes can be partly explained by plant species' need to live in a wider range of conditions.
Light
Light is the food of plants, i.e. the form of energy that plants use to build themselves and reproduce. The organs harvesting light in plants are leaves and the process through which light is converted into biomass is photosynthesis. The response of photosynthesis to light is called light response curve of net photosynthesis (PI curve). The shape is typically described by a non-rectangular hyperbola. Three quantities of the light response curve are particularly useful in characterising a plant's response to light intensities. The inclined asymptote has a positive slope representing the efficiency of light use, and is called quantum
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Plant-like protists are autotrophs capable of what process?
A. sexual reproduction
B. photosynthesis
C. regeneration
D. microevolution
Answer:
|
|
sciq-8177
|
multiple_choice
|
Which part of the brain regulates the rate of breathing?
|
[
"brain uptake",
"brain stem",
"brain stem",
"brain charge"
] |
B
|
Relevant Documents:
Document 0:::
The control of ventilation is the physiological mechanisms involved in the control of breathing, which is the movement of air into and out of the lungs. Ventilation facilitates respiration. Respiration refers to the utilization of oxygen and balancing of carbon dioxide by the body as a whole, or by individual cells in cellular respiration.
The most important function of breathing is the supplying of oxygen to the body and balancing of the carbon dioxide levels. Under most conditions, the partial pressure of carbon dioxide (PCO2), or concentration of carbon dioxide, controls the respiratory rate.
The peripheral chemoreceptors that detect changes in the levels of oxygen and carbon dioxide are located in the arterial aortic bodies and the carotid bodies. Central chemoreceptors are primarily sensitive to changes in the pH of the blood, (resulting from changes in the levels of carbon dioxide) and they are located on the medulla oblongata near to the medullar respiratory groups of the respiratory center.
Information from the peripheral chemoreceptors is conveyed along nerves to the respiratory groups of the respiratory center. There are four respiratory groups, two in the medulla and two in the pons. The two groups in the pons are known as the pontine respiratory group.
Dorsal respiratory group – in the medulla
Ventral respiratory group – in the medulla
Pneumotaxic center – various nuclei of the pons
Apneustic center – nucleus of the pons
From the respiratory center, the muscles of respiration, in particular the diaphragm, are activated to cause air to move in and out of the lungs.
Control of respiratory rhythm
Ventilatory pattern
Breathing is normally an unconscious, involuntary, automatic process. The pattern of motor stimuli during breathing can be divided into an inhalation stage and an exhalation stage. Inhalation shows a sudden, ramped increase in motor discharge to the respiratory muscles (and the pharyngeal constrictor muscles). Before the end of inh
Document 1:::
Cerebral autoregulation is a process in mammals that aims to maintain adequate and stable cerebral blood flow. While most systems of the body show some degree of autoregulation, the brain is very sensitive to over- and underperfusion. Cerebral autoregulation plays an important role in maintaining an appropriate blood flow to that region. Brain perfusion is essential for life, since the brain has a high metabolic demand. By means of cerebral autoregulation, the body is able to deliver sufficient blood containing oxygen and nutrients to the brain tissue for this metabolic need, and remove CO2 and other waste products.
Cerebral autoregulation refers to the physiological mechanisms that maintain blood flow at an appropriate level during changes in blood pressure. However, due to the important influences of arterial carbon dioxide levels, cerebral metabolic rate, neural activation, activity of the sympathetic nervous system, posture, as well as other physiological variables, cerebral autoregulation is often interpreted as encompassing the wider field of cerebral blood flow regulation. This field includes areas such as CO2 reactivity, neurovascular coupling and other aspects of cerebral haemodynamics.
This regulation of cerebral blood flow is achieved primarily by small arteries, arterioles, which either dilate or contract under the influence of multiple complex physiological control systems. Impairment of these systems may occur e.g. following stroke, trauma or anaesthesia, in premature babies and has been implicated in the development of subsequent brain injury. The non-invasive measurement of relevant physiological signals like cerebral blood flow, intracranial pressure, blood pressure, CO2 levels, cerebral oxygen consumption, etc. is challenging. Even more so is the subsequent assessment of the control systems. Much remains unknown about the physiology of blood flow control and the best clinical interventions to optimize patient outcome.
Physiological mechanisms
Th
Document 2:::
The preBötzinger complex, often abbreviated as preBötC, is a functionally and anatomically specialized site in the ventral-lateral region of the lower medulla oblongata (i.e., lower brainstem). The preBötC is part of the ventral respiratory group of respiratory related interneurons. Its foremost function is to generate the inspiratory breathing rhythm in mammals. In addition, the preBötC is widely and paucisynaptically connected to higher brain centers that regulate arousal and excitability more generally such that respiratory brain function is intimately connected with many other rhythmic and cognitive functions of the brain and central nervous system. Further, the preBötC receives mechanical sensory information from the airways that encode lung volume as well as pH, oxygen, and carbon dioxide content of circulating blood and the cerebrospinal fluid.
The preBötC is approximately colocated with the hypoglossal (XII) cranial motor nucleus as well as the ‘loop’ portion of the inferior olive in the anterior-posterior axis. The caudal border of the preBötC is slightly caudal to the obex, where the brainstem merges with the cervical spinal cord.
Discovery
The initial description of the preBötC was widely disseminated in a 1991 paper in Science, but its discovery predates that paper by one year. The team was led by Jack L. Feldman and Jeffrey C. Smith at the University of California, Los Angeles (UCLA), but the Science paper also included UCLA coauthor Howard Ellenberger, as well as Klaus Ballanyi and Diethelm W. Richter from Göttingen University in Germany. The region derives its name from a neighboring medullary region involved in expiratory breathing rhythm dubbed Bötzinger complex, which was named after the Silvaner (Bötzinger) variety of wine, featured at the conference at which that region was named (click here to hear a BBC interview with Jack Feldman on the topic of Bötzinger / preBötzinger nomenclature).
Functional definition of the preBötC
The first defini
Document 3:::
When we sleep, our breathing changes due to normal biological processes that affect both our respiratory and muscular systems.
Physiology
Sleep Onset
Breathing changes as we transition from wakefulness to sleep. These changes arise due to biological changes in the processes that regulate our breathing. When we fall asleep, minute ventilation (the amount of air that we breathe per minute) reduces due to decreased metabolism.
Non-REM (NREM) Sleep
During NREM sleep, we move through three sleep stages, with each progressively deeper than the last. As our sleep deepens, our minute ventilation continues to decrease, reducing by 13% in the second NREM stage and by 15% in the third. For example, a study of 19 healthy adults revealed that the minute ventilation in NREM sleep was 7.18 liters/minute compared to 7.66 liters/minute when awake.
Ribcage & Abdominal Muscle Contributions
Rib cage contribution to ventilation increases during NREM sleep, mostly by lateral movement, and is detected by an increase in EMG amplitude during breathing. Diaphragm activity is little increased or unchanged and abdominal muscle activity is slightly increased during these sleep stages.
Upper Airway Resistance
Airway resistance increases by about 230% during NREM sleep. Elastic and flow resistive properties of the lung do not change during NREM sleep. The increase in resistance comes primarily from the upper airway in the retro-epiglottic region. Tonic activity of the pharyngeal dilator muscles of the upper airway decreases during the NREM sleep, contributing to the increased resistance, which is reflected in increased esophageal pressure swings during sleep. The other ventilatory muscles compensate for the increased resistance, and so the airflow decreases much less than the increase in resistance.
Arterial Blood Gases
The arterial blood gases change: pCO2 increases by 3–7 mmHg, pO2 drops by 3–9 mmHg, and SaO2 drops by 2% or less. These changes occur despite a reduced metabolic rate, reflected by a
Document 4:::
The respiratory rate is the rate at which breathing occurs; it is set and controlled by the respiratory center of the brain. A person's respiratory rate is usually measured in breaths per minute.
Measurement
The respiratory rate in humans is measured by counting the number of breaths for one minute through counting how many times the chest rises. A fibre-optic breath rate sensor can be used for monitoring patients during a magnetic resonance imaging scan. Respiration rates may increase with fever, illness, or other medical conditions.
Inaccuracies in respiratory measurement have been reported in the literature. One study compared respiratory rates counted over a 90-second period with rates counted over a full minute, and found significant differences between the two. Another study found that rapid respiratory rates in babies, counted using a stethoscope, were 60–80% higher than those counted from beside the cot without the aid of the stethoscope. Similar results are seen with animals when they are being handled and not being handled—the invasiveness of touch apparently is enough to make significant changes in breathing.
Various other methods to measure respiratory rate are commonly used, including impedance pneumography, and capnography which are commonly implemented in patient monitoring. In addition, novel techniques for automatically monitoring respiratory rate using wearable sensors are in development, such as estimation of respiratory rate from the electrocardiogram, photoplethysmogram, or accelerometry signals.
Breathing rate is often interchanged with the term breathing frequency. However, this should not be considered the frequency of breathing, because a realistic breathing signal is composed of many frequencies.
Normal range
For humans, the typical respiratory rate for a healthy adult at rest is 12–15 breaths per minute. The respiratory center sets the quiet respiratory rhythm at around two seconds for an inhalation and three seconds for an exhalation. This gives the lower
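As an added arithmetic note, this rhythm is consistent with the quoted resting rate: a cycle of about 2 s inhalation plus 3 s exhalation is roughly 5 s per breath, and

60 s/min ÷ 5 s/breath = 12 breaths/min,

which corresponds to the lower end of the 12–15 breaths-per-minute range.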
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which part of the brain regulates the rate of breathing?
A. brain uptake
B. brain stem
C. brain stem
D. brain charge
Answer:
|
|
sciq-438
|
multiple_choice
|
What temperature scale is obtained by adding 273 degrees to the corresponding Celsius temperature?
|
[
"ph scale",
"kelvin scale",
"whittle scale",
"seismic scale"
] |
B
|
Relevant Documents:
Document 0:::
The degree Celsius is the unit of temperature on the Celsius scale (originally known as the centigrade scale outside Sweden), one of two temperature scales used in the International System of Units (SI), the other being the Kelvin scale. The degree Celsius (symbol: °C) can refer to a specific temperature on the Celsius scale or a unit to indicate a difference or range between two temperatures. It is named after the Swedish astronomer Anders Celsius (1701–1744), who developed a variant of it in 1742. The unit was called centigrade in several languages (from the Latin centum, which means 100, and gradus, which means steps) for many years. In 1948, the International Committee for Weights and Measures renamed it to honor Celsius and also to remove confusion with the term for one hundredth of a gradian in some languages. Most countries use this scale; the other major scale, Fahrenheit, is still used in the United States, some island territories, and Liberia. The Kelvin scale is of use in the sciences, with 0 K representing absolute zero.
Since 1743, the Celsius scale has been based on 0 °C for the freezing point of water and 100 °C for the boiling point of water at 1 atm pressure. Prior to 1743 the values were reversed (i.e. the boiling point was 0 degrees and the freezing point was 100 degrees). The 1743 scale reversal was proposed by Jean-Pierre Christin.
By international agreement, between 1954 and 2019 the unit and the Celsius scale were defined by absolute zero and the triple point of water. After 2007, it was clarified that this definition referred to Vienna Standard Mean Ocean Water (VSMOW), a precisely defined water standard. This definition also precisely related the Celsius scale to the scale of the kelvin, the SI base unit of thermodynamic temperature with symbol K. Absolute zero, the lowest temperature possible, is defined as being exactly 0 K and −273.15 °C. Until 19 May 2019, the temperature of the triple point of water was defined as exactly 273.16 K (0.01 °C).
On 20 May
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Temperature is a physical quantity that expresses quantitatively the attribute of hotness or coldness. Temperature is measured with a thermometer. It reflects the kinetic energy of the vibrating and colliding atoms making up a substance.
Thermometers are calibrated in various temperature scales that historically have relied on various reference points and thermometric substances for definition. The most common scales are the Celsius scale with the unit symbol °C (formerly called centigrade), the Fahrenheit scale (°F), and the Kelvin scale (K), the latter being used predominantly for scientific purposes. The kelvin is one of the seven base units in the International System of Units (SI).
Absolute zero, i.e., zero kelvin or −273.15 °C, is the lowest point in the thermodynamic temperature scale. Experimentally, it can be approached very closely but not actually reached, as recognized in the third law of thermodynamics. It would be impossible to extract energy as heat from a body at that temperature.
Temperature is important in all fields of natural science, including physics, chemistry, Earth science, astronomy, medicine, biology, ecology, material science, metallurgy, mechanical engineering and geography as well as most aspects of daily life.
Effects
Many physical processes are related to temperature; some of them are given below:
the physical properties of materials including the phase (solid, liquid, gaseous or plasma), density, solubility, vapor pressure, electrical conductivity, hardness, wear resistance, thermal conductivity, corrosion resistance, strength
the rate and extent to which chemical reactions occur
the amount and properties of thermal radiation emitted from the surface of an object
air temperature affects all living organisms
the speed of sound, which in a gas is proportional to the square root of the absolute temperature
Scales
Temperature scales need two values for definition: the point chosen as zero degrees and the magnitudes of the incr
Document 3:::
The kelvin, symbol K, is a unit of measurement for temperature. The Kelvin scale is an absolute scale, which is defined such that 0 K is absolute zero and a change of thermodynamic temperature by 1 kelvin corresponds to a change of thermal energy by exactly 1.380649 × 10−23 J. The Boltzmann constant was exactly defined in the 2019 redefinition of the SI base units such that the triple point of water is approximately 273.16 K. The kelvin is the base unit of temperature in the International System of Units (SI), used alongside its prefixed forms. It is named after the Belfast-born and University of Glasgow-based engineer and physicist William Thomson, 1st Baron Kelvin (1824–1907).
Historically, the Kelvin scale was developed from the Celsius scale, such that 273.15 K was 0 °C (the approximate melting point of ice) and a change of one kelvin was exactly equal to a change of one degree Celsius. This relationship remains accurate, but the Celsius, Fahrenheit, and Rankine scales are now defined in terms of the Kelvin scale. The kelvin is the primary unit of temperature for engineering and the physical sciences, while in most countries the Celsius scale remains the dominant scale outside of these fields. In the United States, outside of the physical sciences, the Fahrenheit scale predominates, with the kelvin or Rankine scale employed for absolute temperature.
History
Precursors
During the 18th century, multiple temperature scales were developed, notably Fahrenheit and centigrade (later Celsius). These scales predated much of the modern science of thermodynamics, including atomic theory and the kinetic theory of gases which underpin the concept of absolute zero. Instead, they chose defining points within the range of human experience that could be reproduced easily and with reasonable accuracy, but lacked any deep significance in thermal physics. In the case of the Celsius scale (and the long since defunct Newton scale and Réaumur scale) the melting point of water served as such a starting point, with Celsius be
Document 4:::
273 (two hundred [and] seventy-three) is the natural number following 272 and preceding 274.
273 is a sphenic number, a truncated triangular pyramid number and an idoneal number.
There are 273 different ternary trees with five nodes.
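Both statements are easy to verify (an added check, not part of the source): 273 = 3 × 7 × 13, a product of three distinct primes, which is what makes it sphenic; and the number of ternary trees with n nodes is the Fuss–Catalan number C(3n, n)/(2n + 1), which for n = 5 gives C(15, 5)/11 = 3003/11 = 273.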
In other fields
The zero of the Celsius temperature scale is (to the nearest whole number) 273 kelvins. Thus, absolute zero (0 K) is approximately −273 °C. The freezing temperature of water and the thermodynamic temperature of the triple point of water are both approximately 0 °C or 273 K.
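As a minimal illustration of the conversion these excerpts describe (an added sketch, not part of the source text), using the exact offset of 273.15 rather than the rounded 273:

# Minimal sketch: Celsius <-> Kelvin conversion.
# The exact offset is 273.15; the question below uses the rounded value 273.
def celsius_to_kelvin(t_celsius: float) -> float:
    return t_celsius + 273.15

def kelvin_to_celsius(t_kelvin: float) -> float:
    return t_kelvin - 273.15

print(celsius_to_kelvin(0.0))    # 273.15 (freezing point of water, in K)
print(kelvin_to_celsius(0.0))    # -273.15 (absolute zero, in degrees Celsius)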
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What temperature scale is obtained by adding 273 degrees to the corresponding Celsius temperature?
A. ph scale
B. kelvin scale
C. whittle scale
D. seismic scale
Answer:
|
|
sciq-11271
|
multiple_choice
|
An RNA codon reading GUU codes for what?
|
[
"arginine",
"carbon",
"glycine",
"valine"
] |
D
|
Relevant Documents:
Document 0:::
Genomic deoxyribonucleic acid (abbreviated as gDNA) is chromosomal DNA, in contrast to extra-chromosomal DNAs like plasmids. Most organisms have the same genomic DNA in every cell; however, only certain genes are active in each cell to allow for cell function and differentiation within the body.
The genome of an organism (encoded by the genomic DNA) is the (biological) information of heredity which is passed from one generation of organism to the next. That genome is transcribed to produce various RNAs, which are necessary for the function of the organism. Precursor mRNA (pre-mRNA) is transcribed by RNA polymerase II in the nucleus. pre-mRNA is then processed by splicing to remove introns, leaving the exons in the mature messenger RNA (mRNA). Additional processing includes the addition of a 5' cap and a poly(A) tail to the pre-mRNA. The mature mRNA may then be transported to the cytosol and translated by the ribosome into a protein. Other types of RNA include ribosomal RNA (rRNA) and transfer RNA (tRNA). These types are transcribed by RNA polymerase I and RNA polymerase III, respectively, and are essential for protein synthesis. However, 5S rRNA is the only rRNA that is transcribed by RNA polymerase III.
Document 1:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
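As a rough plausibility check of the quoted percentiles (an added sketch; it assumes the scaled scores are approximately normally distributed, which the source does not state):

# Where do scores of 760 and 320 fall under an assumed Normal(526, 95) model?
from statistics import NormalDist

score_dist = NormalDist(mu=526, sigma=95)
print(round(score_dist.cdf(760) * 100, 1))   # ~99.3, consistent with "99 percentile"
print(round(score_dist.cdf(320) * 100, 1))   # ~1.5, consistent with "1 percentile"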
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 2:::
The Actino-ugpB RNA motif is a conserved RNA structure that was discovered by bioinformatics.
Actino-ugpB motifs are found in strains of the species Gardnerella vaginalis, within the phylum Actinomycetota.
It is ambiguous whether Actino-ugpB RNAs function as cis-regulatory elements or whether they operate in trans. Many of the RNAs are upstream of the gene 'ugpB', which encodes a protein putatively involved in sugar transport. However, several of the RNAs are not located upstream of a protein-coding gene. Structurally, the motif consists of two hairpins with conserved nucleotides located in the stems and outside of the hairpins, but not in their terminal loops.
Document 3:::
This list of RNA structure prediction software is a compilation of software tools and web portals used for RNA structure prediction.
Single sequence secondary structure prediction.
Single sequence tertiary structure prediction
Comparative methods
The single sequence methods mentioned above have a difficult job detecting a small sample of reasonable secondary structures from a large space of possible structures. A good way to reduce the size of the space is to use evolutionary approaches. Structures that have been conserved by evolution are far more likely to be the functional form. The methods below use this approach.
RNA solvent accessibility prediction
Intermolecular interactions: RNA-RNA
Many ncRNAs function by binding to other RNAs. For example, miRNAs regulate protein coding gene expression by binding to 3' UTRs; small nucleolar RNAs guide post-transcriptional modifications by binding to rRNA; U4 spliceosomal RNA and U6 spliceosomal RNA bind to each other, forming part of the spliceosome; and many small bacterial RNAs regulate gene expression by antisense interactions, e.g. GcvB, OxyS and RyhB.
Intermolecular interactions: MicroRNA:any RNA
The below table includes interactions that are not limited to UTRs.
Intermolecular interactions: MicroRNA:UTR
MicroRNAs regulate protein coding gene expression by binding to 3' UTRs, there are tools specifically designed for predicting these interactions. For an evaluation of target prediction methods on high-throughput experimental data see (Baek et al., Nature 2008), (Alexiou et al., Bioinformatics 2009), or (Ritchie et al., Nature Methods 2009)
ncRNA gene prediction software
Family specific gene prediction software
RNA homology search software
Benchmarks
Alignment viewers, editors
Inverse folding, RNA design
Notes
Secondary structure viewers, editors
See also
RNA
Non-coding RNA
RNA structure
Comparison of nucleic acid simulation software
Comparison of software for molecular mechanics modeling
Document 4:::
A gene product is the biochemical material, either RNA or protein, resulting from expression of a gene. A measurement of the amount of gene product is sometimes used to infer how active a gene is. Abnormal amounts of gene product can be correlated with disease-causing alleles, such as the overactivity of oncogenes which can cause cancer.
A gene is defined as "a hereditary unit of DNA that is required to produce a functional product". Regulatory elements include:
Promoter region
TATA box
Polyadenylation sequences
Enhancers
These elements work in combination with the open reading frame to create a functional product. This product may be transcribed and be functional as RNA or is translated from mRNA to a protein to be functional in the cell.
RNA products
RNA molecules that do not code for any proteins still maintain a function in the cell. The function of the RNA depends on its classification. These roles include:
aiding protein synthesis
catalyzing reactions
regulating various processes.
Protein synthesis is aided by functional RNA molecules such as tRNA, which helps add the correct amino acid to a polypeptide chain during translation; rRNA, a major component of ribosomes (which guide protein synthesis); and mRNA, which carries the instructions for creating the protein product.
One type of functional RNA involved in regulation is microRNA (miRNA), which works by repressing translation. These miRNAs work by binding to a complementary target mRNA sequence to prevent translation from occurring. Short-interfering RNA (siRNA) also negatively regulate gene expression. These siRNA molecules work in the RNA-induced silencing complex (RISC) during RNA interference by binding to a complementary target mRNA sequence and directing its cleavage, thereby preventing expression of that specific mRNA.
Protein products
Proteins are the product of a gene that are formed from translation of a mature mRNA molecule. Proteins contain 4 elements in regards to their structure: primary, secondary, tertiary and quaternary.
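To make the codon-to-amino-acid step of translation concrete, here is a minimal lookup sketch (an added illustration; the tiny table covers only a few codons of the standard genetic code, including GUU):

# Minimal sketch: codon -> amino acid lookup for a handful of standard-code codons.
CODON_TABLE = {
    "GUU": "valine", "GUC": "valine", "GUA": "valine", "GUG": "valine",
    "GGU": "glycine",
    "CGU": "arginine",
    "AUG": "methionine (start)",
}

def translate_codon(codon: str) -> str:
    # Accepts RNA codons; DNA-style input (with T) is converted to RNA (with U).
    rna_codon = codon.upper().replace("T", "U")
    return CODON_TABLE.get(rna_codon, "not in this mini-table")

print(translate_codon("GUU"))   # valine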
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
An RNA codon reading GUU codes for what?
A. arginine
B. carbon
C. glycine
D. valine
Answer:
|
|
sciq-4050
|
multiple_choice
|
What hits the eardrum and causes it to vibrate?
|
[
"decibels",
"cilia",
"sound waves",
"microwaves"
] |
C
|
Relevant Documents:
Document 0:::
The endocochlear potential (EP; also called endolymphatic potential) is the positive voltage of 80–100 mV seen in the cochlear endolymphatic spaces. Within the cochlea the EP varies in magnitude all along its length. When a sound is presented, the endocochlear potential shifts in either the positive or negative direction in the endolymph, depending on the stimulus. The change in the potential is called the summating potential.
With the movement of the basilar membrane, a shear force is created and a small potential is generated due to the difference in potential between the endolymph (scala media, +80 mV) and the perilymph (vestibular and tympanic ducts, 0 mV). EP is highest in the basal turn of the cochlea (95 mV in mice) and decreases in magnitude towards the apex (87 mV). In the saccule and utricle the endolymphatic potential is about +9 mV, and about +3 mV in the semicircular canals. EP is highly dependent on metabolism and ionic transport.
An acoustic stimulus produces a simultaneous change in conductance at the membrane of the receptor cell. Because there is a steep gradient (150 mV), changes in membrane conductance are accompanied by rapid influx and efflux of ions which in turn produce the receptor potential. This is known as the Battery Hypothesis. The receptor potential for each hair cell causes a release of neurotransmitter at its basal pole, which elicits excitation of the afferent nerve fibres.
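The 150 mV figure can be reconstructed from the potentials involved (an added note; the hair-cell resting potential of roughly −70 mV is a standard textbook value, not stated in the excerpt):

driving force across the transduction channels ≈ EP − V_rest ≈ (+80 mV) − (−70 mV) = 150 mV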
Anatomy
Document 1:::
Earwax, also known by the medical term cerumen, is a waxy substance secreted in the ear canal of humans and other mammals. Earwax can be many colors, including brown, orange, red, yellowish, and gray. Earwax protects the skin of the human ear canal, assists in cleaning and lubrication, and provides protection against bacteria, fungi, particulate matter, and water.
Major components of earwax include cerumen, produced by a type of modified sweat gland, and sebum, an oily substance. Both components are made by glands located in the outer ear canal. The chemical composition of earwax includes long chain fatty acids, both saturated and unsaturated, alcohols, squalene, and cholesterol. Earwax also contains dead skin cells and hair.
Excess or compacted cerumen is the buildup of ear wax causing a blockage in the ear canal and it can press against the eardrum or block the outside ear canal or hearing aids, potentially causing hearing loss.
Physiology
Cerumen is produced in the cartilaginous outer third portion of the ear canal. It is a mixture of secretions from sebaceous glands and less-viscous ones from modified apocrine sweat glands. The primary components of both wet and dry earwax are shed layers of skin, with, on average, 60% of the earwax consisting of keratin, 12–20% saturated and unsaturated long-chain fatty acids, alcohols, squalene and 6–9% cholesterol.
Wet or dry
There are two genetically-determined types of earwax: the wet type, which is dominant, and the dry type, which is recessive. This distinction is caused by a single base change in the "ATP-binding cassette C11 gene". Dry-type individuals are homozygous for adenine (AA) whereas wet-type requires at least one guanine (AG or GG). Dry earwax is gray or tan and brittle, and is about 20% lipid. It has a smaller concentration of lipid and pigment granules than wet earwax. Wet earwax is light brown or dark brown and has a viscous and sticky consistency, and is about 50% lipid. Wet-type earwax is associated
Document 2:::
An otoacoustic emission (OAE) is a sound that is generated from within the inner ear. Having been predicted by Austrian astrophysicist Thomas Gold in 1948, its existence was first demonstrated experimentally by British physicist David Kemp in 1978, and otoacoustic emissions have since been shown to arise through a number of different cellular and mechanical causes within the inner ear. Studies have shown that OAEs disappear after the inner ear has been damaged, so OAEs are often used in the laboratory and the clinic as a measure of inner ear health.
Broadly speaking, there are two types of otoacoustic emissions: spontaneous otoacoustic emissions (SOAEs), which occur without external stimulation, and evoked otoacoustic emissions (EOAEs), which require an evoking stimulus.
Mechanism of occurrence
OAEs are considered to be related to the amplification function of the cochlea. In the absence of external stimulation, the activity of the cochlear amplifier increases, leading to the production of sound. Several lines of evidence suggest that, in mammals, outer hair cells are the elements that enhance cochlear sensitivity and frequency selectivity and hence act as the energy sources for amplification.
Types
Spontaneous
Spontaneous otoacoustic emissions (SOAEs) are sounds that are emitted from the ear without external stimulation and are measurable with sensitive microphones in the external ear canal. At least one SOAE can be detected in approximately 35–50% of the population. The sounds are frequency-stable between 500 Hz and 4,500 Hz and have unstable volumes between -30 dB SPL and +10 dB SPL. The majority of those with SOAEs are unaware of them, however 1–9% perceive a SOAE as an annoying tinnitus. It has been suggested that "The Hum" phenomena are SOAEs.
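To put the quoted sound-pressure levels in perspective, the short Python sketch below converts dB SPL to pressure in pascals, assuming the conventional reference pressure of 20 micropascals; the function name and the example levels are illustrative only and not part of the source text.

# Minimal sketch: convert a sound pressure level in dB SPL to an RMS pressure
# in pascals, assuming the standard reference pressure of 20 µPa.
P_REF_PA = 20e-6  # reference pressure for dB SPL, in pascals

def spl_to_pascal(level_db: float) -> float:
    return P_REF_PA * 10 ** (level_db / 20)

if __name__ == "__main__":
    # Roughly the SOAE range mentioned above: -30 dB SPL to +10 dB SPL.
    for level in (-30, 0, 10):
        print(f"{level:+d} dB SPL ~ {spl_to_pascal(level):.2e} Pa")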
Evoked
Evoked otoacoustic emissions are currently evoked using three different methodologies.
Stimulus-frequency OAEs (SFOAEs) are measured during the application of a pure-tone stimulus and are detected by the vec
Document 3:::
Tullio phenomenon, or sound-induced vertigo, dizziness, nausea or eye movement (nystagmus), was first described in 1929 by the Italian biologist Prof. Pietro Tullio (1881–1941). During his experiments on pigeons, Tullio discovered that by drilling tiny holes in the semicircular canals of his subjects, he could subsequently cause them balance problems when they were exposed to sound.
The cause is usually a fistula in the middle or inner ear, allowing abnormal sound-synchronized pressure changes in the balance organs. Such an opening may be caused by a barotrauma (e.g. incurred when diving or flying), or may be a side effect of fenestration surgery, syphilis or Lyme disease.
Patients with this disorder may also experience vertigo, imbalance and eye movement set off by changes in pressure, e.g. when blowing the nose, swallowing, or lifting heavy objects.
Tullio phenomenon is also one of the common symptoms of superior canal dehiscence syndrome (SCDS), first diagnosed in 1998 by Dr. Lloyd B. Minor, Johns Hopkins University, Baltimore, United States.
Document 4:::
A middle ear implant is a hearing device that is surgically implanted into the middle ear. They help people with conductive, sensorineural or mixed hearing loss to hear.
Middle ear implants work by improving the conduction of sound vibrations from the middle ear to the inner ear. There are two types of middle ear devices: active and passive. Active middle ear implants (AMEI) consist of an external audio processor and an internal implant, which actively vibrates the structures of the middle ear. Passive middle ear implants (PMEIs) are sometimes known as ossicular replacement prostheses, TORPs or PORPs. They replace damaged or missing parts of the middle ear, creating a bridge between the outer ear and the inner ear, so that sound vibrations can be conducted through the middle ear and on to the cochlea. Unlike AMEIs, PMEIs contain no electronics and are not powered by an external source.
PMEIs are the usual first-line surgical treatment for conductive hearing loss, due to their lack of external components and cost-effectiveness. However, each patient is assessed individually as to whether an AMEI or PMEI would bring more benefit. This is especially true if the patient has already had several surgeries with PMEIs.
Active middle ear implant
Parts
An active middle ear implant (AMEI) has two parts: an internal implant and an external audio processor. The microphone of the audio processor picks up sounds from the environment. The processor then converts these acoustic signals into digital signals and sends them to the implant through the skin. The implant sends the signals to the Floating Mass Transducer (FMT): a small vibratory part that is surgically fixed either on one of the three ossicles or against the round window of the cochlea. The FMT vibrates and sends sound vibrations to the cochlea. The cochlea converts these vibrations into nerve signals and sends them to the brain, where they are interpreted as sound.
Indications
AMEIs are intended for patients wit
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What hits the eardrum and causes it to vibrate?
A. decibels
B. cilia
C. sound waves
D. microwaves
Answer:
|
|
sciq-5050
|
multiple_choice
|
How do prokaryotic organisms reproduce asexually?
|
[
"binary fission",
"kinetic fission",
"residual fission",
"mitosis"
] |
A
|
Relevant Documents:
Document 0:::
Sexual reproduction is a type of reproduction that involves a complex life cycle in which a gamete (haploid reproductive cells, such as a sperm or egg cell) with a single set of chromosomes combines with another gamete to produce a zygote that develops into an organism composed of cells with two sets of chromosomes (diploid). This is typical in animals, though the number of chromosome sets and how that number changes in sexual reproduction varies, especially among plants, fungi, and other eukaryotes.
Sexual reproduction is the most common life cycle in multicellular eukaryotes, such as animals, fungi and plants. Sexual reproduction also occurs in some unicellular eukaryotes. Sexual reproduction does not occur in prokaryotes, unicellular organisms without cell nuclei, such as bacteria and archaea. However, some processes in bacteria, including bacterial conjugation, transformation and transduction, may be considered analogous to sexual reproduction in that they incorporate new genetic information. Some proteins and other features that are key for sexual reproduction may have arisen in bacteria, but sexual reproduction is believed to have developed in an ancient eukaryotic ancestor.
In eukaryotes, diploid precursor cells divide to produce haploid cells in a process called meiosis. In meiosis, DNA is replicated to produce a total of four copies of each chromosome. This is followed by two cell divisions to generate haploid gametes. After the DNA is replicated in meiosis, the homologous chromosomes pair up so that their DNA sequences are aligned with each other. During this period before cell divisions, genetic information is exchanged between homologous chromosomes in genetic recombination. Homologous chromosomes contain highly similar but not identical information, and by exchanging similar but not identical regions, genetic recombination increases genetic diversity among future generations.
During sexual reproduction, two haploid gametes combine into one diploid ce
Document 1:::
Apicomplexans, a group of intracellular parasites, have life cycle stages that allow them to survive the wide variety of environments they are exposed to during their complex life cycle. Each stage in the life cycle of an apicomplexan organism is typified by a cellular variety with a distinct morphology and biochemistry.
Not all apicomplexa develop all the following cellular varieties and division methods. This presentation is intended as an outline of a hypothetical generalised apicomplexan organism.
Methods of asexual replication
Apicomplexans (sporozoans) replicate via several forms of multiple fission (also known as schizogony). These include gametogony, sporogony and merogony, although the latter is sometimes referred to as schizogony, despite its general meaning.
Merogony is an asexual reproductive process of apicomplexa. After infecting a host cell, a trophozoite (see glossary below) increases in size while repeatedly replicating its nucleus and other organelles. During this process, the organism is known as a meront or schizont. Cytokinesis next subdivides the multinucleated schizont into numerous identical daughter cells called merozoites (see glossary below), which are released into the blood when the host cell ruptures. Organisms whose life cycles rely on this process include Theileria, Babesia, Plasmodium, and Toxoplasma gondii.
Sporogony is a type of sexual and asexual reproduction. It involves karyogamy, the formation of a zygote, which is followed by meiosis and multiple fission. This results in the production of sporozoites.
Other forms of replication include endodyogeny and endopolygeny.
Endodyogeny is a process of asexual reproduction, favoured by parasites such as Toxoplasma gondii. It involves an unusual process in which two daughter cells are produced inside a mother cell, which is then consumed by the offspring prior to their separation.
Endopolygeny is the division into several organisms at once by internal budding.
Glossary of cell types
Infectious stages
A sporozoite (ancient Greek sporos, seed + zoon, animal) is th
Document 2:::
Autogamy, or self-fertilization, refers to the fusion of two gametes that come from one individual. Autogamy is predominantly observed in the form of self-pollination, a reproductive mechanism employed by many flowering plants. However, species of protists have also been observed using autogamy as a means of reproduction. Flowering plants engage in autogamy regularly, while the protists that engage in autogamy only do so in stressful environments.
Occurrence
Protists
Paramecium aurelia
Paramecium aurelia is the most commonly studied protozoan for autogamy. Similar to other unicellular organisms, Paramecium aurelia typically reproduce asexually via binary fission or sexually via cross-fertilization. However, studies have shown that when put under nutritional stress, Paramecium aurelia will undergo meiosis and subsequent fusion of gametic-like nuclei. This process, defined as hemixis, a chromosomal rearrangement process, takes place in a number of steps. First, the two micronuclei of P. aurelia enlarge and divide two times to form eight nuclei. Some of these daughter nuclei will continue to divide to create potential future gametic nuclei. Of these potential gametic nuclei, one will divide two more times. Of the four daughter nuclei arising from this step, two of them become anlagen, or cells that will form part of the new organism. The other two daughter nuclei become the gametic micronuclei that will undergo autogamous self-fertilization. These nuclear divisions are observed mainly when the P. aurelia is put under nutritional stress. Research shows that P. aurelia undergo autogamy synchronously with other individuals of the same species.
Clonal aging and rejuvenation
In Paramecium tetraurelia, vitality declines over the course of successive asexual cell divisions by binary fission. Clonal aging is associated with a dramatic increase in DNA damage. When paramecia that have experienced clonal aging undergo meiosis, either during conjugation or automixis, the old
Document 3:::
Fungi are a diverse group of organisms that employ a huge variety of reproductive strategies, ranging from fully asexual to almost exclusively sexual species. Most species can reproduce both sexually and asexually, alternating between haploid and diploid forms. This contrasts with many eukaryotes such as mammals, where the adults are always diploid and produce haploid gametes which combine to form the next generation. In fungi, both haploid and diploid forms can reproduce – haploid individuals can undergo asexual reproduction while diploid forms can produce gametes that combine to give rise to the next generation.
Mating in fungi is a complex process governed by mating types. Research on fungal mating has focused on several model species with different behaviour. Not all fungi reproduce sexually and many that do are isogamous; thus, for many members of the fungal kingdom, the terms "male" and "female" do not apply. Homothallic species are able to mate with themselves, while in heterothallic species only isolates of opposite mating types can mate.
Mating between isogamous fungi may consist only of a transfer of a nucleus from one cell to another. Vegetative incompatibility within species often prevents a fungal isolate from mating with another isolate. Isolates of the same incompatibility group do not mate or mating does not lead to successful offspring. High variation has been reported including same-chemotype mating, sporophyte to gametophyte mating and biparental transfer of mitochondria.
Mating in Zygomycota
A zygomycete hypha grows towards a compatible mate and they both form a bridge, called the progametangia, by joining at the hyphal tips via plasmogamy. A pair of septa forms around the merged tips, enclosing nuclei from both isolates. A second pair of septa forms two adjacent cells, one on each side. These adjacent cells, called suspensors, provide structural support. The central cell, called the zygosporangium, is destined to become a spore. The zygosporang
Document 4:::
Mating types are the microorganism equivalent to sexes in multicellular lifeforms and are thought to be the ancestor to distinct sexes. They also occur in macro-organisms such as fungi.
Definition
Mating types are the microorganism equivalent to sex in higher organisms and occur in isogamous and anisogamous species. Depending on the group, different mating types are often referred to by numbers, letters, or simply "+" and "−" instead of "male" and "female", which refer to "sexes" or differences in size between gametes. Syngamy can only take place between gametes carrying different mating types.
Occurrence
Reproduction by mating types is especially prevalent in fungi. Filamentous ascomycetes usually have two mating types referred to as "MAT1-1" and "MAT1-2", following the yeast mating-type locus (MAT). Under standard nomenclature, MAT1-1 (which may informally be called MAT1) encodes for a regulatory protein with an alpha box motif, while MAT1-2 (informally called MAT2) encodes for a protein with a high-mobility group (HMG) DNA-binding motif, as in the yeast mating type MATα1.
Mating type genes in ascomycetes are called idiomorphs rather than alleles due to the uncertainty of the origin by common descent. The proteins they encode are transcription factors which regulate both the early and late stages of the sexual cycle. Heterothallic ascomycetes produce gametes, which present a single Mat idiomorph, and syngamy will only be possible between gametes carrying complementary mating types. On the other hand, homothallic ascomycetes produce gametes that can fuse with every other gamete in the population (including its own mitotic descendants) most often because each haploid contains the two alternate forms of the Mat locus in its genome.
Basidiomycetes can have thousands of different mating types.
In the ascomycete Neurospora crassa matings are restricted to intera
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How do prokaryotic organisms reproduce asexually?
A. binary fission
B. kinetic fission
C. residual fission
D. mitosis
Answer:
|
|
sciq-236
|
multiple_choice
|
What do the letters in our blood types represent?
|
[
"genomes",
"proteins",
"alleles",
"iron levels"
] |
C
|
Relevant Documents:
Document 0:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered the exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required the exam score as part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test.
Scores were scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores were 760 (corresponding to the 99th percentile) and 320 (1st percentile) respectively. The mean score for all test takers from July 2009 to July 2012 was 526, with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 1:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover)
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis)
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson)
See also
Glossary of biology
A.P. Bio (TV show)
Document 2:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind: neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases, which has aided the overall understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
Document 3:::
The Mathematics Subject Classification (MSC) is an alphanumerical classification scheme that has collaboratively been produced by staff of, and based on the coverage of, the two major mathematical reviewing databases, Mathematical Reviews and Zentralblatt MATH. The MSC is used by many mathematics journals, which ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The current version is MSC2020.
Structure
The MSC is a hierarchical scheme, with three levels of structure. A classification can be two, three or five characters long, depending on how many levels of the classification scheme are used.
The first level is represented by a two-digit number, the second by a letter, and the third by another two-digit number. For example:
53 is the classification for differential geometry
53A is the classification for classical differential geometry
53A45 is the classification for vector and tensor analysis
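To make the three-level structure concrete, here is a small illustrative Python sketch that splits an MSC code such as "53A45" into its levels; the regular expression and field names are invented for this example and are not part of any official tooling.

import re

# Illustrative sketch: split an MSC code into its levels, e.g. "53" (discipline),
# "53A" (discipline + area letter), "53A45" (+ two-digit topic code).
MSC_PATTERN = re.compile(r"^(\d{2})(?:([A-Z])(\d{2})?)?$")

def parse_msc(code: str) -> dict:
    match = MSC_PATTERN.match(code.strip())
    if not match:
        raise ValueError(f"not a well-formed MSC code: {code!r}")
    discipline, area, topic = match.groups()
    return {"discipline": discipline, "area": area, "topic": topic}

print(parse_msc("53"))     # {'discipline': '53', 'area': None, 'topic': None}
print(parse_msc("53A"))    # {'discipline': '53', 'area': 'A', 'topic': None}
print(parse_msc("53A45"))  # {'discipline': '53', 'area': 'A', 'topic': '45'}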
First level
At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. In addition to the typical areas of mathematical research, there are top-level categories for "History and Biography", "Mathematics Education", and for the overlap with different sciences. Physics (i.e. mathematical physics) is particularly well represented in the classification scheme with a number of different categories including:
Fluid mechanics
Quantum mechanics
Geophysics
Optics and electromagnetic theory
All valid MSC classification codes must have at least the first-level identifier.
Second level
The second-level codes are a single letter from the Latin alphabet. These represent specific areas covered by the first-level discipline. The second-level codes vary from discipline to discipline.
For example, for differential geometry, the top-level code is 53, and the second-level codes are:
A for classical differential geometry
B for local differential geometry
C for glo
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do the letters in our blood types represent?
A. genomes
B. proteins
C. alleles
D. iron levels
Answer:
|
|
sciq-235
|
multiple_choice
|
Something that has all of the characteristics of life is considered to be what?
|
[
"ecosystem",
"alive",
"molecule",
"organism"
] |
B
|
Relevant Documents:
Document 0:::
Biology is the scientific study of life. It is a natural science with a broad scope but has several unifying themes that tie it together as a single, coherent field. For instance, all organisms are made up of cells that process hereditary information encoded in genes, which can be transmitted to future generations. Another major theme is evolution, which explains the unity and diversity of life. Energy processing is also important to life as it allows organisms to move, grow, and reproduce. Finally, all organisms are able to regulate their own internal environments.
Biologists are able to study life at multiple levels of organization, from the molecular biology of a cell to the anatomy and physiology of plants and animals, and evolution of populations. Hence, there are multiple subdisciplines within biology, each defined by the nature of their research questions and the tools that they use. Like other scientists, biologists use the scientific method to make observations, pose questions, generate hypotheses, perform experiments, and form conclusions about the world around them.
Life on Earth, which emerged more than 3.7 billion years ago, is immensely diverse. Biologists have sought to study and classify the various forms of life, from prokaryotic organisms such as archaea and bacteria to eukaryotic organisms such as protists, fungi, plants, and animals. These various organisms contribute to the biodiversity of an ecosystem, where they play specialized roles in the cycling of nutrients and energy through their biophysical environment.
History
The earliest of roots of science, which included medicine, can be traced to ancient Egypt and Mesopotamia in around 3000 to 1200 BCE. Their contributions shaped ancient Greek natural philosophy. Ancient Greek philosophers such as Aristotle (384–322 BCE) contributed extensively to the development of biological knowledge. He explored biological causation and the diversity of life. His successor, Theophrastus, began the scienti
Document 1:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind: neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases, which has aided the overall understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
Document 2:::
The Seven Pillars of Life are the essential principles of life described by Daniel E. Koshland in 2002 in order to create a universal definition of life. One stated goal of this universal definition is to aid in understanding and identifying artificial and extraterrestrial life. The seven pillars are Program, Improvisation, Compartmentalization, Energy, Regeneration, Adaptability, and Seclusion. These can be abbreviated as PICERAS.
The Seven Pillars
Program
Koshland defines "Program" as an "organized plan that describes both the ingredients themselves and the kinetics of the interactions among ingredients as the living system persists through time." In natural life as it is known on Earth, the program operates through the mechanisms of nucleic acids and amino acids, but the concept of program can apply to other imagined or undiscovered mechanisms.
Improvisation
"Improvisation" refers to the living system's ability to change its program in response to the larger environment in which it exists. An example of improvisation on earth is natural selection.
Compartmentalization
"Compartmentalization" refers to the separation of spaces in the living system that allow for separate environments for necessary chemical processes. Compartmentalization is necessary to protect the concentration of the ingredients for a reaction from outside environments.
Energy
Because living systems involve net movement in terms of chemical movement or body movement, and lose energy in those movements through entropy, energy is required for a living system to exist. The main source of energy on Earth is the sun, but other sources of energy exist for life on Earth, such as hydrogen gas or methane, used in chemosynthesis.
Regeneration
"Regeneration" in a living system refers to the general compensation for losses and degradation in the various components and processes in the system. This covers the thermodynamic loss in chemical reactions, the wear and tear of larger parts, and the large
Document 3:::
A biosignature (sometimes called chemical fossil or molecular fossil) is any substance – such as an element, isotope, or molecule – or phenomenon that provides scientific evidence of past or present life. Measurable attributes of life include its complex physical or chemical structures and its use of free energy and the production of biomass and wastes. A biosignature can provide evidence for living organisms outside the Earth and can be directly or indirectly detected by searching for their unique byproducts.
Types
In general, biosignatures can be grouped into ten broad categories:
Isotope patterns: Isotopic evidence or patterns that require biological processes.
Chemistry: Chemical features that require biological activity.
Organic matter: Organics formed by biological processes.
Minerals: Minerals or biomineral-phases whose composition and/or morphology indicate biological activity (e.g., biomagnetite).
Microscopic structures and textures: Biologically formed cements, microtextures, microfossils, and films.
Macroscopic physical structures and textures: Structures that indicate microbial ecosystems, biofilms (e.g., stromatolites), or fossils of larger organisms.
Temporal variability: Variations in time of atmospheric gases, reflectivity, or macroscopic appearance that indicates life's presence.
Surface reflectance features: Large-scale reflectance features due to biological pigments could be detected remotely.
Atmospheric gases: Gases formed by metabolic and/or aqueous processes, which may be present on a planet-wide scale.
Technosignatures: Signatures that indicate a technologically advanced civilization.
Viability
Determining whether a potential biosignature is worth investigating is a fundamentally complicated process. Scientists must consider any and every possible alternate explanation before concluding that something is a true biosignature. This includes investigating the minute details that make other planets unique and understanding when there is a deviat
Document 4:::
Microbial ecology (or environmental microbiology) is the ecology of microorganisms: their relationship with one another and with their environment. It concerns the three major domains of life—Eukaryota, Archaea, and Bacteria—as well as viruses.
Microorganisms, by their omnipresence, impact the entire biosphere. Microbial life plays a primary role in regulating biogeochemical systems in virtually all of our planet's environments, including some of the most extreme, from frozen environments and acidic lakes, to hydrothermal vents at the bottom of deepest oceans, and some of the most familiar, such as the human small intestine, nose, and mouth. As a consequence of the quantitative magnitude of microbial life (the estimated number of microbial cells is some eight orders of magnitude greater than the number of stars in the observable universe), microbes, by virtue of their biomass alone, constitute a significant carbon sink. Aside from carbon fixation, microorganisms' key collective metabolic processes (including nitrogen fixation, methane metabolism, and sulfur metabolism) control global biogeochemical cycling. The immensity of microorganisms' production is such that, even in the total absence of eukaryotic life, these processes would likely continue unchanged.
History
While microbes have been studied since the seventeenth century, this research was from a primarily physiological perspective rather than an ecological one. For instance, Louis Pasteur and his disciples were interested in the problem of microbial distribution both on land and in the ocean. Martinus Beijerinck invented the enrichment culture, a fundamental method of studying microbes from the environment. He is often incorrectly credited with framing the microbial biogeographic idea that "everything is everywhere, but the environment selects", which was stated by Lourens Baas Becking. Sergei Winogradsky was one of the first researchers to attempt to understand microorganisms outside of the medical context—making him among the
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Something that has all of the characteristics of life is considered to be what?
A. ecosystem
B. alive
C. molecule
D. organism
Answer:
|
|
sciq-3379
|
multiple_choice
|
What is given off from plants and taken in by animals?
|
[
"methane",
"oxygen",
"nitrogen",
"sulfur"
] |
B
|
Relevant Documents:
Document 0:::
A nutrient is a substance used by an organism to survive, grow, and reproduce. The requirement for dietary nutrient intake applies to animals, plants, fungi, and protists. Nutrients can be incorporated into cells for metabolic purposes or excreted by cells to create non-cellular structures, such as hair, scales, feathers, or exoskeletons. Some nutrients can be metabolically converted to smaller molecules in the process of releasing energy, such as for carbohydrates, lipids, proteins, and fermentation products (ethanol or vinegar), leading to end-products of water and carbon dioxide. All organisms require water. Essential nutrients for animals are the energy sources, some of the amino acids that are combined to create proteins, a subset of fatty acids, vitamins and certain minerals. Plants require more diverse minerals absorbed through roots, plus carbon dioxide and oxygen absorbed through leaves. Fungi live on dead or living organic matter and meet nutrient needs from their host.
Different types of organisms have different essential nutrients. Ascorbic acid (vitamin C) is essential, meaning it must be consumed in sufficient amounts, to humans and some other animal species, but some animals and plants are able to synthesize it. Nutrients may be organic or inorganic: organic compounds include most compounds containing carbon, while all other chemicals are inorganic. Inorganic nutrients include nutrients such as iron, selenium, and zinc, while organic nutrients include, among many others, energy-providing compounds and vitamins.
A classification used primarily to describe nutrient needs of animals divides nutrients into macronutrients and micronutrients. Consumed in relatively large amounts (grams or ounces), macronutrients (carbohydrates, fats, proteins, water) are primarily used to generate energy or to incorporate into tissues for growth and repair. Micronutrients are needed in smaller amounts (milligrams or micrograms); they have subtle biochemical and physiologi
Document 1:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
Document 2:::
Microbiology of decomposition is the study of all microorganisms involved in decomposition, the chemical and physical processes during which organic matter is broken down and reduced to its original elements.
Decomposition microbiology can be divided into two fields of interest, namely the decomposition of plant materials and the decomposition of cadavers and carcasses.
The decomposition of plant materials is commonly studied in order to understand the cycling of carbon within a given environment and to understand the subsequent impacts on soil quality. Plant material decomposition is also often referred to as composting. The decomposition of cadavers and carcasses has become an important field of study within forensic taphonomy.
Decomposition microbiology of plant materials
The breakdown of vegetation is highly dependent on oxygen and moisture levels. During decomposition, microorganisms require oxygen for their respiration. If anaerobic conditions dominate the decomposition environment, microbial activity will be slow and thus decomposition will be slow. Appropriate moisture levels are required for microorganisms to proliferate and to actively decompose organic matter. In arid environments, bacteria and fungi dry out and are unable to take part in decomposition. In wet environments, anaerobic conditions will develop and decomposition can also be considerably slowed down. Decomposing microorganisms also require the appropriate plant substrates in order to achieve good levels of decomposition. This usually translates to having appropriate carbon to nitrogen ratios (C:N). The ideal composting carbon-to-nitrogen ratio is thought to be approximately 30:1. As in any microbial process, the decomposition of plant litter by microorganisms will also be dependent on temperature. For example, leaves on the ground will not undergo decomposition during the winter months where snow cover occurs as temperatures are too low to sustain microbial activities.
Decomposition mi
Document 3:::
The soil food web is the community of organisms living all or part of their lives in the soil. It describes a complex living system in the soil and how it interacts with the environment, plants, and animals.
Food webs describe the transfer of energy between species in an ecosystem. While a food chain examines one, linear, energy pathway through an ecosystem, a food web is more complex and illustrates all of the potential pathways. Much of this transferred energy comes from the sun. Plants use the sun’s energy to convert inorganic compounds into energy-rich, organic compounds, turning carbon dioxide and minerals into plant material by photosynthesis. Plant flowers exude energy-rich nectar above ground and plant roots exude acids, sugars, and ectoenzymes into the rhizosphere, adjusting the pH and feeding the food web underground.
Plants are called autotrophs because they make their own energy; they are also called producers because they produce energy available for other organisms to eat. Heterotrophs are consumers that cannot make their own food. In order to obtain energy they eat plants or other heterotrophs.
Above ground food webs
In above ground food webs, energy moves from producers (plants) to primary consumers (herbivores) and then to secondary consumers (predators). The phrase, trophic level, refers to the different levels or steps in the energy pathway. In other words, the producers, consumers, and decomposers are the main trophic levels. This chain of energy transferring from one species to another can continue several more times, but eventually ends. At the end of the food chain, decomposers such as bacteria and fungi break down dead plant and animal material into simple nutrients.
Methodology
The nature of soil makes direct observation of food webs difficult. Since soil organisms range in size from less than 0.1 mm (nematodes) to greater than 2 mm (earthworms) there are many different ways to extract them. Soil samples are often taken using a metal
Document 4:::
Secondary metabolites, also called specialised metabolites, toxins, secondary products, or natural products, are organic compounds produced by any lifeform, e.g. bacteria, fungi, animals, or plants, which are not directly involved in the normal growth, development, or reproduction of the organism. Instead, they generally mediate ecological interactions, which may produce a selective advantage for the organism by increasing its survivability or fecundity. Specific secondary metabolites are often restricted to a narrow set of species within a phylogenetic group. Secondary metabolites often play an important role in plant defense against herbivory and other interspecies defenses. Humans use secondary metabolites as medicines, flavourings, pigments, and recreational drugs.
The term secondary metabolite was first coined by Albrecht Kossel, the 1910 Nobel Prize laureate in physiology or medicine. Thirty years later, the Polish botanist Friedrich Czapek described secondary metabolites as end products of nitrogen metabolism.
Secondary metabolites commonly mediate antagonistic interactions, such as competition and predation, as well as mutualistic ones such as pollination and resource mutualisms. Usually, secondary metabolites are confined to a specific lineage or even species, though there is considerable evidence that horizontal transfer across species or genera of entire pathways plays an important role in bacterial (and, likely, fungal) evolution. Research also shows that secondary metabolism can affect different species in varying ways. In the same forest, four separate species of arboreal marsupial folivores reacted differently to a secondary metabolite in eucalypts. This shows that differing types of secondary metabolites can be the split between two herbivore ecological niches. Additionally, certain species evolve to resist secondary metabolites and even use them for their own benefit. For example, monarch butterflies have evolved to be able to eat milkweed (Asclepias)
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is given off from plants and taken in by animals?
A. methane
B. oxygen
C. nitrogen
D. sulfur
Answer:
|
|
sciq-10595
|
multiple_choice
|
What are formed by atoms gaining electrons?
|
[
"ions",
"cations",
"oxides",
"anions"
] |
D
|
Relevant Documents:
Document 0:::
An ion () is an atom or molecule with a net electrical charge. The charge of an electron is considered to be negative by convention and this charge is equal and opposite to the charge of a proton, which is considered to be positive by convention. The net charge of an ion is not zero because its total number of electrons is unequal to its total number of protons.
A cation is a positively charged ion with fewer electrons than protons while an anion is a negatively charged ion with more electrons than protons. Opposite electric charges are pulled towards one another by electrostatic force, so cations and anions attract each other and readily form ionic compounds.
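As a toy illustration of the proton/electron bookkeeping described above, the following Python sketch classifies a species as a cation, anion, or neutral atom; the function name and example species are chosen for illustration only.

# Toy sketch: classify a species from its proton and electron counts,
# following the definitions above (net charge in units of the elementary charge).
def classify(protons: int, electrons: int) -> str:
    charge = protons - electrons
    if charge > 0:
        return f"cation (charge {charge:+d})"
    if charge < 0:
        return f"anion (charge {charge:+d})"
    return "neutral atom"

print(classify(11, 10))  # e.g. Na+ : cation (charge +1)
print(classify(17, 18))  # e.g. Cl- : anion (charge -1)
print(classify(8, 8))    # e.g. O   : neutral atom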
Ions consisting of only a single atom are termed atomic or monatomic ions, while two or more atoms form molecular ions or polyatomic ions. In the case of physical ionization in a fluid (gas or liquid), "ion pairs" are created by spontaneous molecule collisions, where each generated pair consists of a free electron and a positive ion. Ions are also created by chemical interactions, such as the dissolution of a salt in liquids, or by other means, such as passing a direct current through a conducting solution, dissolving an anode via ionization.
History of discovery
The word ion was coined from the Greek neuter present participle of ienai, meaning "to go". A cation is something that moves down (from Greek kato, meaning "down") and an anion is something that moves up (from Greek ano, meaning "up"). They are so called because ions move toward the electrode of opposite charge. This term was introduced (after a suggestion by the English polymath William Whewell) by English physicist and chemist Michael Faraday in 1834 for the then-unknown species that goes from one electrode to the other through an aqueous medium. Faraday did not know the nature of these species, but he knew that since metals dissolved into and entered a solution at one electrode and new metal came forth from a solution at the other electrode; that some kind of
Document 1:::
An atom is a particle that consists of a nucleus of protons and neutrons surrounded by an electromagnetically-bound cloud of electrons. The atom is the basic particle of the chemical elements, and the chemical elements are distinguished from each other by the number of protons that are in their atoms. For example, any atom that contains 11 protons is sodium, and any atom that contains 29 protons is copper. The number of neutrons defines the isotope of the element.
Atoms are extremely small, typically around 100 picometers across. A human hair is about a million carbon atoms wide. This is smaller than the shortest wavelength of visible light, which means humans cannot see atoms with conventional microscopes. Atoms are so small that accurately predicting their behavior using classical physics is not possible due to quantum effects.
More than 99.94% of an atom's mass is in the nucleus. Each proton has a positive electric charge, while each electron has a negative charge, and the neutrons, if any are present, have no electric charge. If the numbers of protons and electrons are equal, as they normally are, then the atom is electrically neutral. If an atom has more electrons than protons, then it has an overall negative charge, and is called a negative ion (or anion). Conversely, if it has more protons than electrons, it has a positive charge, and is called a positive ion (or cation).
The electrons of an atom are attracted to the protons in an atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by the nuclear force. This force is usually stronger than the electromagnetic force that repels the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force. In this case, the nucleus splits and leaves behind different elements. This is a form of nuclear decay.
Atoms can attach to one or more other atoms by chemical bonds to
Document 2:::
In chemistry and physics, the iron group refers to elements that are in some way related to iron; mostly in period (row) 4 of the periodic table. The term has different meanings in different contexts.
In chemistry, the term is largely obsolete, but it often means iron, cobalt, and nickel, also called the iron triad; or, sometimes, other elements that resemble iron in some chemical aspects.
In astrophysics and nuclear physics, the term is still quite common, and it typically means those three plus chromium and manganese—five elements that are exceptionally abundant, both on Earth and elsewhere in the universe, compared to their neighbors in the periodic table. Titanium and vanadium are also produced in Type Ia supernovae.
General chemistry
In chemistry, "iron group" used to refer to iron and the next two elements in the periodic table, namely cobalt and nickel. These three comprised the "iron triad". They are the top elements of groups 8, 9, and 10 of the periodic table; or the top row of "group VIII" in the old (pre-1990) IUPAC system, or of "group VIIIB" in the CAS system. These three metals (and the three of the platinum group, immediately below them) were set aside from the other elements because they have obvious similarities in their chemistry, but are not obviously related to any of the other groups. The iron group and its alloys exhibit ferromagnetism.
The similarities in chemistry were noted as one of Döbereiner's triads and by Adolph Strecker in 1859. Indeed, Newlands' "octaves" (1865) were harshly criticized for separating iron from cobalt and nickel. Mendeleev stressed that groups of "chemically analogous elements" could have similar atomic weights as well as atomic weights which increase by equal increments, both in his original 1869 paper and his 1889 Faraday Lecture.
Analytical chemistry
In the traditional methods of qualitative inorganic analysis, the iron group consists of those cations which
have soluble chlorides; and
are not precipitated
Document 3:::
In physics, a charge carrier is a particle or quasiparticle that is free to move, carrying an electric charge, especially the particles that carry electric charges in electrical conductors. Examples are electrons, ions and holes. The term is used most commonly in solid state physics. In a conducting medium, an electric field can exert force on these free particles, causing a net motion of the particles through the medium; this is what constitutes an electric current.
The electron and the proton are the elementary charge carriers, each carrying one elementary charge (e), of the same magnitude and opposite sign.
In conductors
In conducting media, particles serve to carry charge:
In many metals, the charge carriers are electrons. One or two of the valence electrons from each atom are able to move about freely within the crystal structure of the metal. The free electrons are referred to as conduction electrons, and the cloud of free electrons is called a Fermi gas. Many metals have electron and hole bands. In some, the majority carriers are holes.
In electrolytes, such as salt water, the charge carriers are ions, which are atoms or molecules that have gained or lost electrons so they are electrically charged. Atoms that have gained electrons so they are negatively charged are called anions; atoms that have lost electrons so they are positively charged are called cations. Cations and anions of the dissociated liquid also serve as charge carriers in melted ionic solids (see e.g. the Hall–Héroult process for an example of electrolysis of a melted ionic solid). Proton conductors are electrolytic conductors employing positive hydrogen ions as carriers.
In a plasma, an electrically charged gas which is found in electric arcs through air, neon signs, and the sun and stars, the electrons and cations of ionized gas act as charge carriers.
In a vacuum, free electrons can act as charge carriers. In the electronic component known as the vacuum tube (also called valve), the mobil
Document 4:::
Secondary electrons are electrons generated as ionization products. They are called 'secondary' because they are generated by other radiation (the primary radiation). This radiation can be in the form of ions, electrons, or photons with sufficiently high energy, i.e. exceeding the ionization potential. Photoelectrons can be considered an example of secondary electrons where the primary radiation are photons; in some discussions photoelectrons with higher energy (>50 eV) are still considered "primary" while the electrons freed by the photoelectrons are "secondary".
Applications
Secondary electrons are also the main means of viewing images in the scanning electron microscope (SEM). The range of secondary electrons depends on the energy. Plotting the inelastic mean free path as a function of energy often shows characteristics of the "universal curve" familiar to electron spectroscopists and surface analysts. This distance is on the order of a few nanometers in metals and tens of nanometers in insulators. This small distance allows such fine resolution to be achieved in the SEM.
For SiO2, for a primary electron energy of 100 eV, the secondary electron range is up to 20 nm from the point of incidence.
See also
Delta ray
Everhart-Thornley detector
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are formed by atoms gaining electrons?
A. ions
B. cations
C. oxides
D. anions
Answer:
|
|
sciq-4176
|
multiple_choice
|
What is calculated by adding together the atomic masses of the elements in the substance, each multiplied by its subscript (written or implied) in the molecular formula?
|
[
"fractional mass",
"molecular mass",
"magnetic mass",
"mass effect"
] |
B
|
Relevant Documents:
Document 0:::
The atomic mass (ma or m) is the mass of an atom. Although the SI unit of mass is the kilogram (symbol: kg), atomic mass is often expressed in the non-SI unit dalton (symbol: Da) – equivalently, unified atomic mass unit (u). 1 Da is defined as 1/12 of the mass of a free carbon-12 atom at rest in its ground state. The protons and neutrons of the nucleus account for nearly all of the total mass of atoms, with the electrons and nuclear binding energy making minor contributions. Thus, the numeric value of the atomic mass when expressed in daltons has nearly the same value as the mass number. Conversion between mass in kilograms and mass in daltons can be done using the atomic mass constant m_u.
The formula used for conversion is:
1 Da = m_u = M_u / N_A = M(12C) / (12 N_A),
where M_u is the molar mass constant, N_A is the Avogadro constant, and M(12C) is the experimentally determined molar mass of carbon-12.
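A short numeric sketch of this dalton-to-kilogram conversion follows; the CODATA value of the atomic mass constant is hard-coded here as an assumption, and the helper name is invented for illustration.

# Sketch: convert a mass from daltons to kilograms using the atomic mass
# constant m_u (CODATA 2018 value, hard-coded as an assumption).
M_U_KG = 1.66053906660e-27  # kilograms per dalton

def daltons_to_kg(mass_da: float) -> float:
    return mass_da * M_U_KG

# Example: one carbon-12 atom has a mass of exactly 12 Da.
print(daltons_to_kg(12.0))  # approximately 1.9926e-26 kg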
The relative isotopic mass (see section below) can be obtained by dividing the atomic mass ma of an isotope by the atomic mass constant mu yielding a dimensionless value. Thus, the atomic mass of a carbon-12 atom is 12 Da by definition, but the relative isotopic mass of a carbon-12 atom is simply 12. The sum of relative isotopic masses of all atoms in a molecule is the relative molecular mass.
The atomic mass of an isotope and the relative isotopic mass refers to a certain specific isotope of an element. Because substances are usually not isotopically pure, it is convenient to use the elemental atomic mass which is the average (mean) atomic mass of an element, weighted by the abundance of the isotopes. The dimensionless (standard) atomic weight is the weighted mean relative isotopic mass of a (typical naturally occurring) mixture of isotopes.
The atomic mass of atoms, ions, or atomic nuclei is slightly less than the sum of the masses of their constituent protons, neutrons, and electrons, due to binding energy mass loss (per E = mc²).
Relative isotopic mass
Relative isotopic mass (a property of a single atom) is not to be confused w
Document 1:::
The mass recorded by a mass spectrometer can refer to different physical quantities depending on the characteristics of the instrument and the manner in which the mass spectrum is displayed.
Units
The dalton (symbol: Da) is the standard unit that is used for indicating mass on an atomic or molecular scale (atomic mass). The unified atomic mass unit (symbol: u) is equivalent to the dalton. One dalton is approximately the mass of a single proton or neutron. The unified atomic mass unit has a value of about 1.66054 × 10^-27 kg. The amu without the "unified" prefix is an obsolete unit based on oxygen, which was replaced in 1961.
Molecular mass
The molecular mass (abbreviated Mr) of a substance, formerly also called molecular weight and abbreviated as MW, is the mass of one molecule of that substance, relative to the unified atomic mass unit u (equal to 1/12 the mass of one atom of 12C). Due to this relativity, the molecular mass of a substance is commonly referred to as the relative molecular mass, and abbreviated to Mr.
Average mass
The average mass of a molecule is obtained by summing the average atomic masses of the constituent elements. For example, the average mass of natural water with formula H2O is 1.00794 + 1.00794 + 15.9994 = 18.01528 Da.
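A minimal Python sketch of this bookkeeping is given below; it sums average atomic masses weighted by the subscripts in a formula. The small table of standard atomic weights is illustrative, not exhaustive.

```python
# Average molecular mass: sum of average atomic masses times their subscripts.
AVERAGE_ATOMIC_MASS = {  # in daltons (abridged standard atomic weights)
    "H": 1.008, "C": 12.011, "N": 14.007, "O": 15.999, "S": 32.06,
}

def average_molecular_mass(formula: dict[str, int]) -> float:
    """Compute the average molecular mass (Da) from a {element: subscript} mapping."""
    return sum(AVERAGE_ATOMIC_MASS[el] * n for el, n in formula.items())

print(average_molecular_mass({"H": 2, "O": 1}))          # water, ~18.015 Da
print(average_molecular_mass({"C": 9, "H": 8, "O": 4}))  # aspirin, ~180.16 Da
```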
Mass number
The mass number, also called the nucleon number, is the number of protons and neutrons in an atomic nucleus. The mass number is unique for each isotope of an element and is written either after the element name or as a superscript to the left of an element's symbol. For example, carbon-12 (12C) has 6 protons and 6 neutrons.
Nominal mass
The nominal mass for an element is the mass number of its most abundant naturally occurring stable isotope, and for an ion or molecule, the nominal mass is the sum of the nominal masses of the constituent atoms. Isotope abundances are tabulated by IUPAC: for example carbon has two stable isotopes 12C at 98.9% natural abundance and 13C at 1.1% natural abundance, thus the nominal mass of carbon i
Document 2:::
Monoisotopic mass (Mmi) is one of several types of molecular masses used in mass spectrometry. The theoretical monoisotopic mass of a molecule is computed by taking the sum of the accurate masses (including mass defect) of the most abundant naturally occurring stable isotope of each atom in the molecule. For small molecules made up of low atomic number elements the monoisotopic mass is observable as an isotopically pure peak in a mass spectrum. This differs from the nominal molecular mass, which is the sum of the mass number of the primary isotope of each atom in the molecule and is an integer. It also is different from the molar mass, which is a type of average mass. For some elements, such as carbon, oxygen, hydrogen, nitrogen, and sulfur, the Mmi coincides with the mass of the lightest stable isotope, because that isotope is also the most abundant one. However, this does not hold true for all elements. Iron's most common isotope has a mass number of 56, while the stable isotopes of iron vary in mass number from 54 to 58. Monoisotopic mass is typically expressed in daltons (Da), also called unified atomic mass units (u).
Nominal mass vs monoisotopic mass
Nominal mass
Nominal mass is a term used in high-level mass spectrometric discussions; it can be calculated using the mass number of the most abundant isotope of each atom, without regard for the mass defect. For example, the nominal masses of a molecule of nitrogen (N2) and of ethylene (C2H4) come out to the same value:
N2: (2 × 14) = 28 Da
C2H4: (2 × 12) + (4 × 1) = 28 Da
This means that when a low-resolution mass spectrometer, such as a quadrupole mass analyser or a quadrupole ion trap, is used, these two molecules cannot be distinguished after ionization; their m/z peaks overlap. If a high-resolution instrument such as an Orbitrap or an ion cyclotron resonance analyser is used, the two molecules can be distinguished.
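The distinction can be illustrated with a short Python sketch; the isotope masses below are standard tabulated values quoted to a few decimal places and should be treated as illustrative.

```python
# Nominal vs monoisotopic mass for N2 and C2H4 (ethylene).
NOMINAL = {"H": 1, "C": 12, "N": 14}                        # mass numbers
MONOISOTOPIC = {"H": 1.007825, "C": 12.0, "N": 14.003074}   # Da, most abundant isotopes

def mass(formula: dict, table: dict) -> float:
    """Sum the tabulated mass of each element times its subscript."""
    return sum(table[el] * n for el, n in formula.items())

for name, formula in [("N2", {"N": 2}), ("C2H4", {"C": 2, "H": 4})]:
    print(name, mass(formula, NOMINAL), round(mass(formula, MONOISOTOPIC), 4))
# N2    28  28.0061  -> same nominal mass as ethylene,
# C2H4  28  28.0313  -> but a different monoisotopic mass,
# so only a high-resolution instrument separates the two peaks.
```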
Monoisotopic mass
When calculating
Document 3:::
In chemistry, the molar mass () of a chemical compound is defined as the ratio between the mass and the amount of substance (measured in moles) of any sample of said compound. The molar mass is a bulk, not molecular, property of a substance. The molar mass is an average of many instances of the compound, which often vary in mass due to the presence of isotopes. Most commonly, the molar mass is computed from the standard atomic weights and is thus a terrestrial average and a function of the relative abundance of the isotopes of the constituent atoms on Earth. The molar mass is appropriate for converting between the mass of a substance and the amount of a substance for bulk quantities.
The molecular mass and formula mass are commonly used as a synonym of molar mass, particularly for molecular compounds; however, the most authoritative sources define it differently. The difference is that molecular mass is the mass of one specific particle or molecule, while the molar mass is an average over many particles or molecules.
The formula weight is a synonym of molar mass that is frequently used for non-molecular compounds, such as ionic salts.
The molar mass is an intensive property of the substance, that does not depend on the size of the sample. In the International System of Units (SI), the coherent unit of molar mass is kg/mol. However, for historical reasons, molar masses are almost always expressed in g/mol.
The mole was defined in such a way that the molar mass of a compound, in g/mol, is numerically equal to the average mass of one molecule, in daltons. It was exactly equal before the redefinition of the mole in 2019, and is now only approximately equal, but the difference is negligible for all practical purposes. Thus, for example, the average mass of a molecule of water is about 18.0153 daltons, and the molar mass of water is about 18.0153 g/mol.
For chemical elements without isolated molecules, such as carbon and metals, the molar mass is computed dividi
Document 4:::
The dalton or unified atomic mass unit (symbols: Da or u) is a non-SI unit of mass defined as 1/12 of the mass of an unbound neutral atom of carbon-12 in its nuclear and electronic ground state and at rest. The atomic mass constant, denoted mu, is defined identically, giving mu = 1 Da.
This unit is commonly used in physics and chemistry to express the mass of atomic-scale objects, such as atoms, molecules, and elementary particles, both for discrete instances and multiple types of ensemble averages. For example, an atom of helium-4 has a mass of about 4.0026 Da. This is an intrinsic property of the isotope and all helium-4 atoms have the same mass. Acetylsalicylic acid (aspirin), C9H8O4, has an average mass of about 180.16 Da. However, there are no acetylsalicylic acid molecules with this mass. The two most common masses of individual acetylsalicylic acid molecules are about 180.042 Da, having the most common isotopes, and about 181.046 Da, in which one carbon is carbon-13.
The molecular masses of proteins, nucleic acids, and other large polymers are often expressed with the units kilodalton (kDa) and megadalton (MDa). Titin, one of the largest known proteins, has a molecular mass of between 3 and 3.7 megadaltons. The DNA of chromosome 1 in the human genome has about 249 million base pairs, each with an average mass of roughly 650 Da, for a total on the order of 1.6 × 10^11 Da (about 160 GDa).
The mole is a unit of amount of substance, widely used in chemistry and physics, which was originally defined so that the mass of one mole of a substance, in grams, would be numerically equal to the average mass of one of its constituent particles, in daltons. That is, the molar mass of a chemical compound was meant to be numerically equal to its average molecular mass. For example, the average mass of one molecule of water is about 18.0153 daltons, and one mole of water is about 18.0153 grams. A protein whose molecules have an average mass of, say, 64 kDa would have a molar mass of 64 kg/mol. However, while this equality can be assumed for almost all practical purposes, it is now only approximate, because of the 2019 redefin
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is calculated by adding together the atomic masses of the elements in the substance, each multiplied by its subscript (written or implied) in the molecular formula?
A. fractional mass
B. molecular mass
C. magnetic mass
D. mass effect
Answer:
|
|
sciq-3753
|
multiple_choice
|
What two measurements are multiplied to find the area of a rectangle?
|
[
"length and width",
"depth and width",
"volume and mass",
"length and depth"
] |
A
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of
Document 2:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 3:::
There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework.
AP Physics 1 and 2
AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge.
AP Physics 1
AP Physics 1 covers Newtonian mechanics, including:
Unit 1: Kinematics
Unit 2: Dynamics
Unit 3: Circular Motion and Gravitation
Unit 4: Energy
Unit 5: Momentum
Unit 6: Simple Harmonic Motion
Unit 7: Torque and Rotational Motion
Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2.
AP Physics 2
AP Physics 2 covers the following topics:
Unit 1: Fluids
Unit 2: Thermodynamics
Unit 3: Electric Force, Field, and Potential
Unit 4: Electric Circuits
Unit 5: Magnetism and Electromagnetic Induction
Unit 6: Geometric and Physical Optics
Unit 7: Quantum, Atomic, and Nuclear Physics
AP Physics C
From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single
Document 4:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What two measurements are multiplied to find the area of a rectangle?
A. length and width
B. depth and width
C. volume and mass
D. length and depth
Answer:
|
|
sciq-3031
|
multiple_choice
|
What are the attractive forces that occur between polar molecules called?
|
[
"induced-dipole forces",
"particle - dipole forces",
"dipole-dipole forces",
"ion-dipole forces"
] |
C
|
Relavent Documents:
Document 0:::
In chemistry, polarity is a separation of electric charge leading to a molecule or its chemical groups having an electric dipole moment, with a negatively charged end and a positively charged end.
Polar molecules must contain one or more polar bonds due to a difference in electronegativity between the bonded atoms. Molecules containing polar bonds have no molecular polarity if the bond dipoles cancel each other out by symmetry.
Polar molecules interact through dipole-dipole intermolecular forces and hydrogen bonds. Polarity underlies a number of physical properties including surface tension, solubility, and melting and boiling points.
Polarity of bonds
Not all atoms attract electrons with the same force. The amount of "pull" an atom exerts on its electrons is called its electronegativity. Atoms with high electronegativities, such as fluorine, oxygen, and nitrogen, exert a greater pull on electrons than atoms with lower electronegativities, such as alkali metals and alkaline earth metals. In a bond, this leads to unequal sharing of electrons between the atoms, as electrons will be drawn closer to the atom with the higher electronegativity.
Because electrons have a negative charge, the unequal sharing of electrons within a bond leads to the formation of an electric dipole: a separation of positive and negative electric charge. Because the amount of charge separated in such dipoles is usually smaller than a fundamental charge, they are called partial charges, denoted as δ+ (delta plus) and δ− (delta minus). These symbols were introduced by Sir Christopher Ingold and Dr. Edith Hilda (Usherwood) Ingold in 1926. The bond dipole moment is calculated by multiplying the amount of charge separated and the distance between the charges.
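To illustrate the bond-dipole arithmetic, here is a minimal Python sketch; the partial charge and bond length used for HCl are rough illustrative values and are assumptions of this example, not figures from the text above.

```python
# Bond dipole moment = separated charge * separation distance.
ELEMENTARY_CHARGE = 1.602176634e-19   # C
DEBYE = 3.33564e-30                   # C*m per debye

def bond_dipole_debye(partial_charge_e: float, bond_length_m: float) -> float:
    """Dipole moment (debye) for charges +/- partial_charge_e separated by bond_length_m."""
    return partial_charge_e * ELEMENTARY_CHARGE * bond_length_m / DEBYE

# Illustrative values for HCl: ~0.18 e of charge separation over ~127 pm.
print(round(bond_dipole_debye(0.18, 127e-12), 2))  # roughly 1.1 D
```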
These dipoles within molecules can interact with dipoles in other molecules, creating dipole-dipole intermolecular forces.
Classification
Bonds can fall between one of two extremes: completely nonpolar or completely polar. A completely nonpolar
Document 1:::
London dispersion forces (LDF, also known as dispersion forces, London forces, instantaneous dipole–induced dipole forces, fluctuating induced dipole bonds or loosely as van der Waals forces) are a type of intermolecular force acting between atoms and molecules that are normally electrically symmetric; that is, the electrons are symmetrically distributed with respect to the nucleus. They are part of the van der Waals forces. The LDF is named after the German physicist Fritz London. They are the weakest intermolecular force.
Introduction
The electron distribution around an atom or molecule undergoes fluctuations in time. These fluctuations create instantaneous electric fields which are felt by other nearby atoms and molecules, which in turn adjust the spatial distribution of their own electrons. The net effect is that the fluctuations in electron positions in one atom induce a corresponding redistribution of electrons in other atoms, such that the electron motions become correlated. While the detailed theory requires a quantum-mechanical explanation (see quantum mechanical theory of dispersion forces), the effect is frequently described as the formation of instantaneous dipoles that (when separated by vacuum) attract each other. The magnitude of the London dispersion force is frequently described in terms of a single parameter called the Hamaker constant, typically symbolized A. For atoms that are located closer together than the wavelength of light, the interaction is essentially instantaneous and is described in terms of a "non-retarded" Hamaker constant. For entities that are farther apart, the finite time required for the fluctuation at one atom to be felt at a second atom ("retardation") requires use of a "retarded" Hamaker constant.
While the London dispersion force between individual atoms and molecules is quite weak and decreases quickly with separation like 1/r^6, in condensed matter (liquids and solids), the effect is cumulative over the volume of materials
Document 2:::
After the explanation of van der Waals forces by Fritz London, several scientists soon realised that his definition could be extended from the interaction of two molecules with induced dipoles to macro-scale objects by summing all of the forces between the molecules in each of the bodies involved. The theory is named after H. C. Hamaker, who derived the interaction between two spheres, a sphere and a wall, and presented a general discussion in a heavily cited 1937 paper.
The interaction of two bodies is then treated as the pairwise interaction of a set of N molecules at positions Ri (i = 1, 2, ..., N). The distance between the molecules i and j is then:
r_ij = |R_i − R_j|
The interaction energy of the system is taken to be:
E = Σ_i Σ_{j>i} w_ij(r_ij),
where w_ij(r_ij) is the interaction of molecules i and j in the absence of the influence of other molecules.
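A minimal numerical sketch of this pairwise summation is given below; the London-type 1/r^6 pair potential, the coefficient c6, and the molecule positions are assumptions of the example, not values from the source.

```python
# Pairwise (Hamaker-style) summation of an attractive 1/r^6 pair potential
# over all molecule pairs drawn from two rigid bodies.
import itertools
import math

def pair_energy(r: float, c6: float = 1.0) -> float:
    """Illustrative London-type attraction between two molecules a distance r apart."""
    return -c6 / r**6

def body_body_energy(body_a, body_b, c6: float = 1.0) -> float:
    """Sum the pair interactions between every molecule of body A and every molecule of body B."""
    total = 0.0
    for point_a, point_b in itertools.product(body_a, body_b):
        r = math.dist(point_a, point_b)
        total += pair_energy(r, c6)
    return total

# Two small "bodies": short lines of molecules separated by a gap of 2 length units.
body_a = [(0.0, 0.0, float(i)) for i in range(3)]
body_b = [(2.0, 0.0, float(i)) for i in range(3)]
print(body_body_energy(body_a, body_b))
```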
The theory is however only an approximation which assumes that the interactions can be treated independently, the theory must also be adjusted to take into account quantum perturbation theory.
Document 3:::
A contact force is any force that occurs as a result of two objects making contact with each other. Contact forces are ubiquitous and are responsible for most visible interactions between macroscopic collections of matter. Pushing a car or kicking a ball are some of the everyday examples where contact forces are at work. In the first case the force is continuously applied to the car by a person, while in the second case the force is delivered in a short impulse.
Contact forces are often decomposed into orthogonal components, one perpendicular to the surface(s) in contact called the normal force, and one parallel to the surface(s) in contact, called the friction force.
Not all forces are contact forces; for example, the weight of an object is the force between the object and the Earth, even though the two do not need to make contact. Gravitational forces, electrical forces and magnetic forces are body forces and can exist without contact occurring.
Origin of contact forces
The microscopic origin of contact forces is diverse. Normal force is directly a result of Pauli exclusion principle and not a true force per se: Everyday objects do not actually touch each other; rather, contact forces are the result of the interactions of the electrons at or near the surfaces of the objects. The atoms in the two surfaces cannot penetrate one another without a large investment of energy because there is no low energy state for which the electron wavefunctions from the two surfaces overlap; thus no microscopic force is needed to prevent this penetration. On the more macroscopic level, such surfaces can be treated as a single object, and two bodies do not penetrate each other due to the stability of matter, which is again a consequence of Pauli exclusion principle, but also of the fundamental forces of nature: Cracks in the bodies do not widen due to electromagnetic forces that create the chemical bonds between the atoms; the atoms themselves do not disintegrate because of the ele
Document 4:::
An intramolecular force (or primary forces) is any force that binds together the atoms making up a molecule or compound, not to be confused with intermolecular forces, which are the forces present between molecules. The subtle difference in the name comes from the Latin roots of English with inter meaning between or among and intra meaning inside. Chemical bonds are considered to be intramolecular forces which are often stronger than intermolecular forces present between non-bonding atoms or molecules.
Types
The classical model identifies three main types of chemical bonds — ionic, covalent, and metallic — distinguished by the degree of charge separation between participating atoms. The characteristics of the bond formed can be predicted by the properties of constituent atoms, namely electronegativity. They differ in the magnitude of their bond enthalpies, a measure of bond strength, and thus affect the physical and chemical properties of compounds in different ways. The percentage of ionic character of a bond is directly related to the difference in electronegativity of the bonded atoms.
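As a rough numerical companion to this statement, the sketch below uses Pauling's well-known empirical estimate of percent ionic character from the electronegativity difference; treat both the formula and the electronegativity values as approximations introduced for illustration, not as claims from the text above.

```python
# Pauling's empirical estimate of the percent ionic character of a bond
# from the electronegativity difference of the two bonded atoms.
import math

def percent_ionic_character(delta_chi: float) -> float:
    """Approximate ionic character (%) for an electronegativity difference delta_chi."""
    return 100.0 * (1.0 - math.exp(-0.25 * delta_chi ** 2))

# Illustrative Pauling electronegativities: H 2.20, Cl 3.16, Na 0.93.
print(round(percent_ionic_character(3.16 - 2.20), 1))  # H-Cl: modest ionic character (~20%)
print(round(percent_ionic_character(3.16 - 0.93), 1))  # Na-Cl: largely ionic (~70%)
```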
Ionic bond
An ionic bond can be approximated as complete transfer of one or more valence electrons of atoms participating in bond formation, resulting in a positive ion and a negative ion bound together by electrostatic forces. Electrons in an ionic bond tend to be mostly found around one of the two constituent atoms due to the large electronegativity difference between the two atoms, generally more than 1.9, (greater difference in electronegativity results in a stronger bond); this is often described as one atom giving electrons to the other. This type of bond is generally formed between a metal and nonmetal, such as sodium and chlorine in NaCl. Sodium would give an electron to chlorine, forming a positively charged sodium ion and a negatively charged chloride ion.
Covalent bond
In a true covalent bond, the electrons are shared evenly between the two atoms of the bond; there is little or no charge separa
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are the attractive forces that occur between polar molecules called?
A. induced-dipole forces
B. particle - dipole forces
C. dipole-dipole forces
D. ion-dipole forces
Answer:
|
|
ai2_arc-585
|
multiple_choice
|
How does a parachute sufficiently increase air resistance to allow the parachutist to land safely?
|
[
"by decreasing the force of gravity acting on the parachutist",
"by decreasing the total mass of the parachutist",
"by increasing the surrounding air pressure around the parachute",
"by increasing the total surface area of the parachute"
] |
D
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mechanics and physics, shock is a sudden acceleration caused, for example, by impact, drop, kick, earthquake, or explosion. Shock is a transient physical excitation.
Shock describes matter subject to extreme rates of force with respect to time. Shock is a vector quantity that has units of acceleration (rate of change of velocity). The unit g represents multiples of the standard acceleration of gravity and is conventionally used.
A shock pulse can be characterised by its peak acceleration, the duration, and the shape of the shock pulse (half sine, triangular, trapezoidal, etc.). The shock response spectrum is a method for further evaluating a mechanical shock.
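As a small illustration of characterising a shock pulse, the Python sketch below integrates an ideal half-sine pulse to get its velocity change from the peak acceleration and duration; the pulse shape and the example numbers are assumptions of this sketch.

```python
# Velocity change (delta-v) of an ideal half-sine shock pulse
# a(t) = a_peak * sin(pi * t / tau) for 0 <= t <= tau.
import math

G0 = 9.80665  # standard acceleration of gravity, m/s^2

def half_sine_delta_v(peak_g: float, duration_s: float) -> float:
    """Integrate the half-sine pulse: delta-v = (2/pi) * a_peak * tau."""
    return (2.0 / math.pi) * peak_g * G0 * duration_s

# Example: a 50 g peak, 11 ms half-sine pulse (illustrative laboratory-style profile).
print(round(half_sine_delta_v(50.0, 0.011), 2))  # velocity change in m/s, roughly 3.4
```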
Shock measurement
Shock measurement is of interest in several fields such as
Propagation of heel shock through a runner's body
Measure the magnitude of a shock needed to cause damage to an item: fragility.
Measure shock attenuation through athletic flooring
Measuring the effectiveness of a shock absorber
Measuring the shock absorbing ability of package cushioning
Measure the ability of an athletic helmet to protect people
Measure the effectiveness of shock mounts
Determining the ability of structures to resist seismic shock: earthquakes, etc.
Determining whether personal protective fabric attenuates or amplifies shocks
Verifying that a Naval ship and its equipment can survive explosive shocks
Shocks are usually measured by accelerometers but other transducers and high speed imaging are also used. A wide variety of laboratory instrumentation is available; stand-alone shock data loggers are also used.
Field shocks are highly variable and often have very uneven shapes. Even laboratory controlled shocks often have uneven shapes and include short duration spikes; Noise can be reduced by appropriate digital or analog filtering.
Governing test methods and specifications provide detail about the conduct of shock tests. Proper placement of measuring instruments is critical. Fragile items and packaged g
Document 2:::
Diving physics, or the physics of underwater diving, covers the basic aspects of physics which describe the effects of the underwater environment on the underwater diver and their equipment, and the effects of blending, compressing, and storing breathing gas mixtures, and supplying them for use at ambient pressure. These effects are mostly consequences of immersion in water, the hydrostatic pressure of depth and the effects of pressure and temperature on breathing gases. An understanding of the physics is useful when considering the physiological effects of diving, breathing gas planning and management, diver buoyancy control and trim, and the hazards and risks of diving.
Changes in density of breathing gas affect the ability of the diver to breathe effectively, and variations in partial pressure of breathing gas constituents have profound effects on the diver's health and ability to function underwater.
Aspects of physics with particular relevance to diving
The main laws of physics that describe the influence of the underwater diving environment on the diver and diving equipment include:
Buoyancy
Archimedes' principle (Buoyancy) - Ignoring the minor effect of surface tension, an object, wholly or partially immersed in a fluid, is buoyed up by a force equal to the weight of the fluid displaced by the object. Thus, when in water, the weight of the volume of water displaced, compared with the combined weight of the diver's body and equipment, determines whether the diver floats or sinks. Buoyancy control, and being able to maintain neutral buoyancy in particular, is an important safety skill. The diver needs to understand buoyancy to effectively and safely operate drysuits, buoyancy compensators, diving weighting systems and lifting bags.
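The sketch below applies Archimedes' principle numerically; the diver's mass, displaced volume, and the seawater density are illustrative numbers chosen for this example, not values from the source.

```python
# Net vertical force on a fully immersed diver from Archimedes' principle.
G0 = 9.80665           # standard acceleration of gravity, m/s^2
RHO_SEAWATER = 1025.0  # kg/m^3, illustrative seawater density

def net_force(mass_kg: float, volume_m3: float, rho: float = RHO_SEAWATER) -> float:
    """Positive result means the diver tends to rise; negative means they sink."""
    buoyant_force = rho * volume_m3 * G0   # weight of displaced water
    weight = mass_kg * G0
    return buoyant_force - weight

# Illustrative diver plus equipment: 95 kg displacing 0.093 m^3 of seawater.
print(round(net_force(95.0, 0.093), 1))  # slightly positive -> add weight for neutral trim
```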
Pressure
The concept of pressure as force distributed over area, and the variation of pressure with immersed depth are central to the understanding of the physiology of diving, particularly the physiology of decompression an
Document 3:::
Additional Mathematics is a qualification in mathematics, commonly taken by students in high-school (or GCSE exam takers in the United Kingdom). It features a range of problems set out in a different format and wider content to the standard Mathematics at the same level.
Additional Mathematics in Singapore
In Singapore, Additional Mathematics is an optional subject offered to pupils in secondary school—specifically those who have an aptitude in Mathematics and are in the Normal (Academic) stream or Express stream. The syllabus covered is more in-depth as compared to Elementary Mathematics, with additional topics including Algebra binomial expansion, proofs in plane geometry, differential calculus and integral calculus. Additional Mathematics is also a prerequisite for students who are intending to offer H2 Mathematics and H2 Further Mathematics at A-level (if they choose to enter a Junior College after secondary school). Students without Additional Mathematics at the 'O' level will usually be offered H1 Mathematics instead.
Examination Format
The syllabus was updated starting with the 2021 batch of candidates. There are two written papers, each comprising half of the weightage towards the subject. Each paper is 2 hours 15 minutes long and worth 90 marks. Paper 1 has 12 to 14 questions, while Paper 2 has 9 to 11 questions. Generally, Paper 2 would have a graph plotting question based on linear law.
GCSE Additional Mathematics in Northern Ireland
In Northern Ireland, Additional Mathematics was offered as a GCSE subject by the local examination board, CCEA. There were two examination papers: one which tested topics in Pure Mathematics, and one which tested topics in Mechanics and Statistics. It was discontinued in 2014 and replaced with GCSE Further Mathematics—a new qualification whose level exceeds both those offered by GCSE Mathematics, and the analogous qualifications offered in England.
Further Maths IGCSE and Additional Maths FSMQ in England
Starting from
Document 4:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How does a parachute sufficiently increase air resistance to allow the parachutist to land safely?
A. by decreasing the force of gravity acting on the parachutist
B. by decreasing the total mass of the parachutist
C. by increasing the surrounding air pressure around the parachute
D. by increasing the total surface area of the parachute
Answer:
|
|
sciq-6578
|
multiple_choice
|
In an electromagnetic wave, what do the crests and troughs represent?
|
[
"vibrating fields",
"particles fields",
"oscillating fields",
"ocean waves"
] |
C
|
Relavent Documents:
Document 0:::
A crest point on a wave is the maximum value of upward displacement within a cycle. A crest is a point on a surface wave where the displacement of the medium is at a maximum. A trough is the opposite of a crest, so the minimum or lowest point in a cycle.
When the crests and troughs of two sine waves of equal amplitude and frequency intersect or collide, while being in phase with each other, the result is called constructive interference and the magnitudes double (above and below the line). When in antiphase – 180° out of phase – the result is destructive interference: the resulting wave is the undisturbed line having zero amplitude.
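A short Python sketch of this superposition is given below; the amplitude, frequency, and sample time are arbitrary illustrative values.

```python
# Superposition of two equal sine waves: in phase (constructive)
# versus 180 degrees out of phase (destructive).
import math

def superpose(amplitude: float, t: float, freq: float, phase2: float) -> float:
    """Sum of two sine waves of equal amplitude and frequency, the second shifted by phase2."""
    w = 2.0 * math.pi * freq
    return amplitude * math.sin(w * t) + amplitude * math.sin(w * t + phase2)

A, F, T = 1.0, 1.0, 0.25   # illustrative amplitude, frequency (Hz), and sample time (s)
print(superpose(A, T, F, 0.0))       # in phase: crest meets crest, amplitude doubles (2.0)
print(superpose(A, T, F, math.pi))   # antiphase: crest meets trough, the waves cancel (~0.0)
```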
See also
Crest factor
Superposition principle
Wave
Document 1:::
A wavenumber–frequency diagram is a plot displaying the relationship between the wavenumber (spatial frequency) and the frequency (temporal frequency) of certain phenomena. Usually frequencies are placed on the vertical axis, while wavenumbers are placed on the horizontal axis.
In the atmospheric sciences, these plots are a common way to visualize atmospheric waves.
In the geosciences, especially seismic data analysis, these plots also called f–k plot, in which energy density within a given time interval is contoured on a frequency-versus-wavenumber basis. They are used to examine the direction and apparent velocity of seismic waves and in velocity filter design.
Origins
In general, the relationship between wavelength λ, frequency ν, and the phase velocity vp of a sinusoidal wave is:
vp = λ ν
Using the wavenumber (k = 2π/λ) and angular frequency (ω = 2πν) notation, the previous equation can be rewritten as
vp = ω / k
On the other hand, the group velocity is equal to the slope of the wavenumber–frequency diagram:
vg = ∂ω / ∂k
Analyzing such relationships in detail often yields information on the physical properties of the medium, such as density, composition, etc.
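A brief numerical sketch of reading phase and group velocity off a dispersion relation is given below; the deep-water gravity-wave relation ω = sqrt(g·k) is used purely as an example and is an assumption of this sketch.

```python
# Phase velocity (omega/k) and group velocity (d omega / d k) for an example
# dispersion relation: deep-water gravity waves, omega = sqrt(g * k).
import math

G0 = 9.80665  # m/s^2

def omega(k: float) -> float:
    return math.sqrt(G0 * k)

def phase_velocity(k: float) -> float:
    return omega(k) / k

def group_velocity(k: float, dk: float = 1e-6) -> float:
    """Numerical slope of the wavenumber-frequency curve."""
    return (omega(k + dk) - omega(k - dk)) / (2.0 * dk)

k = 2.0 * math.pi / 100.0   # wavenumber of a 100 m wavelength wave
print(round(phase_velocity(k), 2), round(group_velocity(k), 2))  # group speed is half the phase speed
```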
See also
Dispersion relation
Document 2:::
In fluid dynamics, the wave height of a surface wave is the difference between the elevations of a crest and a neighboring trough. Wave height is a term used by mariners, as well as in coastal, ocean and naval engineering.
At sea, the term significant wave height is used as a means to introduce a well-defined and standardized statistic to denote the characteristic height of the random waves in a sea state, including wind sea and swell. It is defined in such a way that it more or less corresponds to what a mariner observes when estimating visually the average wave height.
Definitions
Depending on context, wave height may be defined in different ways:
For a sine wave, the wave height H is twice the amplitude a (i.e., the peak-to-peak amplitude): H = 2a.
For a periodic wave, it is simply the difference between the maximum and minimum of the surface elevation η: H = max{η(x − cp·t)} − min{η(x − cp·t)}, with cp the phase speed (or propagation speed) of the wave. The sine wave is a specific case of a periodic wave.
In random waves at sea, when the surface elevations are measured with a wave buoy, the individual wave height Hm of each individual wave—with an integer label m, running from 1 to N, to denote its position in a sequence of N waves—is the difference in elevation between a wave crest and trough in that wave. For this to be possible, it is necessary to first split the measured time series of the surface elevation into individual waves. Commonly, an individual wave is denoted as the time interval between two successive downward-crossings through the average surface elevation (upward crossings might also be used). Then the individual wave height of each wave is again the difference between maximum and minimum elevation in the time interval of the wave under consideration.
Significant wave height
RMS wave height
Another wave-height statistic in common usage is the root-mean-square (or RMS) wave height Hrms, defined as: Hrms = sqrt((1/N) Σm Hm²), with Hm again denoting the individual wave heights in a certain time series.
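A compact Python sketch of these wave-height statistics follows; the list of individual wave heights is made-up illustrative data, and the significant wave height is computed here as the mean of the highest one-third of waves.

```python
# RMS wave height and significant wave height (mean of the highest third)
# from a list of individual wave heights in metres.
import math

def rms_wave_height(heights: list[float]) -> float:
    """Root-mean-square of the individual wave heights."""
    return math.sqrt(sum(h * h for h in heights) / len(heights))

def significant_wave_height(heights: list[float]) -> float:
    """H_1/3: average of the highest one-third of the individual waves."""
    top_third = sorted(heights, reverse=True)[: max(1, len(heights) // 3)]
    return sum(top_third) / len(top_third)

heights = [0.8, 1.2, 0.5, 2.1, 1.7, 0.9, 1.4, 2.6, 1.1]  # made-up sample data
print(round(rms_wave_height(heights), 2), round(significant_wave_height(heights), 2))
```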
See also
Se
Document 3:::
The mode of an electromagnetic system describes the field pattern of the propagating waves. Electromagnetic modes are analogous to the normal modes of vibration in other systems, such as mechanical systems.
Some of the classifications of electromagnetic modes include;
Free space modes
Plane waves, waves in which the electric and magnetic fields are both orthogonal to the direction of travel of the wave. These are the waves that exist in free space far from any antenna.
Modes in waveguides and transmission lines
Transverse modes, modes that have at least one of the electric field and magnetic field entirely in a transverse direction.
Transverse electromagnetic mode (TEM), as with a free space plane wave, both the electric field and magnetic field are entirely transverse.
Transverse electric (TE) modes, only the electric field is entirely transverse. Also notated as H modes to indicate there is a longitudinal magnetic component.
Transverse magnetic (TM) modes, only the magnetic field is entirely transverse. Also notated as E modes to indicate there is a longitudinal electric component.
Hybrid electromagnetic (HEM) modes, both the electric and magnetic fields have a component in the longitudinal direction. They can be analysed as a linear superposition of the corresponding TE and TM modes.
HE modes, hybrid modes in which the TE component dominates.
EH modes, hybrid modes in which the TM component dominates.
Longitudinal-section modes
Longitudinal-section electric (LSE) modes, hybrid modes in which the electric field in one of the transverse directions is zero
Longitudinal-section magnetic (LSM) modes, hybrid modes in which the magnetic field in one of the transverse directions is zero
Modes in other structures
Bloch modes, modes of Bloch waves; these occur in periodically repeating structures.
Mode names are sometimes prefixed with quasi-, meaning that the mode is not quite pure. For instance, quasi-TEM mode has a small component of longitudinal field.
Document 4:::
Longitudinal-section modes are a set of a particular kind of electromagnetic transmission modes found in some types of transmission line. They are a subset of hybrid electromagnetic modes (HEM modes). HEM modes are those modes that have both an electric field and a magnetic field component longitudinally in the direction of travel of the propagating wave. Longitudinal-section modes, additionally, have a component of either magnetic or electric field that is zero in one transverse direction. In longitudinal-section electric (LSE) modes this field component is electric. In longitudinal-section magnetic (LSM) modes the zero field component is magnetic. Hybrid modes are to be compared to transverse modes which have, at most, only one component of either electric or magnetic field in the longitudinal direction.
Derivation and notation
There is an analogy between the way transverse modes (TE and TM modes) are arrived at and the definition of longitudinal section modes (LSE and LSM modes). When determining whether a structure can support a particular TE mode, one sets the electric field in the direction (the longitudinal direction of the line) to zero and then solves Maxwell's equations for the boundary conditions set by the physical structure of the line. One can just as easily set the electric field in the direction to zero and ask what modes that gives rise to. Such modes are designated LSE{x} modes. Similarly there can be LSE{y} modes and, analogously for the magnetic field, LSM{x} and LSM{y} modes. When dealing with longitudinal-section modes, the TE and TM modes are sometimes written as LSE{z} and LSM{z} respectively to produce a consistent set of notations and to reflect the analogous way in which they are defined.
Both LSE and LSM modes are a linear superposition of the corresponding TE and TM modes (that is, the modes with the same suffix numbers). Thus, in general, the LSE and LSM modes have a longitudinal component of both electric and magnetic
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In an electromagnetic wave, what do the crests and troughs represent?
A. vibrating fields
B. particles fields
C. oscillating fields
D. ocean waves
Answer:
|
|
sciq-8723
|
multiple_choice
|
Obligate anaerobes live and grow in the absence of what?
|
[
"molecular carbon",
"molecular nitrogen",
"molecular oxygen",
"atomic oxygen"
] |
C
|
Relavent Documents:
Document 0:::
Anabaena variabilis is a species of filamentous cyanobacterium. This species of the genus Anabaena and the domain Eubacteria is capable of photosynthesis. This species is heterotrophic, meaning that it may grow without light in the presence of fructose. It also can convert atmospheric dinitrogen to ammonia via nitrogen fixation.
Anabaena variabilis is a phylogenetic cousin of the more well-known species Nostoc spirrilum. Both of these species, along with many other cyanobacteria, are known to form symbiotic relationships with plants. Other cyanobacteria are known to form symbiotic relationships with diatoms, though no such relationship has been observed with Anabaena variabilis.
Anabaena variabilis is also a model organism for studying the beginnings of multicellular life due to its filamentous characterization and cellular-differentiation capabilities.
Document 1:::
Aerotolerant anaerobes use fermentation to produce ATP. They do not use oxygen, but they can protect themselves from reactive oxygen molecules. In contrast, obligate anaerobes can be harmed by reactive oxygen molecules.
There are three categories of anaerobes. Where obligate aerobes require oxygen to grow, obligate anaerobes are damaged by oxygen, aerotolerant organisms cannot use oxygen but tolerate its presence, and facultative anaerobes use oxygen if it is present but can grow without it.
Most aerotolerant anaerobes have superoxide dismutase and (non-catalase) peroxidase but don't have catalase. More specifically, they may use a NADH oxidase/NADH peroxidase (NOX/NPR) system or a glutathione peroxidase system. An example of an aerotolerant anaerobe is Cutibacterium acnes.
Document 2:::
An aerobic organism or aerobe is an organism that can survive and grow in an oxygenated environment. The ability to exhibit aerobic respiration may yield benefits to the aerobic organism, as aerobic respiration yields more energy than anaerobic respiration. Energy production of the cell involves the synthesis of ATP by an enzyme called ATP synthase. In aerobic respiration, ATP synthase is coupled with an electron transport chain in which oxygen acts as a terminal electron acceptor. In July 2020, marine biologists reported that aerobic microorganisms (mainly), in "quasi-suspended animation", were found in organically poor sediments, up to 101.5 million years old, 250 feet below the seafloor in the South Pacific Gyre (SPG) ("the deadest spot in the ocean"), and could be the longest-living life forms ever found.
Types
Obligate aerobes need oxygen to grow. In a process known as cellular respiration, these organisms use oxygen to oxidize substrates (for example sugars and fats) and generate energy.
Facultative anaerobes use oxygen if it is available, but also have anaerobic methods of energy production.
Microaerophiles require oxygen for energy production, but are harmed by atmospheric concentrations of oxygen (21% O2).
Aerotolerant anaerobes do not use oxygen but are not harmed by it.
When an organism is able to survive in both oxygen and anaerobic environments, the use of the Pasteur effect can distinguish between facultative anaerobes and aerotolerant organisms. If the organism is using fermentation in an anaerobic environment, the addition of oxygen will cause facultative anaerobes to suspend fermentation and begin using oxygen for respiration. Aerotolerant organisms must continue fermentation in the presence of oxygen.
Facultative organisms grow in both oxygen rich media and oxygen free media.
Aerobic Respiration
Aerobic organisms use a process called aerobic respiration to create ATP from ADP and a phosphate. Glucose (a monosaccharide) is oxidized to power the
Document 3:::
In ecology, primary production is the synthesis of organic compounds from atmospheric or aqueous carbon dioxide. It principally occurs through the process of photosynthesis, which uses light as its source of energy, but it also occurs through chemosynthesis, which uses the oxidation or reduction of inorganic chemical compounds as its source of energy. Almost all life on Earth relies directly or indirectly on primary production. The organisms responsible for primary production are known as primary producers or autotrophs, and form the base of the food chain. In terrestrial ecoregions, these are mainly plants, while in aquatic ecoregions algae predominate in this role. Ecologists distinguish primary production as either net or gross, the former accounting for losses to processes such as cellular respiration, the latter not.
Overview
Primary production is the production of chemical energy in organic compounds by living organisms. The main source of this energy is sunlight, but a minute fraction of primary production is driven by lithotrophic organisms using the chemical energy of inorganic molecules. Regardless of its source, this energy is used to synthesize complex organic molecules from simpler inorganic compounds such as carbon dioxide (CO2) and water (H2O). The following two equations are simplified representations of photosynthesis (top) and (one form of) chemosynthesis (bottom):
CO2 + H2O + light → CH2O + O2
CO2 + O2 + 4 H2S → CH2O + 4 S + 3 H2O
In both cases, the end point is a polymer of reduced carbohydrate, (CH2O)n, typically molecules such as glucose or other sugars. These relatively simple molecules may be then used to further synthesise more complicated molecules, including proteins, complex carbohydrates, lipids, and nucleic acids, or be respired to perform work. Consumption of primary producers by heterotrophic organisms, such as animals, then transfers these organic molecules (and the energy stored within them) up the food web, fueling all of the Earth'
Document 4:::
An anaerobic organism or anaerobe is any organism that does not require molecular oxygen for growth. It may react negatively or even die if free oxygen is present. In contrast, an aerobic organism (aerobe) is an organism that requires an oxygenated environment. Anaerobes may be unicellular (e.g. protozoans, bacteria) or multicellular.
Most fungi are obligate aerobes, requiring oxygen to survive. However, some species, such as the Chytridiomycota that reside in the rumen of cattle, are obligate anaerobes; for these species, anaerobic respiration is used because oxygen will disrupt their metabolism or kill them. Deep waters of the ocean are a common anoxic environment.
First recorded observation
In his 14 June 1680 letter to The Royal Society, Antonie van Leeuwenhoek described an experiment he carried out by filling two identical glass tubes about halfway with crushed pepper powder, to which some clean rain water was added. Van Leeuwenhoek sealed one of the glass tubes using a flame and left the other glass tube open. Several days later, he discovered in the open glass tube 'a great many very little animalcules, of divers sort having its own particular motion.' Not expecting to see any life in the sealed glass tube, Van Leeuwenhoek saw to his surprise 'a kind of living animalcules that were round and bigger than the biggest sort that I have said were in the other water.' The conditions in the sealed tube had become quite anaerobic due to consumption of oxygen by aerobic microorganisms.
In 1913, Martinus Beijerinck repeated Van Leeuwenhoek's experiment and identified Clostridium butyricum as a prominent anaerobic bacterium in the sealed pepper infusion tube liquid. Beijerinck commented:
Classifications
For practical purposes, there are three categories of anaerobe:
Obligate anaerobes, which are harmed by the presence of oxygen. Two examples of obligate anaerobes are Clostridium botulinum and the bacteria which live near hydrothermal vents on the deep-sea ocean flo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Obligate anaerobes live and grow in the absence of what?
A. molecular carbon
B. molecular nitrogen
C. molecular oxygen
D. atomic oxygen
Answer:
|
|
sciq-3635
|
multiple_choice
|
How many millions of years ago did pangaea begin breaking apart?
|
[
"500",
"600",
"250",
"400"
] |
C
|
Relavent Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover)
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis)
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson)
See also
Glossary of biology
A.P Bio (TV Show)
Document 3:::
There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework.
AP Physics 1 and 2
AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge.
AP Physics 1
AP Physics 1 covers Newtonian mechanics, including:
Unit 1: Kinematics
Unit 2: Dynamics
Unit 3: Circular Motion and Gravitation
Unit 4: Energy
Unit 5: Momentum
Unit 6: Simple Harmonic Motion
Unit 7: Torque and Rotational Motion
Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2.
AP Physics 2
AP Physics 2 covers the following topics:
Unit 1: Fluids
Unit 2: Thermodynamics
Unit 3: Electric Force, Field, and Potential
Unit 4: Electric Circuits
Unit 5: Magnetism and Electromagnetic Induction
Unit 6: Geometric and Physical Optics
Unit 7: Quantum, Atomic, and Nuclear Physics
AP Physics C
From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single
Document 4:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How many millions of years ago did pangaea begin breaking apart?
A. 500
B. 600
C. 250
D. 400
Answer:
|
|
sciq-877
|
multiple_choice
|
When does a baby double in length and triple in weight?
|
[
"terrible twos",
"infancy",
"fetal stage",
"pre-adolescence"
] |
B
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school in order to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 3:::
Progress tests are longitudinal, feedback-oriented educational assessment tools for the evaluation of development and sustainability of cognitive knowledge during a learning process. A progress test is a written knowledge exam (usually involving multiple choice questions) that is usually administered to all students in the program at the same time and at regular intervals (usually twice to four times yearly) throughout the entire academic program. The test samples the complete knowledge domain expected of new graduates upon completion of their courses, regardless of the year level of the student. The differences between students’ knowledge levels show in the test scores; the further a student has progressed in the curriculum, the higher the scores. As a result, these scores provide a longitudinal, repeated-measures, curriculum-independent assessment of the objectives (in knowledge) of the entire programme.
History
Since its inception in the late 1970s at both Maastricht University and the University of Missouri–Kansas City independently, the progress test of applied knowledge has been increasingly used in medical and health sciences programs across the globe. Progress tests are well established in both undergraduate and postgraduate medical education, and they are used both formatively and summatively.
Use in academic programs
The progress test is currently used by national progress test consortia in the United Kingdom, Italy, The Netherlands, in Germany (including Austria), and in individual schools in Africa, Saudi Arabia, South East Asia, the Caribbean, Australia, New Zealand, Sweden, Finland, UK, and the USA. The National Board of Medical Examiners in the USA also provides progress testing in various countries. The feasibility of an international approach to progress testing has been recently acknowledged and was first demonstrated by Albano et al. in 1996, who compared test scores across German, Dutch and Italian medi
Document 4:::
Monochorionic twins are monozygotic (identical) twins that share the same placenta. If the placenta is shared by more than two twins (see multiple birth), these are monochorionic multiples. Monochorionic twins occur in 0.3% of all pregnancies. Seventy-five percent of monozygotic twin pregnancies are monochorionic; the remaining 25% are dichorionic diamniotic. If the placenta divides, this takes place before the third day after fertilization.
Amniocity and zygosity
Monochorionic twins generally have two amniotic sacs (called Monochorionic-Diamniotic "MoDi"), but sometimes, in the case of monoamniotic twins (Monochorionic-Monoamniotic "MoMo"), they also share the same amniotic sac. Monoamniotic twins occur when the split takes place after the ninth day after fertilization. Monoamniotic twins are always monozygotic (identical twins). Monochorionic-Diamniotic twins are almost always monozygotic, with a few exceptions where the blastocysts have fused.
Diagnosis
By performing an obstetric ultrasound at a gestational age of 10–14 weeks, monochorionic-diamniotic twins are discerned from dichorionic twins. The presence of a "T-sign" at the inter-twin membrane-placental junction is indicative of monochorionic-diamniotic twins (that is, the junction between the inter-twin membrane and the external rim forms a right angle), whereas dichorionic twins present with a "lambda (λ) sign" (that is, the chorion forms a wedge-shaped protrusion into the inter-twin space, creating a rather curved junction). The "lambda sign" is also called the "twin peak sign". At ultrasound at a gestational age of 16–20 weeks, the "lambda sign" is indicative of dichorionicity but its absence does not exclude it.
In contrast, the placentas of dichorionic twins may overlap, making it hard to distinguish them and thus difficult to discern mono- or dichorionicity solely from the appearance of the placentas on ultrasound.
Complications
In addition to a shared placenta, monochorionic twins
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When does a baby double in length and triple in weight?
A. terrible twos
B. infancy
C. fetal stage
D. pre-adolescence
Answer:
|
|
sciq-11164
|
multiple_choice
|
What structure is the site of all of the basic biochemical processes that keep organisms alive?
|
[
"particle",
"cell",
"Atom",
"Element"
] |
B
|
Relavent Documents:
Document 0:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, a double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries one or more chromosomes with a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 1:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind – neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases which has overall aided in the understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
Document 2:::
The following outline is provided as an overview of and topical guide to biophysics:
Biophysics – interdisciplinary science that uses the methods of physics to study biological systems.
Nature of biophysics
Biophysics is
An academic discipline – branch of knowledge that is taught and researched at the college or university level. Disciplines are defined (in part), and recognized by the academic journals in which research is published, and the learned societies and academic departments or faculties to which their practitioners belong.
A scientific field (a branch of science) – widely recognized category of specialized expertise within science, and typically embodies its own terminology and nomenclature. Such a field will usually be represented by one or more scientific journals, where peer-reviewed research is published.
A natural science – one that seeks to elucidate the rules that govern the natural world using empirical and scientific methods.
A biological science – concerned with the study of living organisms, including their structure, function, growth, evolution, distribution, and taxonomy.
A branch of physics – concerned with the study of matter and its motion through space and time, along with related concepts such as energy and force.
An interdisciplinary field – field of science that overlaps with other sciences
Scope of biophysics research
Biomolecular scale
Biomolecule
Biomolecular structure
Organismal scale
Animal locomotion
Biomechanics
Biomineralization
Motility
Environmental scale
Biophysical environment
Biophysics research overlaps with
Agrophysics
Biochemistry
Biophysical chemistry
Bioengineering
Biogeophysics
Nanotechnology
Systems biology
Branches of biophysics
Astrobiophysics – field of intersection between astrophysics and biophysics concerned with the influence of the astrophysical phenomena upon life on planet Earth or some other planet in general.
Medical biophysics – interdisciplinary field that applies me
Document 3:::
Biochemical engineering, also known as bioprocess engineering, is a field of study with roots stemming from chemical engineering and biological engineering. It mainly deals with the design, construction, and advancement of unit processes that involve biological organisms (such as fermentation) or organic molecules (often enzymes) and has various applications in areas of interest such as biofuels, food, pharmaceuticals, biotechnology, and water treatment processes. The role of a biochemical engineer is to take findings developed by biologists and chemists in a laboratory and translate that to a large-scale manufacturing process.
History
For hundreds of years, humans have made use of the chemical reactions of biological organisms in order to create goods. In the mid-1800s, Louis Pasteur was one of the first people to look into the role of these organisms when he researched fermentation. His work also contributed to the use of pasteurization, which is still used to this day. By the early 1900s, the use of microorganisms had expanded, and was used to make industrial products. Up to this point, biochemical engineering hadn't developed as a field yet. It wasn't until 1928 when Alexander Fleming discovered penicillin that the field of biochemical engineering was established. After this discovery, samples were gathered from around the world in order to continue research into the characteristics of microbes from places such as soils, gardens, forests, rivers, and streams. Today, biochemical engineers can be found working in a variety of industries, from food to pharmaceuticals. This is due to the increasing need for efficiency and production which requires knowledge of how biological systems and chemical reactions interact with each other and how they can be used to meet these needs.
Education
Biochemical engineering is not a major offered by most universities and is instead an area of interest under the chemical engineering major in most cases. The following universiti
Document 4:::
Biochemistry is the study of the chemical processes in living organisms. It deals with the structure and function of cellular components such as proteins, carbohydrates, lipids, nucleic acids and other biomolecules.
Articles related to biochemistry include:
0–9
2-amino-5-phosphonovalerate - 3' end - 5' end
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What structure is the site of all of the basic biochemical processes that keep organisms alive?
A. particle
B. cell
C. Atom
D. Element
Answer:
|
|
sciq-7036
|
multiple_choice
|
Where is the cell wall located?
|
[
"outside cell membrane",
"in cell membrane",
"in the mitochondria",
"in the chloroplast"
] |
A
|
Relavent Documents:
Document 0:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
Document 1:::
The territorial matrix is the tissue surrounding chondrocytes (cells which produce cartilage) in cartilage. Chondrocytes are inactive cartilage cells, so they don't make cartilage components. The territorial matrix is basophilic (attracts basic compounds and dyes due to its anionic/acidic nature), because there is a higher concentration of proteoglycans, so it will color darker when it's colored and viewed under a microscope. In other words, it stains metachromatically (dyes change color upon binding) due to the presence of proteoglycans (compound molecules composed of proteins and sugars).
Document 2:::
This table lists the epithelia of different organs of the human body
Human anatomy
Document 3:::
This lecture, named in memory of Keith R. Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year.
Lecturers
Source: ASCB
See also
List of biology awards
Document 4:::
A laminar organization describes the way certain tissues, such as bone membrane, skin, or brain tissues, are arranged in layers.
Types
Embryo
The earliest forms of laminar organization are shown in the diploblastic and triploblastic formation of the germ layers in the embryo. In the first week of human embryogenesis two layers of cells have formed, an external epiblast layer (the primitive ectoderm), and an internal hypoblast layer (primitive endoderm). This gives the early bilaminar disc. In the third week in the stage of gastrulation epiblast cells invaginate to form endoderm, and a third layer of cells known as mesoderm. Cells that remain in the epiblast become ectoderm. This is the trilaminar disc and the epiblast cells have given rise to the three germ layers.
Brain
In the brain a laminar organization is evident in the arrangement of the three meninges, the membranes that cover the brain and spinal cord. These membranes are the dura mater, arachnoid mater, and pia mater. The dura mater has two layers a periosteal layer near to the bone of the skull, and a meningeal layer next to the other meninges.
The cerebral cortex, the outer neural sheet covering the cerebral hemispheres, can be described by its laminar organization, due to the arrangement of cortical neurons into six distinct layers.
Eye
The eye in mammals has an extensive laminar organization. There are three main layers – the outer fibrous tunic, the middle uvea, and the inner retina. These layers have sublayers with the retina having ten ranging from the outer choroid to the inner vitreous humor and including the retinal nerve fiber layer.
Skin
The human skin has a dense laminar organization. The outer epidermis has four or five layers.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where is the cell wall located?
A. outside cell membrane
B. in cell membrane
C. in the mitochondria
D. in the chloroplast
Answer:
|
|
sciq-3037
|
multiple_choice
|
Which carbohydrate is produced by photosynthesis?
|
[
"sugar",
"protein",
"insulin",
"glucose"
] |
D
|
Relavent Documents:
Document 0:::
The evolution of photosynthesis refers to the origin and subsequent evolution of photosynthesis, the process by which light energy is used to assemble sugars from carbon dioxide and a hydrogen and electron source such as water. The process of photosynthesis was discovered by Jan Ingenhousz, a Dutch-born British physician and scientist, first publishing about it in 1779.
The first photosynthetic organisms probably evolved early in the evolutionary history of life and most likely used reducing agents such as hydrogen rather than water. There are three major metabolic pathways by which photosynthesis is carried out: C3 photosynthesis, C4 photosynthesis, and CAM photosynthesis. C3 photosynthesis is the oldest and most common form. A C3 plant uses the Calvin cycle for the initial steps that incorporate CO2 into organic material. A C4 plant prefaces the Calvin cycle with reactions that incorporate CO2 into four-carbon compounds. A CAM plant uses crassulacean acid metabolism, an adaptation for photosynthesis in arid conditions. C4 and CAM plants have special adaptations that save water.
Origin
Available evidence from geobiological studies of Archean (>2500 Ma) sedimentary rocks indicates that life existed 3500 Ma. Fossils of what are thought to be filamentous photosynthetic organisms have been dated at 3.4 billion years old, consistent with recent studies of photosynthesis. Early photosynthetic systems, such as those from green and purple sulfur and green and purple nonsulfur bacteria, are thought to have been anoxygenic, using various molecules as electron donors. Green and purple sulfur bacteria are thought to have used hydrogen and hydrogen sulfide as electron and hydrogen donors. Green nonsulfur bacteria used various amino and other organic acids. Purple nonsulfur bacteria used a variety of nonspecific organic and inorganic molecules. It is suggested that photosynthesis likely originated at low-wavelength geothermal light from acidic hydrothermal vents, Zn-tetrapyrroles w
Document 1:::
The photosynthetic efficiency is the fraction of light energy converted into chemical energy during photosynthesis in green plants and algae. Photosynthesis can be described by the simplified chemical reaction
6 H2O + 6 CO2 + energy → C6H12O6 + 6 O2
where C6H12O6 is glucose (which is subsequently transformed into other sugars, starches, cellulose, lignin, and so forth). The value of the photosynthetic efficiency is dependent on how light energy is defined – it depends on whether we count only the light that is absorbed, and on what kind of light is used (see Photosynthetically active radiation). It takes eight (or perhaps ten or more) photons to use one molecule of CO2. The Gibbs free energy for converting a mole of CO2 to glucose is 114 kcal, whereas eight moles of photons of wavelength 600 nm contains 381 kcal, giving a nominal efficiency of 30%. However, photosynthesis can occur with light up to wavelength 720 nm so long as there is also light at wavelengths below 680 nm to keep Photosystem II operating (see Chlorophyll). Using longer wavelengths means less light energy is needed for the same number of photons and therefore for the same amount of photosynthesis. For actual sunlight, where only 45% of the light is in the photosynthetically active wavelength range, the theoretical maximum efficiency of solar energy conversion is approximately 11%. In actuality, however, plants do not absorb all incoming sunlight (due to reflection, respiration requirements of photosynthesis and the need for optimal solar radiation levels) and do not convert all harvested energy into biomass, which results in a maximum overall photosynthetic efficiency of 3 to 6% of total solar radiation. If photosynthesis is inefficient, excess light energy must be dissipated to avoid damaging the photosynthetic apparatus. Energy can be dissipated as heat (non-photochemical quenching), or emitted as chlorophyll fluorescence.
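The figures quoted above can be checked with a short calculation. The sketch below takes the 8-photon requirement, the 600 nm wavelength, and the 114 kcal stored-energy value from the text, combines them with standard physical constants, and reproduces the roughly 381 kcal light input and the roughly 30% nominal efficiency:

```python
# Rough check of the ~30% nominal photosynthetic efficiency quoted above.
# Assumes 8 mol of 600 nm photons per mol of CO2 fixed and 114 kcal/mol of
# Gibbs free energy stored per mole of CO2 converted to glucose (from text).

h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
N_A = 6.022e23       # Avogadro constant, 1/mol
J_PER_KCAL = 4184.0  # joules per kilocalorie

wavelength = 600e-9          # m
photons_per_co2 = 8          # photons needed per CO2 fixed (from text)
stored_energy_kcal = 114.0   # kcal per mole CO2 -> glucose (from text)

energy_per_photon = h * c / wavelength             # J
energy_per_mole_photons = energy_per_photon * N_A  # J/mol
input_kcal = photons_per_co2 * energy_per_mole_photons / J_PER_KCAL

efficiency = stored_energy_kcal / input_kcal
print(f"light input: {input_kcal:.0f} kcal, efficiency: {efficiency:.0%}")
# -> light input: ~381 kcal, efficiency: ~30%, matching the figures in the text
```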
Typical efficiencies
Plants
Quoted values sunlight-to-biomass efficien
Document 2:::
Bioenergetics is a field in biochemistry and cell biology that concerns energy flow through living systems. This is an active area of biological research that includes the study of the transformation of energy in living organisms and the study of thousands of different cellular processes such as cellular respiration and the many other metabolic and enzymatic processes that lead to production and utilization of energy in forms such as adenosine triphosphate (ATP) molecules. That is, the goal of bioenergetics is to describe how living organisms acquire and transform energy in order to perform biological work. The study of metabolic pathways is thus essential to bioenergetics.
Overview
Bioenergetics is the part of biochemistry concerned with the energy involved in making and breaking of chemical bonds in the molecules found in biological organisms. It can also be defined as the study of energy relationships and energy transformations and transductions in living organisms. The ability to harness energy from a variety of metabolic pathways is a property of all living organisms. Growth, development, anabolism and catabolism are some of the central processes in the study of biological organisms, because the role of energy is fundamental to such biological processes. Life is dependent on energy transformations; living organisms survive because of exchange of energy between living tissues/ cells and the outside environment. Some organisms, such as autotrophs, can acquire energy from sunlight (through photosynthesis) without needing to consume nutrients and break them down. Other organisms, like heterotrophs, must intake nutrients from food to be able to sustain energy by breaking down chemical bonds in nutrients during metabolic processes such as glycolysis and the citric acid cycle. Importantly, as a direct consequence of the First Law of Thermodynamics, autotrophs and heterotrophs participate in a universal metabolic network—by eating autotrophs (plants), heterotrophs ha
Document 3:::
Interactive pathway map
An intermediate in photosynthesis
During plant photosynthesis, 2 equivalents of glycerate 3-phosphate (GP; also known as 3-phosphoglycerate) are produced by the first step of the light-independent reactions when ribulose 1,5-bisphosphate (RuBP) and carbon dioxide are catalysed by the rubisco enzyme. The GP is converted to D-glyceraldehyde 3-phosphate (G3P) using the energy in ATP and the reducing power of NADPH as part of the Calvin cycle. This returns ADP, phosphate ions Pi, and NADP+ to the light-dependent reactions of photosynthesis for their continued function.
RuBP is regenerated for the Calvin cycle to continue.
G3P is generally considered the prime end-product of photosynthesis and it can be used as an immediate food nutrient, combined and rearranged to form monosaccharide sugars, such as
Document 4:::
Primary nutritional groups are groups of organisms, divided in relation to the nutrition mode according to the sources of energy and carbon, needed for living, growth and reproduction. The sources of energy can be light or chemical compounds; the sources of carbon can be of organic or inorganic origin.
The terms aerobic respiration, anaerobic respiration and fermentation (substrate-level phosphorylation) do not refer to primary nutritional groups, but simply reflect the different use of possible electron acceptors in particular organisms, such as O2 in aerobic respiration, or nitrate (), sulfate () or fumarate in anaerobic respiration, or various metabolic intermediates in fermentation.
Primary sources of energy
Phototrophs absorb light in photoreceptors and transform it into chemical energy.
Chemotrophs release chemical energy.
The freed energy is stored as potential energy in ATP, carbohydrates, or proteins. Eventually, the energy is used for life processes such as moving, growth and reproduction.
Plants and some bacteria can alternate between phototrophy and chemotrophy, depending on the availability of light.
Primary sources of reducing equivalents
Organotrophs use organic compounds as electron/hydrogen donors.
Lithotrophs use inorganic compounds as electron/hydrogen donors.
The electrons or hydrogen atoms from reducing equivalents (electron donors) are needed by both phototrophs and chemotrophs in reduction-oxidation reactions that transfer energy in the anabolic processes of ATP synthesis (in heterotrophs) or biosynthesis (in autotrophs). The electron or hydrogen donors are taken up from the environment.
Organotrophic organisms are often also heterotrophic, using organic compounds as sources of both electrons and carbon. Similarly, lithotrophic organisms are often also autotrophic, using inorganic sources of electrons and CO2 as their inorganic carbon source.
Some lithotrophic bacteria can utilize diverse sources of electrons, depending on the avail
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which carbohydrate is produced by photosynthesis?
A. sugar
B. protein
C. insulin
D. glucose
Answer:
|
|
sciq-9352
|
multiple_choice
|
Nonvascular plants lack vascular tissue and what?
|
[
"seeds",
"cells",
"chlorophyll",
"cytoplasm"
] |
A
|
Relavent Documents:
Document 0:::
In biology, tissue is a historically derived biological organizational level between cells and a complete organ. A tissue is therefore often thought of as an assembly of similar cells and their extracellular matrix from the same embryonic origin that together carry out a specific function. Organs are then formed by the functional grouping together of multiple tissues.
Biological organisms follow this hierarchy:
Cells < Tissue < Organ < Organ System < Organism
The English word "tissue" derives from the French word "tissu", the past participle of the verb tisser, "to weave".
The study of tissues is known as histology or, in connection with disease, as histopathology. Xavier Bichat is considered as the "Father of Histology". Plant histology is studied in both plant anatomy and physiology. The classical tools for studying tissues are the paraffin block in which tissue is embedded and then sectioned, the histological stain, and the optical microscope. Developments in electron microscopy, immunofluorescence, and the use of frozen tissue-sections have enhanced the detail that can be observed in tissues. With these tools, the classical appearances of tissues can be examined in health and disease, enabling considerable refinement of medical diagnosis and prognosis.
Plant tissue
In plant anatomy, tissues are categorized broadly into three tissue systems: the epidermis, the ground tissue, and the vascular tissue.
Epidermis – Cells forming the outer surface of the leaves and of the young plant body.
Vascular tissue – The primary components of vascular tissue are the xylem and phloem. These transport fluids and nutrients internally.
Ground tissue – Ground tissue is less differentiated than other tissues. Ground tissue manufactures nutrients by photosynthesis and stores reserve nutrients.
Plant tissues can also be divided differently into two types:
Meristematic tissues
Permanent tissues.
Meristematic tissue
Meristematic tissue consists of actively dividing cell
Document 1:::
Vascular plants (), also called tracheophytes () or collectively Tracheophyta (), form a large group of land plants ( accepted known species) that have lignified tissues (the xylem) for conducting water and minerals throughout the plant. They also have a specialized non-lignified tissue (the phloem) to conduct products of photosynthesis. Vascular plants include the clubmosses, horsetails, ferns, gymnosperms (including conifers), and angiosperms (flowering plants). Scientific names for the group include Tracheophyta, Tracheobionta and Equisetopsida sensu lato. Some early land plants (the rhyniophytes) had less developed vascular tissue; the term eutracheophyte has been used for all other vascular plants, including all living ones.
Historically, vascular plants were known as "higher plants", as it was believed that they were further evolved than other plants due to being more complex organisms. However, this is an antiquated remnant of the obsolete scala naturae, and the term is generally considered to be unscientific.
Characteristics
Botanists define vascular plants by three primary characteristics:
Vascular plants have vascular tissues which distribute resources through the plant. Two kinds of vascular tissue occur in plants: xylem and phloem. Phloem and xylem are closely associated with one another and are typically located immediately adjacent to each other in the plant. The combination of one xylem and one phloem strand adjacent to each other is known as a vascular bundle. The evolution of vascular tissue in plants allowed them to evolve to larger sizes than non-vascular plants, which lack these specialized conducting tissues and are thereby restricted to relatively small sizes.
In vascular plants, the principal generation or phase is the sporophyte, which produces spores and is diploid (having two sets of chromosomes per cell). (By contrast, the principal generation phase in non-vascular plants is the gametophyte, which produces gametes and is haploid - with
Document 2:::
Vascular tissue is a complex conducting tissue, formed of more than one cell type, found in vascular plants. The primary components of vascular tissue are the xylem and phloem. These two tissues transport fluid and nutrients internally. There are also two meristems associated with vascular tissue: the vascular cambium and the cork cambium. All the vascular tissues within a particular plant together constitute the vascular tissue system of that plant.
The cells in vascular tissue are typically long and slender. Since the xylem and phloem function in the conduction of water, minerals, and nutrients throughout the plant, it is not surprising that their form should be similar to pipes. The individual cells of phloem are connected end-to-end, just as the sections of a pipe might be. As the plant grows, new vascular tissue differentiates in the growing tips of the plant. The new tissue is aligned with existing vascular tissue, maintaining its connection throughout the plant. The vascular tissue in plants is arranged in long, discrete strands called vascular bundles. These bundles include both xylem and phloem, as well as supporting and protective cells. In stems and roots, the xylem typically lies closer to the interior of the stem with phloem towards the exterior of the stem. In the stems of some Asterales dicots, there may be phloem located inwardly from the xylem as well.
Between the xylem and phloem is a meristem called the vascular cambium. This tissue divides off cells that will become additional xylem and phloem. This growth increases the girth of the plant, rather than its length. As long as the vascular cambium continues to produce new cells, the plant will continue to grow more stout. In trees and other plants that develop wood, the vascular cambium allows the expansion of vascular tissue that produces woody growth. Because this growth ruptures the epidermis of the stem, woody plants also have a cork cambium that develops among the phloem. The cork cambium g
Document 3:::
A stem is one of two main structural axes of a vascular plant, the other being the root. It supports leaves, flowers and fruits, transports water and dissolved substances between the roots and the shoots in the xylem and phloem, carries out photosynthesis, stores nutrients, and produces new living tissue. The stem can also be called halm or haulm or culms.
The stem is normally divided into nodes and internodes:
The nodes are the points of attachment for leaves and can hold one or more leaves. There are sometimes axillary buds between the stem and leaf which can grow into branches (with leaves, conifer cones, or flowers). Adventitious roots may also be produced from the nodes. Vines may produce tendrils from nodes.
The internodes distance one node from another.
The term "shoots" is often confused with "stems"; "shoots" generally refers to new fresh plant growth, including both stems and other structures like leaves or flowers.
In most plants, stems are located above the soil surface, but some plants have underground stems.
Stems have several main functions:
Support for and the elevation of leaves, flowers, and fruits. The stems keep the leaves in the light and provide a place for the plant to keep its flowers and fruits.
Transport of fluids between the roots and the shoots in the xylem and phloem.
Storage of nutrients.
Production of new living tissue. The normal lifespan of plant cells is one to three years. Stems have cells called meristems that annually generate new living tissue.
Photosynthesis.
Stems have two pipe-like tissues called xylem and phloem. The xylem tissue arises from the cell facing inside and transports water by the action of transpiration pull, capillary action, and root pressure. The phloem tissue arises from the cell facing outside and consists of sieve tubes and their companion cells. The function of phloem tissue is to distribute food from photosynthetic tissue to other tissues. The two tissues are separated by cambium, a tis
Document 4:::
In botany, epiblem is a tissue that replaces the epidermis in most roots and in stems of submerged aquatic plants. It is usually located between the epidermis and cortex in the root or stem of a plant.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Nonvascular plants lack vascular tissue and what?
A. seeds
B. cells
C. chlorophyll
D. cytoplasm
Answer:
|
|
ai2_arc-1051
|
multiple_choice
|
A simple food chain includes hawks, lizards, and insects. Which will most likely happen to the lizard and hawk populations if a pesticide is sprayed to kill the insects, and the lizard and hawk populations cannot find other food in this ecosystem?
|
[
"Both the lizard population and the hawk population will increase.",
"Both the lizard population and the hawk population will decrease.",
"The lizard population will increase, but the hawk population will decrease.",
"The lizard population will decrease, but the hawk population will increase."
] |
B
|
Relavent Documents:
Document 0:::
In nature and human societies, many phenomena have causal relationships where one phenomenon A (a cause) impacts another phenomenon B (an effect). Establishing causal relationships is the aim of many scientific studies across fields ranging from biology and physics to social sciences and economics. It is also a subject of accident analysis, and can be considered a prerequisite for effective policy making.
To describe causal relationships between phenomena, non-quantitative visual notations are common, such as arrows, e.g. in the nitrogen cycle or many chemistry and mathematics textbooks. Mathematical conventions are also used, such as plotting an independent variable on a horizontal axis and a dependent variable on a vertical axis, or the notation y = f(x) to denote that a quantity "y" is a dependent variable which is a function of an independent variable "x". Causal relationships are also described using quantitative mathematical expressions.
The following examples illustrate various types of causal relationships. These are followed by different notations used to represent causal relationships.
Examples
What follows does not necessarily assume the convention whereby x denotes an independent variable and y = f(x) denotes a function of the independent variable x. Instead, x and y denote two quantities with an a priori unknown causal relationship, which can be related by a mathematical expression.
Ecosystem example: correlation without causation
Imagine the number of days of weather below zero degrees Celsius, t, causes ice to form on a lake, i, and it causes bears to go into hibernation, h. Even though i does not cause h and vice-versa, one can write an equation relating i and h. This equation may be used to successfully calculate the number of hibernating bears h, given the surface area of the lake covered by ice. However, melting the ice in a region of the lake by pouring salt onto it will not cause bears to come out of hibernation. Nor will waking the bears by physically disturbing the
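A minimal numerical sketch of this example, using t for the number of cold days, i for ice cover and h for hibernating bears (the linear relations and coefficients are invented purely for illustration), shows that h can be predicted from i even though neither causes the other:

```python
# Toy illustration of correlation without causation: cold days (t) drive both
# ice cover (i) and the number of hibernating bears (h), so i and h track each
# other even though neither causes the other. Functional forms are invented.

def ice_cover(t):
    """Ice cover on the lake as a (made-up) function of days below zero."""
    return 2.0 * t

def hibernating_bears(t):
    """Hibernating bears as a (made-up) function of days below zero."""
    return 5.0 * t

def bears_from_ice(i):
    """Predictive relation between i and h, obtained by eliminating t."""
    return 5.0 * (i / 2.0)

for t in (10, 20, 40):
    i = ice_cover(t)
    h = hibernating_bears(t)
    assert h == bears_from_ice(i)   # prediction from i works...
    print(f"t={t:>2} cold days -> ice {i:5.1f}, bears {h:5.1f}")

# ...but the relation is not causal: intervening on i directly (melting ice
# with salt) leaves t unchanged, so h would not change.
```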
Document 1:::
The paradox of the pesticides is a paradox that states that applying pesticide to a pest may end up increasing the abundance of the pest if the pesticide upsets natural predator–prey dynamics in the ecosystem.
Lotka–Volterra equation
To describe the paradox of the pesticides mathematically, the Lotka–Volterra equation, a set of first-order, nonlinear, differential equations, which are frequently used to describe predator–prey interactions, can be modified to account for the additions of pesticides into the predator–prey interactions.
Without pesticides
The variables represent the following:
The following two equations are the original Lotka–Volterra equation, which describe the rate of change of each respective population as a function of the population of the other organism:
By setting each equation to zero and thus assuming a stable population, a graph of two lines (isoclines) can be made to find the equilibrium point, the point at which both interacting populations are stable.
These are the isoclines for the two above equations:
Accounting for pesticides
Now, to account for the difference in the population dynamics of the predator and prey that occurs with the addition of pesticides, variable q is added to represent the per capita rate at which both species are killed by the pesticide. The original Lotka–Volterra equations change to be as follows:
Solving the isoclines as was done above, the following equations represent the two lines with the intersection that represents the new equilibrium point. These are the new isoclines for the populations:
As one can see from the new isoclines, the new equilibrium will have a higher H value and a lower P value so the number of prey will increase while the number of predator decreases. Thus, prey, which is normally the targeted by the pesticide, is actually being benefited instead of harmed by the pesticide.
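A minimal numerical sketch of this equilibrium shift, assuming the standard Lotka–Volterra parameterization with prey density H, predator density P, prey growth rate r, capture rate c, conversion efficiency a, predator mortality m and per-capita pesticide kill rate q (the parameter values below are arbitrary):

```python
# Equilibria of the Lotka-Volterra model with and without a pesticide that
# kills both species at per-capita rate q. Under the standard parameterization
#   dH/dt = r*H - c*H*P - q*H    (prey)
#   dP/dt = a*c*H*P - m*P - q*P  (predator)
# the non-trivial equilibrium is H* = (m + q)/(a*c), P* = (r - q)/c.
# Parameter values below are arbitrary and for illustration only.

r, c, a, m = 1.0, 0.5, 0.2, 0.4  # prey growth, capture rate, conversion, predator mortality

def equilibrium(q):
    """Return (H*, P*) for pesticide kill rate q (q = 0 means no pesticide)."""
    return (m + q) / (a * c), (r - q) / c

H0, P0 = equilibrium(0.0)
Hq, Pq = equilibrium(0.3)
print(f"no pesticide:   H*={H0:.2f}, P*={P0:.2f}")
print(f"with pesticide: H*={Hq:.2f}, P*={Pq:.2f}")
# The pesticide raises the prey equilibrium (4.0 -> 7.0) and lowers the
# predator equilibrium (2.0 -> 1.4): the targeted pest ends up more abundant.
```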
Empirical evidence
The paradox has been documented repeatedly throughout the history of pe
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591.
On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 4:::
The numerical response in ecology is the change in predator density as a function of change in prey density. The term numerical response was coined by M. E. Solomon in 1949. It is associated with the functional response, which is the change in predator's rate of prey consumption with change in prey density. As Holling notes, total predation can be expressed as a combination of functional and numerical response. The numerical response has two mechanisms: the demographic response and the aggregational response. The numerical response is not necessarily proportional to the change in prey density, usually resulting in a time lag between prey and predator populations. For example, there is often a scarcity of predators when the prey population is increasing.
Demographic response
The demographic response consists of changes in the rates of predator reproduction or survival due to changes in prey density. The increase in prey availability translates into higher energy intake and reduced energy output. This is different from an increase in energy intake due to increased foraging efficiency, which is considered a functional response. This concept can be articulated in the Lotka-Volterra Predator-Prey Model.
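A minimal way to write the predator equation of that model (the standard Lotka-Volterra form, stated here for reference and using the symbols listed below) is dP/dt = a·c·V·P − m·P, so the predator growth rate dP/dt responds to changes in prey density V and in predator mortality m.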
a = conversion efficiency: the fraction of prey energy assimilated by the predator and turned into new predators
P = predator density
V = prey density
m = predator mortality
c = capture rate
Demographic response consists of a change in dP/dt due to a change in V and/or m. For example, if V increases, then predator growth rate (dP/dt) will increase. Likewise if the energy intake increases (due to greater food availability) and a decrease in energy output (from foraging), then predator mortality (m) will decrease and predator growth rate (dP/dt) will increase. In contrast, the functional response consists of a change in conversion efficiency (a) or capture rate (c).
The relationship between available energy and reproductive efforts can be explained with the life his
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A simple food chain includes hawks, lizards, and insects. Which will most likely happen to the lizard and hawk populations if a pesticide is sprayed to kill the insects, and the lizard and hawk populations cannot find other food in this ecosystem?
A. Both the lizard population and the hawk population will increase.
B. Both the lizard population and the hawk population will decrease.
C. The lizard population will increase, but the hawk population will decrease.
D. The lizard population will decrease, but the hawk population will increase.
Answer:
|
|
sciq-2275
|
multiple_choice
|
G2 and S are phases in what process that is important in cell division?
|
[
"osmosis",
"cytokinesis",
"mitosis",
"tissues"
] |
C
|
Relavent Documents:
Document 0:::
Cell proliferation is the process by which a cell grows and divides to produce two daughter cells. Cell proliferation leads to an exponential increase in cell number and is therefore a rapid mechanism of tissue growth. Cell proliferation requires both cell growth and cell division to occur at the same time, such that the average size of cells remains constant in the population. Cell division can occur without cell growth, producing many progressively smaller cells (as in cleavage of the zygote), while cell growth can occur without cell division to produce a single larger cell (as in growth of neurons). Thus, cell proliferation is not synonymous with either cell growth or cell division, despite these terms sometimes being used interchangeably.
Stem cells undergo cell proliferation to produce proliferating "transit amplifying" daughter cells that later differentiate to construct tissues during normal development and tissue growth, during tissue regeneration after damage, or in cancer.
The total number of cells in a population is determined by the rate of cell proliferation minus the rate of cell death.
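A minimal way to express this (assuming, purely for illustration, constant per-cell rates k_prolif and k_death for proliferation and death) is dN/dt = (k_prolif − k_death)·N, which integrates to N(t) = N(0)·e^((k_prolif − k_death)·t); when proliferation outpaces death, this gives the exponential increase in cell number described above.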
Cell size depends on both cell growth and cell division, with a disproportionate increase in the rate of cell growth leading to production of larger cells and a disproportionate increase in the rate of cell division leading to production of many smaller cells. Cell proliferation typically involves balanced cell growth and cell division rates that maintain a roughly constant cell size in the exponentially proliferating population of cells. Cell proliferation occurs by combining cell growth with regular "G1-S-G2-M" cell cycles to produce many diploid cell progeny.
In single-celled organisms, cell proliferation is largely responsive to the availability of nutrients in the environment (or laboratory growth medium).
In multicellular organisms, the process of cell proliferation is tightly controlled by gene regulatory networks encoded in the genome and executed mainly
Document 1:::
In cell biology, the cleavage furrow is the indentation of the cell's surface that begins the progression of cleavage, by which animal and some algal cells undergo cytokinesis, the final splitting of the membrane, in the process of cell division. The same proteins responsible for muscle contraction, actin and myosin, begin the process of forming the cleavage furrow, creating an actomyosin ring. Other cytoskeletal proteins and actin binding proteins are involved in the procedure.
Mechanism
Plant cells do not perform cytokinesis through this exact method but the two procedures are not totally different. Animal cells form an actin-myosin contractile ring within the equatorial region of the cell membrane that constricts to form the cleavage furrow. In plant cells, Golgi vesicle secretions form a cell plate or septum on the equatorial plane of the cell wall by the action of microtubules of the phragmoplast. The cleavage furrow in animal cells and the phragmoplast in plant cells are complex structures made up of microtubules and microfilaments that aid in the final separation of the cells into two identical daughter cells.
Cell cycle
The cell cycle begins with interphase when the DNA replicates, the cell grows and prepares to enter mitosis. Mitosis includes four phases: prophase, metaphase, anaphase, and telophase. Prophase is the initial phase when spindle fibers appear that function to move the chromosomes toward opposite poles. This spindle apparatus consists of microtubules, microfilaments and a complex network of various proteins. During metaphase, the chromosomes line up using the spindle apparatus in the middle of the cell along the equatorial plate. The chromosomes move to opposite poles during anaphase and remain attached to the spindle fibers by their centromeres. Animal cell cleavage furrow formation is caused by a ring of actin microfilaments called the contractile ring, which forms during early anaphase. Myosin is present in the region of the contracti
Document 2:::
Cell growth refers to an increase in the total mass of a cell, including both cytoplasmic, nuclear and organelle volume. Cell growth occurs when the overall rate of cellular biosynthesis (production of biomolecules or anabolism) is greater than the overall rate of cellular degradation (the destruction of biomolecules via the proteasome, lysosome or autophagy, or catabolism).
Cell growth is not to be confused with cell division or the cell cycle, which are distinct processes that can occur alongside cell growth during the process of cell proliferation, where a cell, known as the mother cell, grows and divides to produce two daughter cells. Importantly, cell growth and cell division can also occur independently of one another. During early embryonic development (cleavage of the zygote to form a morula and blastoderm), cell divisions occur repeatedly without cell growth. Conversely, some cells can grow without cell division or without any progression of the cell cycle, such as growth of neurons during axonal pathfinding in nervous system development.
In multicellular organisms, tissue growth rarely occurs solely through cell growth without cell division, but most often occurs through cell proliferation. This is because a single cell with only one copy of the genome in the cell nucleus can perform biosynthesis and thus undergo cell growth at only half the rate of two cells. Hence, two cells grow (accumulate mass) at twice the rate of a single cell, and four cells grow at 4-times the rate of a single cell. This principle leads to an exponential increase of tissue growth rate (mass accumulation) during cell proliferation, owing to the exponential increase in cell number.
Cell size depends on both cell growth and cell division, with a disproportionate increase in the rate of cell growth leading to production of larger cells and a disproportionate increase in the rate of cell division leading to production of many smaller cells. Cell proliferation typically involves bala
Document 3:::
Cell synchronization is a process by which cells in a culture at different stages of the cell cycle are brought to the same phase. Cell synchrony is a vital process in the study of cells progressing through the cell cycle as it allows population-wide data to be collected rather than relying solely on single-cell experiments. The types of synchronization are broadly categorized into two groups; physical fractionization and chemical blockade.
Physical Separation
Physical fractionation is a process by which continuously dividing cells are separated into phase-enriched populations based on characteristics such as the following:
Cell density
Cell size
The presence of cell surface epitopes marked by antibodies
Light scatter
Fluorescent emission by labeled cells.
Given that cells take on varying morphologies and surface markers throughout the cell cycle, these traits can be used to separate by phase. There are two commonly used methods.
Centrifugal Elutriation
(Previously called: counter streaming centrifugation) Centrifugal elutriation can be used to separate cells in different phases of the cell cycle based on their size and sedimentation velocity (related to sedimentation coefficient). Because of the consistent growth patterns throughout the cell cycle, centrifugal elutriation can separate cells into G1, S, G2, and M phases by increasing size (and increasing sedimentation coefficients) with diminished resolution between G2 and M phases due to cellular heterogeneity and lack of a distinct size change.
Larger cells sediment faster, so a cell in G2, which has experienced more growth time, will sediment faster than a cell in G1 and can therefore be fractionated out. Cells grown in suspension tend to be easier to elutriate given that they do not adhere to one another and have rounded, uniform shapes. However, some types of adherent cells can be treated with trypsin and resuspended for elutriation as they will assume a more rounded shape in suspension.
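A rough sketch of why larger cells sediment faster (assuming Stokes drag on an approximately spherical cell in the centrifugal field) is the sedimentation velocity v ≈ d²·(ρ_cell − ρ_medium)·g_eff / (18·μ), where d is the cell diameter, ρ_cell and ρ_medium are the densities of the cell and the medium, μ is the medium viscosity, and g_eff is the effective centrifugal acceleration; doubling the diameter roughly quadruples the sedimentation speed.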
Flow Cytometry a
Document 4:::
Mitotic index is defined as the ratio between the number of a population's cells undergoing mitosis to its total number of cells.
Purpose
The mitotic index is a measure of cellular proliferation.
It is defined as the percentage of cells undergoing mitosis in a given population of cells. Mitosis is the division of somatic cells into two daughter cells. Durations of the cell cycle and mitosis vary in different cell types. An elevated mitotic index indicates more cells are dividing. In cancer cells, the mitotic index may be elevated compared to normal growth of tissues or cellular repair of the site of an injury. The mitotic index is therefore an important prognostic factor predicting both overall survival and response to chemotherapy in most types of cancer. It may lose much of its predictive value for elderly populations. For example, a low mitotic index loses any prognostic value for women over 70 years old with breast cancer.
Calculation
The mitotic index is the number of cells undergoing mitosis divided by the total number of cells.
A typical figure of mitotic index includes statements like "10 mitotic figures are noted per 10 high power fields" followed by "4 mitotic figures noted per 50 high power fields."
Formula
Mitotic index = (P + M + A + T) / N
where (P + M + A + T) is the sum of all cells in the mitotic phases (prophase, metaphase, anaphase and telophase, respectively) and N is the total number of cells.
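As a minimal illustrative calculation (the counts below are hypothetical, not taken from any study), the same formula in Python:

def mitotic_index(prophase, metaphase, anaphase, telophase, total_cells):
    # Sum of cells observed in any mitotic phase: P + M + A + T
    mitotic_cells = prophase + metaphase + anaphase + telophase
    # Ratio of mitotic cells to all counted cells (multiply by 100 for a percentage)
    return mitotic_cells / total_cells

# Example: 4 + 3 + 2 + 1 = 10 mitotic cells among 200 counted cells
print(mitotic_index(4, 3, 2, 1, 200))  # 0.05, i.e. 5%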
Examples
The fastest rate of mitosis happens in the zygote, embryo and infant stage for humans and animals because mitosis is essential for embryological development. Mitosis is also required at a higher rate to grow and repair tissue. Some examples include human lymph nodes and bone marrow. Also, skin, hair, and the cells lining the intestines (epithelial cells) have high rates of mitosis. That's because those tissues constantly need to be repaired (by the cells being replaced) or to grow. Plants have higher rates of mitosis at the cells of the shoot and root tips.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
G2 and S are phases in what process that is important in cell division?
A. osmosis
B. cytokinesis
C. mitosis
D. tissues
Answer:
|
|
sciq-10917
|
multiple_choice
|
Bacterial contamination of foods can lead to digestive problems called what?
|
[
"cancer",
"food poisoning",
"the flu",
"butterflies in your stomach"
] |
B
|
Relavent Documents:
Document 0:::
Gastrointestinal pathology is the subspecialty of surgical pathology which deals with the diagnosis and characterization of neoplastic and non-neoplastic diseases of the digestive tract and accessory organs, such as the pancreas and liver.
Sub-specialty recognition and Board Certification
Gastrointestinal pathology (including liver, gallbladder and pancreas) is a recognized sub-specialty discipline of surgical pathology. Recognition of a sub-specialty is generally related to dedicated fellowship training offered within the subspecialty or, alternatively, to surgical pathologists with a special interest and extensive experience in gastrointestinal pathology. There are approximately 30 gastrointestinal pathology fellowships offered within the United States (predominantly academic, and more recently three "corporate" fellowships). This translates to fewer than 40 fellowship-trained gastrointestinal pathologists completing training in the United States each year.
Fellowship in gastrointestinal pathology involves:
diagnostic evaluation of surgical (whole organ) and biopsy pathology of gastrointestinal tissue, [with the exception of at least one corporate fellowship]
consistent interaction with clinical colleagues (gastroenterologists, colorectal surgeons and gastrointestinal radiologists) to ensure understanding of the clinical aspects of gastrointestinal disease, treatment modalities and other diagnostic findings;
research in gastrointestinal physiology, disease mechanisms and histomorphology
education of general pathologists and clinical colleagues.
During the course of a one-year gastrointestinal pathology fellowship, the GI-liver pathology fellow will review between 8,000 and 15,000 gastrointestinal and liver biopsy and surgical specimens with all clinical history, laboratory data and frequently, knowledge of response to treatment. This volume of cases is similar to approximately five years of case experience for general surgical pathologists in pri
Document 1:::
Bacteriology is the branch and specialty of biology that studies the morphology, ecology, genetics and biochemistry of bacteria as well as many other aspects related to them. This subdivision of microbiology involves the identification, classification, and characterization of bacterial species. Because of the similarity of thinking and working with microorganisms other than bacteria, such as protozoa, fungi, and viruses, there has been a tendency for the field of bacteriology to extend as microbiology. The terms were formerly often used interchangeably. However, bacteriology can be classified as a distinct science.
Overview
Definition
Bacteriology is the study of bacteria and their relation to medicine. Bacteriology evolved from physicians needing to apply the germ theory to address the concerns relating to disease spreading in hospitals in the 19th century. The identification and characterization of bacteria associated with diseases led to advances in pathogenic bacteriology. Koch's postulates played a role in identifying the relationships between bacteria and specific diseases. Since then, bacteriology has played a role in successful advances in science such as bacterial vaccines like diphtheria toxoid and tetanus toxoid. Bacteriology can be studied and applied in many sub-fields relating to agriculture, marine biology, water pollution, bacterial genetics, veterinary medicine, biotechnology and others.
Bacteriologists
A bacteriologist is a microbiologist or other trained professional in bacteriology. Bacteriologists are interested in studying and learning about bacteria, as well as using their skills in clinical settings. This includes investigating properties of bacteria such as morphology, ecology, genetics and biochemistry, phylogenetics, genomics and many other areas related to bacteria like disease diagnostic testing. They can also work as medical scientists, veterinary scientists, or diagnostic technicians in locations like clinics, blood banks, hospitals
Document 2:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC), which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on the structure and functioning (anatomy, physiology, biochemistry) of the human body and its relevance to human health, drawing on Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 3:::
Almost 2,000 people, mostly schoolchildren from the Caraga region of the Philippines, experienced food poisoning after consuming durian, mangosteen, and mango flavored candies in 2015. The Food and Drug Administration of the Philippines confirmed that the sweets were contaminated by staphylococcus bacteria, a bacteria commonly found on human skin. The cause was suspected to be accidental bacterial contamination by vendors, who had repackaged the candy.
Victims
Most of the victims of the food poisoning incident were schoolchildren within the Caraga Region. Victims reported experiencing symptoms such as diarrhea, dizziness, and stomachache. The cases were reported by at least nine health facilities based in Surigao del Sur, Surigao del Norte and Agusan del Sur. At least 10 people were hospitalized.
The first cases were reported in Cagwait, Surigao del Sur in the morning of July 10.
Food poisoning symptoms were reported in the following towns:
Surigao del Sur
Carrascal
Cagwait
Cortes, Surigao del Sur
Lianga
San Agustin
Madrid
Marihatag
Tago
Tandag
Surigao del Norte
Placer
Surigao City
Agusan del Sur
Bayugan
Response
Acting Mayor Paolo Duterte of Davao City ordered an urgent investigation on July 10 regarding the matter to determine the exact cause of the candy contamination incident.
On July 11, 2015, the Department of Health in the Caraga declared a food poisoning outbreak in the region. Hospitals across the Caraga Region were put into white alert in response to the incident.
Investigation
The Food and Drug Administration (FDA) conducted microbiological tests on the samples of the contaminated candies. The FDA had suspected that the candies were contaminated by E. coli, Salmonella or staphylococcus based on the symptoms reported by victims of the food poisoning incident. They announced that the candy samples tested positive for Staphylococcus aureus.
The FDA traced the contaminated candies' origin to two manufacturing facilities in Davao City
Document 4:::
A microbiologist (from Greek ) is a scientist who studies microscopic life forms and processes. This includes study of the growth, interactions and characteristics of microscopic organisms such as bacteria, algae, fungi, and some types of parasites and their vectors. Most microbiologists work in offices and/or research facilities, both in private biotechnology companies and in academia. Most microbiologists specialize in a given topic within microbiology such as bacteriology, parasitology, virology, or immunology.
Duties
Microbiologists generally work in some way to increase scientific knowledge or to utilise that knowledge in a way that improves outcomes in medicine or some industry. For many microbiologists, this work includes planning and conducting experimental research projects in some kind of laboratory setting. Others may have a more administrative role, supervising scientists and evaluating their results. Microbiologists working in the medical field, such as clinical microbiologists, may see patients or patient samples and do various tests to detect disease-causing organisms.
For microbiologists working in academia, duties include performing research in an academic laboratory, writing grant proposals to fund research, as well as some amount of teaching and designing courses. Microbiologists in industry roles may have similar duties except research is performed in industrial labs in order to develop or improve commercial products and processes. Industry jobs may also include some degree of sales and marketing work, as well as regulatory compliance duties. Microbiologists working in government may have a variety of duties, including laboratory research, writing and advising, developing and reviewing regulatory processes, and overseeing grants offered to outside institutions. Some microbiologists work in the field of patent law, either with national patent offices or private law practices. Their duties include research and navigation of intellectual proper
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Bacterial contamination of foods can lead to digestive problems called what?
A. cancer
B. food poisoning
C. the flu
D. butterflies in your stomach
Answer:
|
|
sciq-3789
|
multiple_choice
|
Scientists think that stars and galaxies make up only a small part of the matter in the universe. What is the rest of the matter called?
|
[
"typical matter",
"light matter",
"dark matter",
"cold matter"
] |
C
|
Relavent Documents:
Document 0:::
Cosmic dust, also called extraterrestrial dust, space dust, or star dust, is dust that occurs in outer space or has fallen onto Earth. Most cosmic dust particles measure between a few molecules and , such as micrometeoroids. Larger particles are called meteoroids. Cosmic dust can be further distinguished by its astronomical location: intergalactic dust, interstellar dust, interplanetary dust (as in the zodiacal cloud), and circumplanetary dust (as in a planetary ring). There are several methods to obtain space dust measurement.
In the Solar System, interplanetary dust causes the zodiacal light. Solar System dust includes comet dust, planetary dust (like from Mars), asteroidal dust, dust from the Kuiper belt, and interstellar dust passing through the Solar System. Thousands of tons of cosmic dust are estimated to reach Earth's surface every year, with most grains having a mass between 10^−16 kg (0.1 pg) and 10^−4 kg (0.1 g). The density of the dust cloud through which the Earth is traveling is approximately 10^−6 dust grains/m3.
Cosmic dust contains some complex organic compounds (amorphous organic solids with a mixed aromatic–aliphatic structure) that could be created naturally, and rapidly, by stars. A smaller fraction of dust in space is "stardust" consisting of larger refractory minerals that condensed as matter left by stars.
Interstellar dust particles were collected by the Stardust spacecraft and samples were returned to Earth in 2006.
Study and importance
Cosmic dust was once solely an annoyance to astronomers, as it obscures objects they wished to observe. When infrared astronomy began, the dust particles were observed to be significant and vital components of astrophysical processes. Their analysis can reveal information about phenomena like the formation of the Solar System. For example, cosmic dust can drive the mass loss when a star is nearing the end of its life, play a part in the early stages of star formation, and form planets. In the Solar System,
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
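As an illustrative sketch (the three-skill domain below is made up, not taken from the text), a knowledge space can be represented as a family of feasible skill sets that is closed under union, which the following Python snippet checks:

from itertools import combinations

# Hypothetical feasible knowledge states over the skills a, b, c
states = [set(), {"a"}, {"b"}, {"a", "b"}, {"a", "b", "c"}]

def closed_under_union(states):
    # A knowledge space must contain the union of any two of its states
    return all(any((s1 | s2) == s3 for s3 in states)
               for s1, s2 in combinations(states, 2))

print(closed_under_union(states))  # True: this family qualifies as a knowledge space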
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about the subject is then a subset of that set; the set of
Document 3:::
Hot dark matter (HDM) is a theoretical form of dark matter which consists of particles that travel with ultrarelativistic velocities.
Dark matter is a form of matter that neither emits nor absorbs light. Within physics, this behavior is characterized by dark matter not interacting with electromagnetic radiation, hence making it dark and rendering it undetectable via conventional instruments in physics. Data from galaxy rotation curves indicate that approximately 80% of the mass of a galaxy cannot be seen, forcing researchers to innovate ways that indirectly detect it through dark matter's effects on gravitational fluctuations. As we shall see below, it is useful to differentiate dark matter into "hot" (HDM) and "cold" (CDM) types–some even suggesting a middle-ground of "warm" dark matter (WDM). The terminology refers to the mass of the dark matter particles (which dictates the speed at which they travel): HDM travels faster than CDM because the HDM particles are theorized to be of lower mass.
Role in galaxy formation
In terms of its application, the distribution of hot dark matter could also help explain how clusters and superclusters of galaxies formed after the Big Bang. Theorists claim that there exist two classes of dark matter: 1) those that "congregate around individual members of a cluster of visible galaxies" and 2) those that encompass "the clusters as a whole." Because cold dark matter possesses a lower velocity, it could be the source of "smaller, galaxy-sized lumps," as shown in the image. Hot dark matter, then, should correspond to the formation of larger mass aggregates that surround whole galaxy clusters. However, data from the cosmic microwave background radiation, as measured by the COBE satellite, is highly uniform, and such high-velocity hot dark matter particles cannot form clumps as small as galaxies beginning from such a smooth initial state, highlighting a discrepancy in what dark matter theory and the actual data are saying. Theoretically
Document 4:::
Particle chauvinism is the term used by British astrophysicist Martin Rees to describe the (allegedly erroneous) assumption that what we think of as normal matter – atoms, quarks, electrons, etc. (excluding dark matter or other matter) – is the basis of matter in the universe, rather than a rare phenomenon.
Dominance of dark matter
With the growing recognition in the late 20th century of the presence of dark matter in the universe, ordinary baryonic matter has come to be seen as something of a cosmic afterthought. As John D. Barrow put it, "This would be the final Copernican twist in our status in the material universe. Not only are we not at the center of the universe: we are not even made of the predominant form of matter."
The 21st century saw the share of baryonic matter in the total energy of the universe downgraded further, to perhaps as low as 1%, further extending what has been called the demise of particle chauvinism, before being revised up to some 5% of the contents of the universe.
See also
Anthropic principle
Carbon chauvinism
Mediocrity principle
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Scientists think that stars and galaxies make up only a small part of the matter in the universe. What is the rest of the matter called?
A. typical matter
B. light matter
C. dark matter
D. cold matter
Answer:
|
|
sciq-5774
|
multiple_choice
|
A system of glands secretes what chemical messenger molecules into the blood?
|
[
"metabolytes",
"enzymes",
"acids",
"hormones"
] |
D
|
Relavent Documents:
Document 0:::
This is a list of topics in molecular biology. See also index of biochemistry articles.
Document 1:::
Heterocrine glands (or composite glands) are the glands which function as both exocrine gland and endocrine gland. These glands exhibit a unique and diverse secretory function encompassing the release of proteins and non-proteinaceous compounds, endocrine and exocrine secretions into both the bloodstream and ducts respectively, thereby bridging the realms of internal and external communication within the body. This duality allows them to serve crucial roles in regulating various physiological processes and maintaining homeostasis. These include the gonads (testes and ovaries), pancreas and salivary glands.
Pancreas releases digestive enzymes into the small intestine via ducts (exocrine) and secretes insulin and glucagon into the bloodstream (endocrine) to regulate blood sugar level. Testes produce sperm, which is released through ducts (exocrine), and they also secrete testosterone into the bloodstream (endocrine). Similarly, ovaries release ova through ducts (exocrine) and produce estrogen and progesterone (endocrine). Salivary glands secrete saliva through ducts to aid in digestion (exocrine) and produce epidermal growth factor and insulin-like growth factor (endocrine).
Anatomy
Heterocrine glands typically have a complex structure that enables them to produce and release different types of secretions. The two primary components of these glands are:
Endocrine component: Heterocrine glands produce hormones, which are chemical messengers that travel through the bloodstream to target organs or tissues. These hormones play a vital role in regulating numerous physiological processes, such as metabolism, growth, and the immune response.
Exocrine component: In addition to their endocrine function, heterocrine glands secrete substances directly into ducts or cavities, which can be released through various body openings. These exocrine secretions can include enzymes, mucus, and other substances that aid in digestion, lubrication, or protection.
Characteristics and Func
Document 2:::
Sudomotor function refers to the autonomic nervous system control of sweat gland activity in response to various environmental and individual factors. Sweat production is a vital thermoregulatory mechanism used by the body to prevent heat-related illness as the evaporation of sweat is the body’s most effective method of heat reduction and the only cooling method available when the air temperature rises above skin temperature. In addition, sweat plays key roles in grip, microbial defense, and wound healing.
Physiology
Human sweat glands are primarily classified as either eccrine or apocrine glands. Eccrine glands open directly onto the surface of the skin, while apocrine glands open into hair follicles. Eccrine glands are the predominant sweat gland in the human body with numbers totaling up to 4 million. They are located within the reticular dermal layer of the skin and distributed across nearly the entire surface of the body with the largest numbers occurring in the palms and soles.
Eccrine sweat is secreted in response to both emotional and thermal stimulation. Eccrine glands are primarily innervated by small-diameter, unmyelinated class C-fibers from postganglionic sympathetic cholinergic neurons. Increases in body and skin temperature are detected by visceral and peripheral thermoreceptors, which send signals via class C and Aδ-fiber afferent somatic neurons through the lateral spinothalamic tract to the preoptic nucleus of the hypothalamus for processing. In addition, there are warm-sensitive neurons located within the preoptic nucleus that detect increases in core body temperature. Efferent pathways then descend ipsilaterally from the hypothalamus through the pons and medulla to preganglionic sympathetic cholinergic neurons in the intermediolateral column of the spinal cord. The preganglionic neurons synapse with postganglionic cholinergic sudomotor (and to a lesser extent adrenergic) neurons in the paravertebral sympathetic ganglia. When the action potentia
Document 3:::
Biochemistry is the study of the chemical processes in living organisms. It deals with the structure and function of cellular components such as proteins, carbohydrates, lipids, nucleic acids and other biomolecules.
Articles related to biochemistry include:
0–9
2-amino-5-phosphonovalerate - 3' end - 5' end
Document 4:::
Uterine glands or endometrial glands are tubular glands, lined by a simple columnar epithelium, found in the functional layer of the endometrium that lines the uterus. Their appearance varies during the menstrual cycle. During the proliferative phase, uterine glands appear long due to estrogen secretion by the ovaries. During the secretory phase, the uterine glands become very coiled with wide lumens and produce a glycogen-rich secretion known as histotroph or uterine milk. This change corresponds with an increase in blood flow to spiral arteries due to increased progesterone secretion from the corpus luteum. During the pre-menstrual phase, progesterone secretion decreases as the corpus luteum degenerates, which results in decreased blood flow to the spiral arteries. The functional layer of the uterus containing the glands becomes necrotic, and eventually sloughs off during the menstrual phase of the cycle.
They are of small size in the unimpregnated uterus, but shortly after impregnation become enlarged and elongated, presenting a contorted or waved appearance.
Function
Hormones produced in early pregnancy stimulate the uterine glands to secrete a number of substances to give nutrition and protection to the embryo and fetus, and the fetal membranes. These secretions are known as histiotroph, alternatively histotroph, and also as uterine milk. Important uterine milk proteins are glycodelin-A, and osteopontin.
Some secretory components from the uterine glands are taken up by the secondary yolk sac lining the exocoelomic cavity during pregnancy, and may thereby assist in providing fetal nutrition.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A system of glands secretes what chemical messenger molecules into the blood?
A. metabolytes
B. enzymes
C. acids
D. hormones
Answer:
|
|
sciq-11371
|
multiple_choice
|
An aqueous solution is a homogeneous mixture in which the most abundant component is what?
|
[
"water",
"air",
"oxygen",
"blood"
] |
A
|
Relavent Documents:
Document 0:::
In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution.
The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible").
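As a brief illustration of such a solubility equilibrium (using a generic, hypothetical sparingly soluble salt MX rather than a specific compound): MX(s) ⇌ M+(aq) + X−(aq), with solubility product Ksp = [M+][X−]; if s denotes the molar solubility, then [M+] = [X−] = s and s = √Ksp.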
The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first.
The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy.
Under certain conditions, the concentration of the solute can exceed its usual solubility limit. The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears.
The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property de
Document 1:::
Semper rehydration solution is a mixture used for the management of dehydration. Each liter of Semper rehydration solution contains 189 mmol glucose, 40 mmol Na+, 35 mmol Cl−, 20 mmol K+ and 25 mmol HCO3−.
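As a quick arithmetic check using only the figures above, the cations supply 40 + 20 = 60 mmol of positive charge per liter and the anions 35 + 25 = 60 mmol of negative charge per liter, so the listed electrolytes are charge-balanced.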
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
A breakthrough curve in adsorption is the course of the effluent adsorptive concentration at the outlet of a fixed bed adsorber. Breakthrough curves are important for adsorptive separation technologies and for the characterization of porous materials.
Importance
Since almost all adsorptive separation processes are dynamic, meaning that they run under flow, the separation performance of porous materials for these applications has to be tested under flow as well. Since separation processes run with mixtures of different components, measuring several breakthrough curves yields thermodynamic mixture equilibria (mixture sorption isotherms) that are hardly accessible with static manometric sorption characterization. This enables the determination of sorption selectivities in the gaseous and liquid phases.
The determination of breakthrough curves is the foundation of many other processes, like the pressure swing adsorption. Within this process, the loading of one adsorber is equivalent to a breakthrough experiment.
Measurement
A fixed bed of porous materials (e.g. activated carbons and zeolites) is pressurized and purged with a carrier gas. After the flow becomes stationary, one or more adsorptives are added to the carrier gas, resulting in a step-wise change of the inlet concentration. This is in contrast to chromatographic separation processes, where pulse-wise changes of the inlet concentrations are used. The course of the adsorptive concentrations at the outlet of the fixed bed is monitored.
Results
Integration of the area above the entire breakthrough curve gives the maximum loading of the adsorptive material. Additionally, the duration of the breakthrough experiment until a certain threshold of the adsorptive concentration at the outlet can be measured, which enables the calculation of a technically usable sorption capacity. Up to this time, the quality of the product stream can be maintained. The shape of the breakthrough curves contains informat
Document 4:::
Water () is a polar inorganic compound that is at room temperature a tasteless and odorless liquid, which is nearly colorless apart from an inherent hint of blue. It is by far the most studied chemical compound and is described as the "universal solvent" and the "solvent of life". It is the most abundant substance on the surface of Earth and the only common substance to exist as a solid, liquid, and gas on Earth's surface. It is also the third most abundant molecule in the universe (behind molecular hydrogen and carbon monoxide).
Water molecules form hydrogen bonds with each other and are strongly polar. This polarity allows it to dissociate ions in salts and bond to other polar substances such as alcohols and acids, thus dissolving them. Its hydrogen bonding causes its many unique properties, such as having a solid form less dense than its liquid form, a relatively high boiling point of 100 °C for its molar mass, and a high heat capacity.
Water is amphoteric, meaning that it can exhibit properties of an acid or a base, depending on the pH of the solution that it is in; it readily produces both hydronium (H3O+) and hydroxide (OH−) ions. Related to its amphoteric character, it undergoes self-ionization. The product of the activities (or, approximately, the concentrations) of H3O+ and OH− is a constant, so their respective concentrations are inversely proportional to each other.
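Written out, this is the standard self-ionization (ion product) constant: Kw = [H3O+][OH−] ≈ 1.0 × 10^−14 at 25 °C, so, for example, [H3O+] = 10^−5 mol/L implies [OH−] = 10^−9 mol/L.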
Physical properties
Water is the chemical substance with chemical formula ; one molecule of water has two hydrogen atoms covalently bonded to a single oxygen atom. Water is a tasteless, odorless liquid at ambient temperature and pressure. Liquid water has weak absorption bands at wavelengths of around 750 nm which cause it to appear to have a blue color. This can easily be observed in a water-filled bath or wash-basin whose lining is white. Large ice crystals, as in glaciers, also appear blue.
Under standard conditions, water is primarily a liquid, unlike other analogous hydrides of the oxygen family, which are generally gaseou
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
An aqueous solution is a homogeneous mixture in which the most abundant component is what?
A. water
B. air
C. oxygen
D. blood
Answer:
|
|
sciq-10210
|
multiple_choice
|
What happens when waves reach the shore?
|
[
"diffuse and recede",
"surge and drown",
"topple and break",
"repel and attract"
] |
C
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In fluid dynamics, wave setup is the increase in mean water level due to the presence of breaking waves. Similarly, wave setdown is a wave-induced decrease of the mean water level before the waves break (during the shoaling process). For short, the whole phenomenon is often denoted as wave setup, including both increase and decrease of mean elevation. This setup is primarily present in and near the coastal surf zone. Besides a spatial variation in the (mean) wave setup, also a variation in time may be present – known as surf beat – causing infragravity wave radiation.
Wave setup can be mathematically modeled by considering the variation in radiation stress. Radiation stress is the tensor of excess horizontal-momentum fluxes due to the presence of the waves.
In and near the coastal surf zone
As a progressive wave approaches shore and the water depth decreases, the wave height increases due to wave shoaling. As a result, there is additional wave-induced flux of horizontal momentum. The horizontal momentum equations of the mean flow requires this additional wave-induced flux to be balanced: this causes a decrease in the mean water level before the waves break, called a "setdown".
After the waves break, the wave energy flux is no longer constant, but decreasing due to energy dissipation. The radiation stress therefore decreases after the break point, causing a free surface level increase to balance: wave setup. Both of the above descriptions are specifically for beaches with mild bed slope.
Wave setup is particularly of concern during storm events, when the effects of big waves generated by wind from the storm are able to increase the mean sea level (by wave setup), enhancing the risks of damage to coastal infrastructure.
Wave setup value
The radiation stress pushes the water towards the coast, and is then pushed up, causing an increase in the water level. At a given moment, that increase is such
that its hydrostatic pressure is equal to the radiation stress. Fr
Document 2:::
Wave loading is most commonly the application of a pulsed or wavelike load to a material or object. This is most commonly used in the analysis of piping, ships, or building structures which experience wind, water, or seismic disturbances.
Examples of wave loading
Offshore storms and pipes: As large waves pass over shallowly buried pipes, water pressure increases above them. As the trough approaches, pressure over the pipe drops, and this sudden and repeated variation in pressure can break pipes. The difference in pressure for a wave with a wave height of about 10 m would be equivalent to one atmosphere (101.3 kPa or 14.7 psi) of pressure variation between crest and trough, and repeated fluctuations over pipes in relatively shallow environments could set up resonance vibrations within pipes or structures and cause problems; a rough check of this figure is sketched below.
Engineering oil platforms: The effects of wave-loading are a serious issue for engineers designing oil platforms, which must contend with the effects of wave loading, and have devised a number of algorithms to do so.
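As an order-of-magnitude check of the one-atmosphere figure quoted above (assuming the variation is roughly hydrostatic and taking a typical seawater density of about 1025 kg/m3): Δp ≈ ρ·g·H ≈ 1025 × 9.81 × 10 ≈ 1.0 × 10^5 Pa, which is indeed close to one atmosphere (101.3 kPa).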
Document 3:::
Wind-wave dissipation or "swell dissipation" is the process in which a wave generated via a weather system loses the mechanical energy transferred to it from the atmosphere via wind. Wind waves, as their name suggests, are generated by wind transferring energy from the atmosphere to the ocean's surface; capillary gravity waves play an essential role in this effect. "Wind waves" or "swell" are also known as surface gravity waves.
General physics and theory
The process of wind-wave dissipation can be explained by applying energy spectrum theory in a manner similar to that used for the formation of wind waves (generally assuming that spectral dissipation is a function of the wave spectrum). However, although recent innovative improvements in field observations (such as those of Banner & Babanin et al.) have contributed to solving the riddles of wave-breaking behavior, there is still no clear understanding of the exact theory of the wind-wave dissipation process because of its non-linear behavior.
Based on past and present observations and derived theories, the physics of ocean-wave dissipation can be categorized by the regions the waves pass through, according to water depth. In deep water, wave dissipation occurs by the action of friction or drag forces, such as opposite-directed winds or viscous forces generated by turbulent flows (usually nonlinear forces). In shallow water, wave dissipation mostly takes the form of shore wave breaking (see Types of wave breaking).
Some simple general descriptions of wind-wave dissipation (defined by Luigi Cavaleri et al.) have been proposed that consider only ocean surface waves such as wind waves. For simplicity, the interactions of waves with the vertical structure of the upper layers of the ocean are ignored in many proposed mechanisms.
Sources of wind-wave dissipation
In general, the physics of wave dissipation can be categorized by considering its dissipation sources, such as 1) wa
Document 4:::
Branched flow refers to a phenomenon in wave dynamics that produces a tree-like pattern involving successive, mostly forward scattering events by smooth obstacles deflecting traveling rays or waves. Sudden and significant momentum or wavevector changes are absent, but accumulated small changes can lead to large momentum changes. The path of a single ray is less important than the environs around a ray, which rotate, compress, and stretch around in an area preserving way. Even more revealing are groups, or manifolds, of neighboring rays extending over significant zones. Starting rays out from a point but varying their direction over a range, one to the next, or from different points along a line all with the same initial directions are examples of a manifold. Waves have analogous launching conditions, such as a point source spraying in many directions, or an extended plane wave heading in one direction. The ray bending or refraction leads to characteristic structure in phase space and nonuniform distributions in coordinate space that look somehow universal and resemble branches in trees or stream beds. The branches take on non-obvious paths through the refracting landscape that are indirect and nonlocal results of terrain already traversed. For a given refracting landscape, the branches will look completely different depending on the initial manifold.
Examples
Two-dimensional electron gas
Branched flow was first identified in experiments with a two-dimensional electron gas. Electrons flowing from a quantum point contact were scanned using a scanning probe microscope. Instead of usual diffraction patterns, the electrons flowed forming branching strands that persisted for several correlation lengths of the background potential.
Ocean dynamics
Focusing of random waves in the ocean can also lead to branched flow. The fluctuation in the depth of the ocean floor can be described as a random potential. A tsunami wave propagating in such a medium will form branches which
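The mechanism described above, in which weak and smooth refraction accumulates into caustics, can be illustrated with a short ray-tracing sketch. The following Python snippet is only an illustration: the random potential, its strength, and every numerical value are arbitrary assumptions rather than parameters from the experiments discussed here.

import numpy as np

rng = np.random.default_rng(0)

# Smooth random potential V(x, y) built from a handful of long-wavelength
# Fourier modes; all parameter values are arbitrary illustrative choices.
n_modes = 50
kx = rng.normal(scale=2.0, size=n_modes)
ky = rng.normal(scale=2.0, size=n_modes)
phase = rng.uniform(0.0, 2.0 * np.pi, size=n_modes)
amp = 0.5 / n_modes                     # weak potential: mostly forward scattering

def grad_V(x, y):
    """Gradient of V(x, y) = sum_i amp * cos(kx_i*x + ky_i*y + phase_i)."""
    s = np.sin(kx[:, None] * x + ky[:, None] * y + phase[:, None])
    return (-np.sum(amp * kx[:, None] * s, axis=0),
            -np.sum(amp * ky[:, None] * s, axis=0))

# A manifold of rays: identical initial direction (+x), spread out along y.
n_rays = 400
x = np.zeros(n_rays)
y = np.linspace(-1.0, 1.0, n_rays)
px = np.ones(n_rays)
py = np.zeros(n_rays)

dt = 0.01
for _ in range(2000):                   # integrate dr/dt = p, dp/dt = -grad V
    dVdx, dVdy = grad_V(x, y)
    px -= dVdx * dt
    py -= dVdy * dt
    x += px * dt
    y += py * dt

# Rays focused by the accumulated small deflections pile up in narrow ranges
# of y; those overdense strands are the branches.
density, _ = np.histogram(y, bins=40)
print(density)

Plotting y against x for all rays (for example with matplotlib) would show the characteristic strands directly; the final histogram is only a crude text proxy for that picture.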
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What happens when waves reach the shore?
A. diffuse and recede
B. surge and drown
C. topple and break
D. repel and attract
Answer:
|
|
sciq-6381
|
multiple_choice
|
What cell structures capture light energy from the sun and use it with water and carbon dioxide to produce sugars for food?
|
[
"chloroplasts",
"fibroblasts",
"nuclei",
"ribosomes"
] |
A
|
Relavent Documents:
Document 0:::
Transfer cells are specialized parenchyma cells that have an increased surface area, due to infoldings of the plasma membrane. They facilitate the transport of sugars from a sugar source, mainly mature leaves, to a sugar sink, often developing leaves or fruits. They are found in nectaries of flowers and some carnivorous plants.
Transfer cells are especially found in plants in regions of nutrient absorption or secretion.
The term transfer cell was coined by Brian Gunning and John Stewart Pate. Their presence is generally correlated with the existence of extensive solute influxes across the plasma membrane.
Document 1:::
C3 carbon fixation is the most common of three metabolic pathways for carbon fixation in photosynthesis, the other two being C4 and CAM. This process converts carbon dioxide and ribulose bisphosphate (RuBP, a 5-carbon sugar) into two molecules of 3-phosphoglycerate through the following reaction:
CO2 + H2O + RuBP → (2) 3-phosphoglycerate
This reaction was first discovered by Melvin Calvin, Andrew Benson and James Bassham in 1950. C3 carbon fixation occurs in all plants as the first step of the Calvin–Benson cycle. (In C4 and CAM plants, carbon dioxide is drawn out of malate and into this reaction rather than directly from the air.)
Plants that survive solely on C3 fixation (C3 plants) tend to thrive in areas where sunlight intensity is moderate, temperatures are moderate, carbon dioxide concentrations are around 200 ppm or higher, and groundwater is plentiful. The C3 plants, originating during the Mesozoic and Paleozoic eras, predate the C4 plants and still represent approximately 95% of Earth's plant biomass, including important food crops such as rice, wheat, soybeans and barley.
C3 plants cannot grow in very hot areas at today's atmospheric CO2 level (significantly depleted during hundreds of millions of years from above 5000 ppm) because RuBisCO incorporates more oxygen into RuBP as temperatures increase. This leads to photorespiration (also known as the oxidative photosynthetic carbon cycle, or C2 photosynthesis), which causes a net loss of carbon and nitrogen from the plant and can therefore limit growth.
C3 plants lose up to 97% of the water taken up through their roots by transpiration. In dry areas, C3 plants shut their stomata to reduce water loss, but this stops CO2 from entering the leaves and therefore reduces the concentration of CO2 in the leaves. This lowers the CO2:O2 ratio and therefore also increases photorespiration. C4 and CAM plants have adaptations that allow them to survive in hot and dry areas, and they can therefore out-compete
Document 2:::
In contrast to the Cladophorales where nuclei are organized in regularly spaced cytoplasmic domains, the cytoplasm of Bryopsidales exhibits streaming, enabling transportation of organelles, transcripts and nutrients across the plant.
The Sphaeropleales also contain several common freshwat
Document 3:::
A stem is one of two main structural axes of a vascular plant, the other being the root. It supports leaves, flowers and fruits; transports water and dissolved substances between the roots and the shoots in the xylem and phloem; carries out photosynthesis; stores nutrients; and produces new living tissue. The stem can also be called the halm, haulm, or culm.
The stem is normally divided into nodes and internodes:
The nodes are the points of attachment for leaves and can hold one or more leaves. There are sometimes axillary buds between the stem and leaf which can grow into branches (with leaves, conifer cones, or flowers). Adventitious roots may also be produced from the nodes. Vines may produce tendrils from nodes.
The internodes distance one node from another.
The term "shoots" is often confused with "stems"; "shoots" generally refers to new fresh plant growth, including both stems and other structures like leaves or flowers.
In most plants, stems are located above the soil surface, but some plants have underground stems.
Stems have several main functions:
Support for and the elevation of leaves, flowers, and fruits. The stems keep the leaves in the light and provide a place for the plant to keep its flowers and fruits.
Transport of fluids between the roots and the shoots in the xylem and phloem.
Storage of nutrients.
Production of new living tissue. The normal lifespan of plant cells is one to three years. Stems have cells called meristems that annually generate new living tissue.
Photosynthesis.
Stems have two pipe-like tissues called xylem and phloem. The xylem tissue arises from the cell facing inside and transports water by the action of transpiration pull, capillary action, and root pressure. The phloem tissue arises from the cell facing outside and consists of sieve tubes and their companion cells. The function of phloem tissue is to distribute food from photosynthetic tissue to other tissues. The two tissues are separated by cambium, a tis
Document 4:::
Disc shedding is the process by which photoreceptor cells in the retina are renewed. The disc formations in the outer segment of photoreceptors, which contain the photosensitive opsins, are completely renewed every ten days.
Photoreceptors
The retina contains two types of photoreceptor – rod cells and cone cells. There are about 6-7 million cones that mediate photopic vision, and they are concentrated in the macula at the center of the retina. There are about 120 million rods that are more sensitive than the cones and therefore mediate scotopic vision.
A vertebrate's photoreceptors are divided into three parts:
an outer segment that contains the photosensitive opsins
an inner segment that contains the cell's metabolic machinery (endoplasmic reticulum, Golgi complex, ribosomes, mitochondria)
a synaptic terminal at which contacts with second-order neurons of the retina are made
Discs
The photosensitive outer segment consists of a series of discrete membranous discs.
While in the rod, these discs lack any direct connection to the surface membrane (with the exception of a few recently formed basal discs that remain in continuity with the surface), the cone's photosensitive membrane is continuous with the surface membrane. The outer segment (OS) discs are densely packed with rhodopsin for high-sensitivity light detection. These discs are completely replaced once every ten days and this continuous renewal continues throughout the lifetime of the sighted animal.
After the opsins are synthesized, they fuse to the plasma membrane, which then invaginates with discs budding off internally, forming the tightly packed stacks of outer segment discs. The process from translation of opsin to formation of the discs takes just a couple of hours.
Shedding
Disc shedding was first described by RW Young in 1967. Discs mature along with their distal migration; aged discs shed at the distal tip and are engulfed by the neighboring retinal pigment epithelial (RPE) cells for degradation.
One e
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What cell structures capture light energy from the sun and use it with water and carbon dioxide to produce sugars for food?
A. chloroplasts
B. fibroblasts
C. nuclei
D. ribosomes
Answer:
|
|
sciq-8859
|
multiple_choice
|
What is one function of the nervous system in humans?
|
[
"controlling thought",
"controlling muscles",
"producing hormones",
"controlling emotion"
] |
B
|
Relavent Documents:
Document 0:::
Physiological psychology is a subdivision of behavioral neuroscience (biological psychology) that studies the neural mechanisms of perception and behavior through direct manipulation of the brains of nonhuman animal subjects in controlled experiments. This field of psychology takes an empirical and practical approach when studying the brain and human behavior. Most scientists in this field believe that the mind is a phenomenon that stems from the nervous system. By studying and gaining knowledge about the mechanisms of the nervous system, physiological psychologists can uncover many truths about human behavior. Unlike other subdivisions within biological psychology, the main focus of psychological research is the development of theories that describe brain-behavior relationships.
Physiological psychology studies many topics relating to the body's response to a behavior or activity in an organism. It concerns the brain cells, structures, components, and chemical interactions that are involved in order to produce actions. Psychologists in this field usually focus their attention to topics such as sleep, emotion, ingestion, senses, reproductive behavior, learning/memory, communication, psychopharmacology, and neurological disorders. The basis for these studies all surround themselves around the notion of how the nervous system intertwines with other systems in the body to create a specific behavior.
Nervous system
The nervous system can be described as a control system that interconnects the other body systems. It consists of the brain, spinal cord, and other nerve tissues throughout the body. The system's primary function is to react to internal and external stimuli in the human body. It uses electrical and chemical signals to send out responses to different parts of the body, and it is made up of nerve cells called neurons. Through the system, messages are transmitted to body tissues such as a muscle. There are two major subdivisions in the nervous system known a
Document 1:::
There are yet unsolved problems in neuroscience, although some of these problems have evidence supporting a hypothesized solution, and the field is rapidly evolving. One major problem is even enumerating what would belong on a list such as this. However, these problems include:
Consciousness
Consciousness:
How can consciousness be defined?
What is the neural basis of subjective experience, cognition, wakefulness, alertness, arousal, and attention?
Quantum mind: Do quantum mechanical phenomena, such as entanglement and superposition, play an important part in the brain's function, and can they explain critical aspects of consciousness?
Is there a "hard problem of consciousness"?
If so, how is it solved?
What, if any, is the function of consciousness?
What is the nature and mechanism behind near-death experiences?
How can death be defined? Can consciousness exist after death?
If consciousness is generated by brain activity, then how do some patients with physically deteriorated brains suddenly gain a brief moment of restored consciousness prior to death, a phenomenon known as terminal lucidity?
Problem of representation: How exactly does the mind function (or how does the brain interpret and represent information about the world)?
Bayesian mind: Does the mind make sense of the world by constantly trying to make predictions according to the rules of Bayesian probability?
Computational theory of mind: Is the mind a symbol manipulation system, operating on a model of computation, similar to a computer?
Connectionism: Can the mind be explained by mathematical models known as artificial neural networks?
Embodied cognition: Is the cognition of an organism affected by the organism's entire body (rather than just simply its brain), including its interactions with the environment?
Extended mind thesis: Does the mind not only exist in the brain, but also functions in the outside world by using physical objects as mental processes? Or just as prosthetic limbs can becom
Document 2:::
The following outline is provided as an overview of and topical guide to neuroscience:
Neuroscience is the scientific study of the structure and function of the nervous system. It encompasses the branch of biology that deals with the anatomy, biochemistry, molecular biology, and physiology of neurons and neural circuits. It also encompasses cognition and human behavior. Neuroscience has multiple concepts that each relate to learning abilities and memory functions. Additionally, the brain is able to transmit signals that cause conscious or unconscious behaviors, which may be verbal or non-verbal responses. This allows people to communicate with one another.
Branches of neuroscience
Neurophysiology
Neurophysiology is the study of the function (as opposed to structure) of the nervous system.
Brain mapping
Electrophysiology
Extracellular recording
Intracellular recording
Brain stimulation
Electroencephalography
Intermittent rhythmic delta activity
:Category: Neurophysiology
:Category: Neuroendocrinology
:Neuroendocrinology
Neuroanatomy
Neuroanatomy is the study of the anatomy of nervous tissue and neural structures of the nervous system.
Immunostaining
:Category: Neuroanatomy
Neuropharmacology
Neuropharmacology is the study of how drugs affect cellular function in the nervous system.
Drug
Psychoactive drug
Anaesthetic
Narcotic
Behavioral neuroscience
Behavioral neuroscience, also known as biological psychology, biopsychology, or psychobiology, is the application of the principles of biology to the study of mental processes and behavior in human and non-human animals.
Neuroethology
Developmental neuroscience
Developmental neuroscience aims to describe the cellular basis of brain development and to address the underlying mechanisms. The field draws on both neuroscience and developmental biology to provide insight into the cellular and molecular mechanisms by which complex nervous systems develop.
Aging and memory
Cognitive neuroscience
Cognitive ne
Document 3:::
The sensory nervous system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory neurons (including the sensory receptor cells), neural pathways, and parts of the brain involved in sensory perception and interoception. Commonly recognized sensory systems are those for vision, hearing, touch, taste, smell, balance and visceral sensation. Sense organs are transducers that convert data from the outer physical world to the realm of the mind where people interpret the information, creating their perception of the world around them.
The receptive field is the area of the body or environment to which a receptor organ and receptor cells respond. For instance, the part of the world an eye can see, is its receptive field; the light that each rod or cone can see, is its receptive field. Receptive fields have been identified for the visual system, auditory system and somatosensory system.
Stimulus
Organisms need information to solve at least three kinds of problems: (a) to maintain an appropriate environment, i.e., homeostasis; (b) to time activities (e.g., seasonal changes in behavior) or synchronize activities with those of conspecifics; and (c) to locate and respond to resources or threats (e.g., by moving towards resources or evading or attacking threats). Organisms also need to transmit information in order to influence another's behavior: to identify themselves, warn conspecifics of danger, coordinate activities, or deceive.
Sensory systems code for four aspects of a stimulus; type (modality), intensity, location, and duration. Arrival time of a sound pulse and phase differences of continuous sound are used for sound localization. Certain receptors are sensitive to certain types of stimuli (for example, different mechanoreceptors respond best to different kinds of touch stimuli, like sharp or blunt objects). Receptors send impulses in certain patterns to send information about the intensity of a stimul
Document 4:::
The following diagram is provided as an overview of and topical guide to the human nervous system:
Human nervous system – the part of the human body that coordinates a person's voluntary and involuntary actions and transmits signals between different parts of the body. The human nervous system consists of two main parts: the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS contains the brain and spinal cord. The PNS consists mainly of nerves, which are long fibers that connect the CNS to every other part of the body. The PNS includes motor neurons, mediating voluntary movement; the autonomic nervous system, comprising the sympathetic nervous system and the parasympathetic nervous system and regulating involuntary functions; and the enteric nervous system, a semi-independent part of the nervous system whose function is to control the gastrointestinal system.
Evolution of the human nervous system
Evolution of nervous systems
Evolution of human intelligence
Evolution of the human brain
Paleoneurology
Some branches of science that study the human nervous system
Neuroscience
Neurology
Paleoneurology
Central nervous system
The central nervous system (CNS) is the largest part of the nervous system and includes the brain and spinal cord.
Spinal cord
Brain
Brain – center of the nervous system.
Outline of the human brain
List of regions of the human brain
Principal regions of the vertebrate brain:
Peripheral nervous system
Peripheral nervous system (PNS) – nervous system structures that do not lie within the CNS.
Sensory system
A sensory system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory receptors, neural pathways, and parts of the brain involved in sensory perception.
List of sensory systems
Sensory neuron
Perception
Visual system
Auditory system
Somatosensory system
Vestibular system
Olfactory system
Taste
Pain
Components of the nervous system
Neuron
I
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is one function of the nervous system in humans?
A. controlling thought
B. controlling muscles
C. producing hormones
D. controlling emotion
Answer:
|
|
sciq-4388
|
multiple_choice
|
The energy a chemical reaction needs to get started is called what kind of energy?
|
[
"activation",
"fusion",
"function",
"conduction"
] |
A
|
Relavent Documents:
Document 0:::
Activation, in chemistry and biology, is the process whereby something is prepared or excited for a subsequent reaction.
Chemistry
In chemistry, "activation" refers to the reversible transition of a molecule into a nearly identical chemical or physical state, with the defining characteristic being that this resultant state exhibits an increased propensity to undergo a specified chemical reaction. Thus, activation is conceptually the opposite of protection, in which the resulting state exhibits a decreased propensity to undergo a certain reaction.
The energy of activation specifies the amount of free energy the reactants must possess (in addition to their rest energy) in order to initiate their conversion into corresponding products—that is, in order to reach the transition state for the reaction. The energy needed for activation can be quite small, and often it is provided by the natural random thermal fluctuations of the molecules themselves (i.e. without any external sources of energy).
The branch of chemistry that deals with this topic is called chemical kinetics.
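As a rough numerical illustration of how strongly the activation energy controls the rate, the following Python sketch evaluates the Arrhenius relation k = A·exp(−Ea/(R·T)) at two temperatures. The activation energy and pre-exponential factor are assumed, illustrative values, not figures from this article.

import math

R = 8.314       # universal gas constant, J/(mol*K)
Ea = 75e3       # assumed activation energy, J/mol
A = 1.0e12      # assumed pre-exponential factor, 1/s

def arrhenius_k(T):
    """Arrhenius rate constant at absolute temperature T (kelvin)."""
    return A * math.exp(-Ea / (R * T))

k_cold, k_warm = arrhenius_k(298.0), arrhenius_k(318.0)
print(f"k(298 K) = {k_cold:.3e} 1/s")
print(f"k(318 K) = {k_warm:.3e} 1/s")
print(f"a 20 K rise speeds the reaction up by a factor of about {k_warm / k_cold:.1f}")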
Biology
Biochemistry
In biochemistry, activation, specifically called bioactivation, is where enzymes or other biologically active molecules acquire the ability to perform their biological function, such as inactive proenzymes being converted into active enzymes that are able to catalyze their substrates' reactions into products. Bioactivation may also refer to the process where inactive prodrugs are converted into their active metabolites, or the toxication of protoxins into actual toxins.
An enzyme may be reversibly or irreversibly bioactivated. A major mechanism of irreversible bioactivation is where a piece of a protein is cut off by cleavage, producing an enzyme that will then stay active. A major mechanism of reversible bioactivation is substrate presentation where an enzyme translocates near its substrate. Another reversible reaction is where a cofactor binds to an enzyme, which then rem
Document 1:::
Activation energy asymptotics (AEA), also known as large activation energy asymptotics, is an asymptotic analysis used in the combustion field utilizing the fact that the reaction rate is extremely sensitive to temperature changes due to the large activation energy of the chemical reaction.
History
The techniques were pioneered by the Russian scientists Yakov Borisovich Zel'dovich, David A. Frank-Kamenetskii and co-workers in the 1930s, in their study of premixed flames and thermal explosions (Frank-Kamenetskii theory), but they did not become popular among Western scientists until the 1970s. In the early 1970s, due to the pioneering work of Williams B. Bush, Francis E. Fendell, Forman A. Williams, Amable Liñán and John F. Clarke, the method became popular in the Western community, and since then it has been widely used to explain more complicated problems in combustion.
Method overview
In combustion processes, the reaction rate is dependent on temperature in the following form (Arrhenius law),
ω ∝ exp(−E_a / (R T)),
where E_a is the activation energy, and R is the universal gas constant. In general, the condition E_a / (R T_b) ≫ 1 is satisfied, where T_b is the burnt gas temperature. This condition forms the basis for activation energy asymptotics. Denoting T_u for the unburnt gas temperature, one can define the Zel'dovich number and heat release parameter as follows
β = E_a (T_b − T_u) / (R T_b²),   α = (T_b − T_u) / T_b.
In addition, if we define a non-dimensional temperature
θ = (T − T_u) / (T_b − T_u),
such that θ approaches zero in the unburnt region and approaches unity in the burnt gas region (in other words, 0 ≤ θ ≤ 1), then the ratio of reaction rate at any temperature to reaction rate at burnt gas temperature is given by
ω(T) / ω(T_b) = exp[ −β (1 − θ) / (1 − α (1 − θ)) ].
Now in the limit of β → ∞ (large activation energy) with α fixed, the reaction rate is exponentially small, i.e., of order exp(−β), and negligible everywhere, but non-negligible when 1 − θ ~ O(1/β). In other words, the reaction rate is negligible everywhere, except in a small region very close to the burnt gas temperature, where 1 − θ ~ O(1/β). Thus, in solving the conservation equations, one identifies two different regimes, at leading order,
Outer convective-diffusive zone
I
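To make the localization of the reaction zone concrete, the short Python sketch below evaluates the non-dimensional rate ratio given above across θ. The values chosen for β and α are arbitrary illustrative assumptions, not values from the article.

import numpy as np

beta, alpha = 10.0, 0.85     # assumed Zel'dovich number and heat release parameter
theta = np.linspace(0.0, 1.0, 11)

# Ratio of the reaction rate at reduced temperature theta to the rate at the
# burnt gas temperature (theta = 1).
rate_ratio = np.exp(-beta * (1.0 - theta) / (1.0 - alpha * (1.0 - theta)))

for t, r in zip(theta, rate_ratio):
    print(f"theta = {t:.1f}   rate ratio = {r:.2e}")
# Only entries with theta close to 1 are non-negligible, which is why the
# chemistry is confined to a thin zone near the burnt gas temperature.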
Document 2:::
In chemistry and particularly biochemistry, an energy-rich species (usually energy-rich molecule) or high-energy species (usually high-energy molecule) is a chemical species which reacts, potentially with other species found in the environment, to release chemical energy.
In particular, the term is often used for:
adenosine triphosphate (ATP) and similar molecules called high-energy phosphates, which release inorganic phosphate into the environment in an exothermic reaction with water:
ATP + H2O → ADP + Pi ΔG°' = −30.5 kJ/mol (−7.3 kcal/mol)
fuels such as hydrocarbons, carbohydrates, lipids, proteins, and other organic molecules which react with oxygen in the environment to ultimately form carbon dioxide, water, and sometimes nitrogen, sulfates, and phosphates
molecular hydrogen
monatomic oxygen, ozone, hydrogen peroxide, singlet oxygen and other metastable or unstable species which spontaneously react without further reactants
in particular, the vast majority of free radicals
explosives such as nitroglycerin and other substances which react exothermically without requiring a second reactant
metals or metal ions which can be oxidized to release energy
This is contrasted to species that are either part of the environment (this sometimes includes diatomic triplet oxygen) or do not react with the environment (such as many metal oxides or calcium carbonate); those species are not considered energy-rich or high-energy species.
Alternative definitions
The term is often used without a definition. Some authors define the term "high-energy" to be equivalent to "chemically unstable", while others reserve the term for high-energy phosphates, such as the Great Soviet Encyclopedia which defines the term "high-energy compounds" to refer exclusively to those.
The IUPAC glossary of terms used in ecotoxicology defines a primary producer as an "organism capable of using the energy derived from light or a chemical substance in order to manufacture energy-rich organic compou
Document 3:::
In molecular biology, biosynthesis is a multi-step, enzyme-catalyzed process where substrates are converted into more complex products in living organisms. In biosynthesis, simple compounds are modified, converted into other compounds, or joined to form macromolecules. This process often consists of metabolic pathways. Some of these biosynthetic pathways are located within a single cellular organelle, while others involve enzymes that are located within multiple cellular organelles. Examples of these biosynthetic pathways include the production of lipid membrane components and nucleotides. Biosynthesis is usually synonymous with anabolism.
The prerequisite elements for biosynthesis include: precursor compounds, chemical energy (e.g. ATP), and catalytic enzymes which may need coenzymes (e.g. NADH, NADPH). These elements create monomers, the building blocks for macromolecules. Some important biological macromolecules include: proteins, which are composed of amino acid monomers joined via peptide bonds, and DNA molecules, which are composed of nucleotides joined via phosphodiester bonds.
Properties of chemical reactions
Biosynthesis occurs due to a series of chemical reactions. For these reactions to take place, the following elements are necessary:
Precursor compounds: these compounds are the starting molecules or substrates in a reaction. These may also be viewed as the reactants in a given chemical process.
Chemical energy: chemical energy can be found in the form of high energy molecules. These molecules are required for energetically unfavorable reactions. Furthermore, the hydrolysis of these compounds drives a reaction forward. High energy molecules, such as ATP, have three phosphates. Often, the terminal phosphate is split off during hydrolysis and transferred to another molecule.
Catalysts: these may be for example metal ions or coenzymes and they catalyze a reaction by increasing the rate of the reaction and lowering the activation energy.
In the sim
Document 4:::
An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may be in fact a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes.
In a unimolecular elementary reaction, a molecule A dissociates or isomerises to form the product(s):
A → products.
At constant temperature, the rate of such a reaction is proportional to the concentration of the species A:
rate = k[A].
In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, A and B, react together to form the product(s):
A + B → products.
The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species A and B:
rate = k[A][B].
The rate expression for an elementary bimolecular reaction is sometimes referred to as the Law of Mass Action as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction.
This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids equivalent results have been obtained from simple probabilistic arguments.
According to collision theory the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis-Menten approximations.
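The mass-action rate expressions above can also be integrated numerically. The following minimal Python sketch follows a single bimolecular step A + B → P with an explicit Euler update; the rate constant and initial concentrations are arbitrary assumed values used only for illustration.

# Numerical integration of d[A]/dt = d[B]/dt = -k[A][B], d[P]/dt = +k[A][B].
k = 0.5                      # assumed bimolecular rate constant, L/(mol*s)
A, B, P = 1.0, 0.6, 0.0      # assumed initial concentrations, mol/L
dt, t_end = 0.01, 10.0

for _ in range(int(t_end / dt)):
    rate = k * A * B         # law of mass action: rate = k[A][B]
    A -= rate * dt
    B -= rate * dt
    P += rate * dt

print(f"[A] = {A:.3f}, [B] = {B:.3f}, [P] = {P:.3f} mol/L after {t_end:.0f} s")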
Notes
Chemical kinetics
Phy
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The energy a chemical reaction needs to get started is called what kind of energy?
A. activation
B. fusion
C. function
D. conduction
Answer:
|
|
sciq-10235
|
multiple_choice
|
What are microfilaments made out of?
|
[
"two DNA chains",
"two actin chains",
"two microscopy chains",
"two halophilic chains"
] |
B
|
Relavent Documents:
Document 0:::
In cell biology, microtrabeculae were a hypothesised fourth element of the cytoskeleton (the other three being microfilaments, microtubules and intermediate filaments), proposed by Keith Porter based on images obtained from high-voltage electron microscopy of whole cells in the 1970s. The images showed short, filamentous structures of unknown molecular composition associated with known cytoplasmic structures. It is now generally accepted that microtrabeculae are nothing more than an artifact of certain types of fixation treatment, although the complexity of the cell's cytoskeleton is not yet fully understood.
Document 1:::
A microfibril is a very fine fibril, or fiber-like strand, consisting of glycoproteins and cellulose. It is usually, but not always, used as a general term in describing the structure of protein fiber, e.g. hair and sperm tail. Its most frequently observed structural pattern is the 9+2 pattern, in which two central protofibrils are surrounded by nine other pairs. Cellulose inside plants is one example of a non-protein compound described with the same term. Cellulose microfibrils are laid down on the inner surface of the primary cell wall. As the cell absorbs water, its volume increases and the existing microfibrils separate and new ones are formed to help increase cell strength.
Synthesis and function
Cellulose is synthesized by cellulose synthase or Rosette terminal complexes, which reside on a cell's membrane. As cellulose fibrils are synthesized and grow extracellularly, they push up against neighboring cells. Since the neighboring cell cannot move easily, the Rosette complex is instead pushed around the cell through the fluid phospholipid membrane. Eventually this results in the cell becoming wrapped in a microfibril layer. This layer becomes the cell wall. The organization of microfibrils forming the primary cell wall is rather disorganized. However, another mechanism is used in secondary cell walls, leading to their organization. Essentially, lanes on the secondary cell wall are built with microtubules. These lanes force microfibrils to remain in a certain area while they wrap. During this process microtubules can spontaneously depolymerize and repolymerize in a different orientation. This leads to a different direction in which the cell continues getting wrapped.
Fibrillin microfibrils are found in connective tissues; they are mainly made up of fibrillin-1 and provide elasticity. During assembly, microfibrils exhibit a repeating stringed-beads arrangement produced by the cross-linking of molecules forming a striated pattern with a given
Document 2:::
Microglobulin is a globulin of relatively small molecular weight. It can be contrasted with macroglobulin.
Examples include:
Beta-2 microglobulin
Alpha-1-microglobulin
Document 3:::
Insect pins are used by entomologists for mounting collected insects.
They can also be used in dressmaking for very fine silk or antique fabrics.
As standard, they are long and come in sizes from 000 (the smallest diameter), through 00, 0, and 1, to 8 (the largest diameter).
The most generally useful size in entomology is size 2, which is in diameter, with sizes 1 and 3 being the next most useful.
They were once commonly made from brass or silver, but these would corrode from contact with insect bodies and are no longer commonly used.
Instead they are nickel-plated brass, yielding "white" or "black" enameling, or even made from stainless steel.
Similarly, the smallest sizes from 000 to 1 used to be impractical for mounting until plastic and polyethylene became commonly used for pinning bases.
There are also micro-pins, which are long.
minutens are headless micropins that are generally only made of stainless steel, used for double-mounting, where the insect is mounted on the minuten, which is pinned to a small block of soft material, which is in turn mounted on a standard, larger, insect pin.
Document 4:::
A microtome (from the Greek mikros, meaning "small", and temnein, meaning "to cut") is a cutting tool used to produce extremely thin slices of material known as sections, with the process being termed microsectioning. Important in science, microtomes are used in microscopy for the preparation of samples for observation under transmitted light or electron radiation.
Microtomes use steel, glass or diamond blades depending upon the specimen being sliced and the desired thickness of the sections being cut. Steel blades are used to prepare histological sections of animal or plant tissues for light microscopy. Glass knives are used to slice sections for light microscopy and to slice very thin sections for electron microscopy. Industrial grade diamond knives are used to slice hard materials such as bone, teeth and tough plant matter for both light microscopy and for electron microscopy. Gem-quality diamond knives are also used for slicing thin sections for electron microscopy.
Microtomy is a method for the preparation of thin sections for materials such as bones, minerals and teeth, and an alternative to electropolishing and ion milling. Microtome sections can be made thin enough to section a human hair across its breadth, with section thickness between 50 nm and 100 μm.
History
In the beginnings of light microscope development, sections from plants and animals were manually prepared using razor blades. It was found that to observe the structure of the specimen under observation it was important to make clean reproducible cuts on the order of 100 μm, through which light can be transmitted. This allowed for the observation of samples using light microscopes in a transmission mode.
One of the first devices for the preparation of such cuts was invented in 1770 by George Adams, Jr. (1750–1795) and further developed by Alexander Cummings. The device was hand operated, and the sample held in a cylinder and sections created from the top of the sample using a hand crank.
In
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are microfilaments made out of?
A. two DNA chains
B. two actin chains
C. two microscopy chains
D. two halophilic chains
Answer:
|
|
sciq-10687
|
multiple_choice
|
What bodily substance is formed from cells, and in turn helps make up organs?
|
[
"tendons",
"muscles",
"tissues",
"ligaments"
] |
C
|
Relavent Documents:
Document 0:::
In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from same type cells to act together in a function. Tissues of different types combine to form an organ which has a specific function. The intestinal wall for example is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system.
An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs.
The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body.
Animals
Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam
Document 1:::
Outline
h1.00: Cytology
h2.00: General histology
H2.00.01.0.00001: Stem cells
H2.00.02.0.00001: Epithelial tissue
H2.00.02.0.01001: Epithelial cell
H2.00.02.0.02001: Surface epithelium
H2.00.02.0.03001: Glandular epithelium
H2.00.03.0.00001: Connective and supportive tissues
H2.00.03.0.01001: Connective tissue cells
H2.00.03.0.02001: Extracellular matrix
H2.00.03.0.03001: Fibres of connective tissues
H2.00.03.1.00001: Connective tissue proper
H2.00.03.1.01001: Ligaments
H2.00.03.2.00001: Mucoid connective tissue; Gelatinous connective tissue
H2.00.03.3.00001: Reticular tissue
H2.00.03.4.00001: Adipose tissue
H2.00.03.5.00001: Cartilage tissue
H2.00.03.6.00001: Chondroid tissue
H2.00.03.7.00001: Bone tissue; Osseous tissue
H2.00.04.0.00001: Haemotolymphoid complex
H2.00.04.1.00001: Blood cells
H2.00.04.1.01001: Erythrocyte; Red blood cell
H2.00.04.1.02001: Leucocyte; White blood cell
H2.00.04.1.03001: Platelet; Thrombocyte
H2.00.04.2.00001: Plasma
H2.00.04.3.00001: Blood cell production
H2.00.04.4.00001: Postnatal sites of haematopoiesis
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
Document 2:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
Document 3:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA is a double-stranded macromolecule that carries the hereditary information of the cell and is found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 4:::
The human body is the structure of a human being. It is composed of many different types of cells that together create tissues and subsequently organs and then organ systems. They ensure homeostasis and the viability of the human body.
It comprises a head, hair, neck, torso (which includes the thorax and abdomen), arms and hands, legs and feet.
The study of the human body includes anatomy, physiology, histology and embryology. The body varies anatomically in known ways. Physiology focuses on the systems and organs of the human body and their functions. Many systems and mechanisms interact in order to maintain homeostasis, with safe levels of substances such as sugar and oxygen in the blood.
The body is studied by health professionals, physiologists, anatomists, and artists to assist them in their work.
Composition
The human body is composed of elements including hydrogen, oxygen, carbon, calcium and phosphorus. These elements reside in trillions of cells and non-cellular components of the body.
The adult male body is about 60% water. This total body water is made up of extracellular fluid, including blood plasma and interstitial fluid, and of fluid inside cells. The content, acidity and composition of the water inside and outside cells is carefully maintained. The main electrolytes in body water outside cells are sodium and chloride, whereas within cells it is potassium and other phosphates.
Cells
The body contains trillions of cells, the fundamental unit of life. At maturity, there are roughly 30–37 trillion cells in the body, an estimate arrived at by totaling the cell numbers of all the organs of the body and cell types. The body is also host to about the same number of non-human cells as well as multicellular organisms which reside in the gastrointestinal tract and on the skin. Not all parts of the body are made from cells. Cells sit in an extracellular matrix that consists of proteins such as collagen,
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What bodily substance is formed from cells, and in turn helps make up organs?
A. tendons
B. muscles
C. tissues
D. ligaments
Answer:
|
|
sciq-7811
|
multiple_choice
|
What causes blood vessels to grow toward the tumour in cancer cells?
|
[
"harnessing molecules",
"addressing molecules",
"signaling molecules",
"communicating molecules"
] |
C
|
Relavent Documents:
Document 0:::
The collective–amoeboid transition (CAT) is a process by which collective multicellular groups dissociate into amoeboid single cells following the down-regulation of integrins. CATs contrast with epithelial–mesenchymal transitions (EMT), which occur following a loss of E-cadherin. Like EMTs, CATs are involved in the invasion of tumor cells into surrounding tissues, with amoeboid movement more likely to occur in soft extracellular matrix (ECM) and mesenchymal movement in stiff ECM. Although once differentiated, cells typically do not change their migration mode, EMTs and CATs are highly plastic, with cells capable of interconverting between them depending on intracellular regulatory signals and the surrounding ECM.
CATs are the least common transition type in invading tumor cells, although they are noted in melanoma explants.
See also
Collective cell migration
Dedifferentiation
Invasion (cancer)
Document 1:::
VEGF receptors (VEGFRs) are receptors for vascular endothelial growth factor (VEGF). There are three main subtypes of VEGFR, numbered 1, 2 and 3. Depending on alternative splicing, they may be membrane-bound (mbVEGFR) or soluble (sVEGFR).
Inhibitors of VEGFR are used in the treatment of cancer.
VEGF
Vascular endothelial growth factor (VEGF) is an important signaling protein involved in both vasculogenesis (the formation of the circulatory system) and angiogenesis (the growth of blood vessels from pre-existing vasculature). As its name implies, VEGF activity is restricted mainly to cells of the vascular endothelium, although it does have effects on a limited number of other cell types (e.g. stimulation monocyte/macrophage migration). In vitro, VEGF has been shown to stimulate endothelial cell mitogenesis and cell migration. VEGF also enhances microvascular permeability and is sometimes referred to as vascular permeability factor.
Receptor biology
All members of the VEGF family stimulate cellular responses by binding to tyrosine kinase receptors (the VEGFRs) on the cell surface, causing them to dimerize and become activated through transphosphorylation. The VEGF receptors have an extracellular portion consisting of 7 immunoglobulin-like domains, a single transmembrane spanning region and an intracellular portion containing a split tyrosine-kinase domain.
VEGF-A binds to VEGFR-1 (Flt-1) and VEGFR-2 (KDR/Flk-1). VEGFR-2 appears to mediate almost all of the known cellular responses to VEGF. The function of VEGFR-1 is less well defined, although it is thought to modulate VEGFR-2 signaling. Another function of VEGFR-1 is to act as a dummy/decoy receptor, sequestering VEGF from VEGFR-2 binding (this appears to be particularly important during vasculogenesis in the embryo). In fact, an alternatively spliced form of VEGFR-1 (sFlt1) is not a membrane bound protein but is secreted and functions primarily as a decoy. A third receptor has been discovered (VEGFR-3), however
Document 2:::
Angiocrine growth factors are molecules found in blood vessels' endothelial cells that can stimulate organ-specific repair activities in damaged or diseased organs. Endothelial cells possess tissue-specific genes that code for unique growth factors, adhesion molecules and factors regulating metabolism.
The discovery emerged after the entirety of active genes in endothelial cells was decoded, resulting in an atlas of organ-specific blood vessel cells. The atlas documented hundreds of already-known genes that had never been associated with these cells. Organs dictate the structure and function of their own blood vessels, including the repair molecules they secrete. Each organ produces blood vessels with unique shape and function that comply with that organ's metabolic demands.
Organ repair
When an organ is injured, its blood vessels may not be able to repair the damage because they may themselves be damaged or inflamed. An infusion of engineered endothelial cells may be able to engraft into injured tissue and acquire the capacity to repair the organ.
Endothelial cells generated from mouse embryonic stem cells were functional, transplantable and responsive to microenvironmental signals. Such cells can be transplanted into different tissues, become educated by the tissue and acquire the characteristic phenotype of that organ type's endothelium. Such cells were transplanted into the liver and kidney of mice and were found to become indistinguishable from existing endothelial cells.
In a clinical setting the cells must be immunocompatible with the recipient patient. They could be derived from the patient's embryonic pluripotent stem cells as well as by somatic cell nuclear transfer (SCNT). In SCNT the nucleus is introduced into a human egg producing embryonic stem cells that are a genetic match of the patient. Another approach takes cells discarded after a diagnostic prenatal amniocentesis.
Additional preclinical investigation is required before investigation with humans.
Document 3:::
HP59 is a pathologic angiogenesis capillary endothelial marker protein (7 or 12 transmembrane domains) which has been identified as the receptor for the Group B Streptococcal Toxin (GBS Toxin) molecule known as CM101, the etiologic agent for early-onset versus late-onset Group B Strep.
Expression
Fu, et al. coined the term "pathological angiogenesis" to distinguish between HP59-expressing and non-HP59-expressing capillaries; however, other researchers have not used this terminology. Therefore it is not yet known whether HP59 is expressed in vasculogenesis, arteriogenesis, sprouting angiogenesis or intussusceptive angiogenesis. However, capillaries in all tumor tissues examined were positive for anti-HP59 antibodies and Von Willebrand factor (vWF) antibodies, while in normal tissues only vWF staining was observed.
The target protein for GBStoxin/CM101 is expressed in vasculature of developing organs during their formation during embryogenesis. The lung is the last organ to develop so HP59 is present in the newborn lung for 5–10 days after birth, explaining the susceptibility to GBS induced "early onset" disease. HP59 lectin is expressed later in life only in pathologic angiogenesis, providing a receptor for CM101. The CM101-HP59 complex then activates complement, and initiates an inflammatory cytokine cascade which recruits CD69 positive activated granulocytes to destroy the capillaries and surrounding pathologic tissue. CM101 has been shown in a published Phase I, FDA-approved clinical trial under IND to have clinical safety and effectivity on select stage IV cancer patients, specifically targeting tumor vasculature
HP59 is expressed in the adult in wound healing, and in tumor angiogenesis, as shown in mice.
The gene for HP59 contains, entirely within its coding region, the sialin gene SLC17A5 (solute carrier family 17 (anion/sugar transporter), member 5). SLC17A5, also known as sialin, is a lysosomal membrane sialic acid transport protein which in humans is
Document 4:::
The endothelium (: endothelia) is a single layer of squamous endothelial cells that line the interior surface of blood vessels and lymphatic vessels. The endothelium forms an interface between circulating blood or lymph in the lumen and the rest of the vessel wall. Endothelial cells form the barrier between vessels and tissue and control the flow of substances and fluid into and out of a tissue.
Endothelial cells in direct contact with blood are called vascular endothelial cells whereas those in direct contact with lymph are known as lymphatic endothelial cells. Vascular endothelial cells line the entire circulatory system, from the heart to the smallest capillaries.
These cells have unique functions that include fluid filtration, such as in the glomerulus of the kidney, blood vessel tone, hemostasis, neutrophil recruitment, and hormone trafficking. Endothelium of the interior surfaces of the heart chambers is called endocardium. An impaired function can lead to serious health issues throughout the body.
Structure
The endothelium is a thin layer of single flat (squamous) cells that line the interior surface of blood vessels and lymphatic vessels.
Endothelium is of mesodermal origin. Both blood and lymphatic capillaries are composed of a single layer of endothelial cells called a monolayer. In straight sections of a blood vessel, vascular endothelial cells typically align and elongate in the direction of fluid flow.
Terminology
The foundational model of anatomy, an index of terms used to describe anatomical structures, makes a distinction between endothelial cells and epithelial cells on the basis of which tissues they develop from, and states that the presence of vimentin rather than keratin filaments separates these from epithelial cells. Many considered the endothelium a specialized epithelial tissue.
Function
The endothelium forms an interface between circulating blood or lymph in the lumen and the rest of the vessel wall. This forms a barrier between v
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What causes blood vessels to grow toward the tumour in cancer cells?
A. harnessing molecules
B. addressing molecules
C. signaling molecules
D. communicating molecules
Answer:
|
|
sciq-5977
|
multiple_choice
|
What type of behavioral rhythms are linked to the yearly cycle of seasons?
|
[
"monthly",
"annual",
"circannual",
"biannual"
] |
C
|
Relavent Documents:
Document 0:::
Diurnality is a form of plant and animal behavior characterized by activity during daytime, with a period of sleeping or other inactivity at night. The common adjective used for daytime activity is "diurnal". The timing of activity by an animal depends on a variety of environmental factors such as the temperature, the ability to gather food by sight, the risk of predation, and the time of year. Diurnality is a cycle of activity within a 24-hour period; cyclic activities called circadian rhythms are endogenous cycles not dependent on external cues or environmental factors except for a zeitgeber. Animals active during twilight are crepuscular, those active during the night are nocturnal and animals active at sporadic times during both night and day are cathemeral.
Plants that open their flowers during the daytime are described as diurnal, while those that bloom during nighttime are nocturnal. The timing of flower opening is often related to the time at which preferred pollinators are foraging. For example, sunflowers open during the day to attract bees, whereas the night-blooming cereus opens at night to attract large sphinx moths.
In animals
Many types of animals are classified as being diurnal, meaning they are active during the daytime and inactive or resting during the night. Commonly classified diurnal animals include mammals, birds, and reptiles. Most primates are diurnal, including humans. Scientifically classifying diurnality within animals can be a challenge, apart from observing the obvious increase in activity levels during daylight hours.
Evolution of diurnality
Initially, most animals were diurnal, but adaptations that allowed some animals to become nocturnal are what helped contribute to the success of many, especially mammals. This evolutionary movement to nocturnality allowed them to better avoid predators and gain resources with less competition from other animals. This did come with some adaptations that mammals live with today. Visi
Document 1:::
In chronobiology, an infradian rhythm is a rhythm with a period longer than the period of a circadian rhythm, i.e., with a frequency of less than one cycle in 24 hours. Some examples of infradian rhythms in mammals include menstruation, breeding, migration, hibernation, molting and fur or hair growth, and tidal or seasonal rhythms. In contrast, ultradian rhythms have periods shorter than the period of a circadian rhythm. Several infradian rhythms are known to be caused by hormone stimulation or exogenous factors. For example, seasonal depression, an example of an infradian rhythm occurring once a year, can be caused by the systematic lowering of light levels during the winter.
See also
Photoperiodicity
Document 2:::
Biological rhythms are repetitive biological processes. Some types of biological rhythms have been described as biological clocks. They can range in frequency from microseconds to less than one repetitive event per decade. Biological rhythms are studied by chronobiology. In the biochemical context biological rhythms are called biochemical oscillations.
The variations of the timing and duration of biological activity in living organisms occur for many essential biological processes. These occur (a) in animals (eating, sleeping, mating, hibernating, migration, cellular regeneration, etc.), (b) in plants (leaf movements, photosynthetic reactions, etc.), and in microbial organisms such as fungi and protozoa. They have even been found in bacteria, especially among the cyanobacteria (aka blue-green algae, see bacterial circadian rhythms).
Circadian rhythm
The best studied rhythm in chronobiology is the circadian rhythm, a roughly 24-hour cycle shown by physiological processes in all these organisms. The term circadian comes from the Latin circa, meaning "around" and dies, "day", meaning "approximately a day." It is regulated by circadian clocks.
The circadian rhythm can further be broken down into routine cycles during the 24-hour day:
Diurnal, which describes organisms active during daytime
Nocturnal, which describes organisms active in the night
Crepuscular, which describes animals primarily active during the dawn and dusk hours (ex: white-tailed deer, some bats)
While circadian rhythms are defined as regulated by endogenous processes, other biological cycles may be regulated by exogenous signals. In some cases, multi-trophic systems may exhibit rhythms driven by the circadian clock of one of the members (which may also be influenced or reset by external factors). The endogenous plant cycles may regulate the activity of the bacterium by controlling availability of plant-produced photosynthate.
Other cycles
Many other important cycles are also studied, includin
Document 3:::
The biorhythm theory is the pseudoscientific idea that peoples' daily lives are significantly affected by rhythmic cycles with periods of exactly 23, 28 and 33 days, typically a 23-day physical cycle, a 28-day emotional cycle, and a 33-day intellectual cycle. The idea was developed by Wilhelm Fliess in the late 19th century, and was popularized in the United States in the late 1970s. The proposal has been independently tested and, consistently, no validity for it has been found.
According to the notion of biorhythms, a person's life is influenced by rhythmic biological cycles that affect his or her ability in various domains, such as mental, physical, and emotional activity. These cycles begin at birth and oscillate in a steady (sine wave) fashion throughout life, and by modeling them mathematically, it is suggested that a person's level of ability in each of these domains can be predicted from day to day. It is built on the idea that the biofeedback chemical and hormonal secretion functions within the body could show a sinusoidal behavior over time.
Most biorhythm models use three cycles: a 23-day physical cycle, a 28-day emotional cycle, and a 33-day intellectual cycle. Although the 28-day cycle is the same length as the average woman's menstrual cycle and was originally described as a "female" cycle (see below), the two are not necessarily in synchronization. Each of these cycles varies between high and low extremes sinusoidally, with days where the cycle crosses the zero line described as "critical days" of greater risk or uncertainty.
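As a purely illustrative sketch of the arithmetic this model proposes (not an endorsement of the theory), each cycle can be computed as a sine wave that starts at zero on the day of birth and repeats with its fixed period, giving values between +100% and -100%; the birth date and target date below are arbitrary assumptions.

```python
from datetime import date
from math import sin, pi

# Claimed biorhythm cycles and their fixed periods in days.
PERIODS = {"physical": 23, "emotional": 28, "intellectual": 33}

def biorhythms(birth: date, day: date) -> dict:
    """Return the claimed cycle values, in percent, for a given day."""
    t = (day - birth).days  # whole days elapsed since birth
    return {name: 100 * sin(2 * pi * t / period)
            for name, period in PERIODS.items()}

# Arbitrary example dates, purely for illustration.
print(biorhythms(date(1990, 1, 1), date(2024, 6, 1)))
```

Days on which a value crosses zero correspond to the "critical days" described below.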
The numbers from +100% (maximum) to -100% (minimum) indicate where on each cycle the rhythms are on a particular day. In general, a rhythm at 0% is crossing the midpoint and is thought to have no real impact on one's life, whereas a rhythm at +100% (at the peak of that cycle) would give one an edge in that area, and a rhythm at -100% (at the bottom of that cycle) would make life more difficult in that area. There is no particul
Document 4:::
A circannual cycle is a biological process that occurs in living creatures over the period of approximately one year. This cycle was first discovered by Ebo Gwinner and Canadian biologist Ted Pengelley. It is classified as an infradian rhythm, which is a biological process with a period longer than that of a circadian rhythm, i.e., with a frequency of less than one cycle per 24 hours. These processes continue even in artificial environments in which seasonal cues have been removed by scientists. The term circannual is Latin, circa meaning approximately and annual relating to one year. Chronobiology is the field of biology pertaining to periodic rhythms that occur in living organisms in response to external stimuli such as photoperiod.
Cycles come from genetic evolution in animals, which allows them to create regulatory cycles to improve their fitness. Evolution of these traits comes from the increased reproductive success of animals most capable of predicting regular changes in the environment, such as seasonal changes, and capitalizing on the times when success was greatest. The idea of evolved biological clocks exists not only for animals but also in plant species, which exhibit cyclic behaviors without environmental cues. Plentiful research has been done on biological clocks and the behaviors they are responsible for in animals; circannual rhythms are just one example of a biological clock.
Rhythms are driven by hormone cycles and seasonal rhythms can endure for long periods of time in animals even without photoperiod signaling which comes with seasonal changes. They are a driver of annual behaviors such as hibernation, mating and the gain or loss of weight for seasonal changes. Circannual cycles can be defined by three main aspects being that they must persist without apparent time cues, be able to be phase shifted, and should not be changed by temperature. Circannual cycles have important impacts on when animal behaviors are performed and the success of those behaviors. Circannu
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of behavioral rhythms are linked to the yearly cycle of seasons?
A. monthly
B. annual
C. circannual
D. biannual
Answer:
|
|
sciq-6129
|
multiple_choice
|
Alchemy helped improve the study of metallurgy and the extraction of metals from what?
|
[
"air",
"wood",
"water",
"ores"
] |
D
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 1:::
Major innovations in materials technology
BC
28,000 BC – People wear beads, bracelets, and pendants
14,500 BC – First pottery, made by the Jōmon people of Japan.
6th millennium BC – Copper metallurgy is invented and copper is used for ornamentation (see Pločnik article)
2nd millennium BC – Bronze is used for weapons and armor
16th century BC – The Hittites develop crude iron metallurgy
13th century BC – Invention of steel when iron and charcoal are combined properly
10th century BC – Glass production begins in ancient Near East
1st millennium BC – Pewter beginning to be used in China and Egypt
1000 BC – The Phoenicians introduce dyes made from the purple murex.
3rd century BC – Wootz steel, the first crucible steel, is invented in ancient India
50s BC – Glassblowing techniques flourish in Phoenicia
20s BC – Roman architect Vitruvius describes low-water-content method for mixing concrete
1st millennium
3rd century – Cast iron widely used in Han Dynasty China
300 – Greek alchemist Zosimos, summarizing the work of Egyptian alchemists, describes arsenic and lead acetate
4th century – Iron pillar of Delhi is the oldest surviving example of corrosion-resistant steel
8th century – Porcelain is invented in Tang Dynasty China
8th century – Tin-glazing of ceramics invented by Muslim chemists and potters in Basra, Iraq
9th century – Stonepaste ceramics invented in Iraq
900 – First systematic classification of chemical substances appears in the works attributed to Jābir ibn Ḥayyān (Latin: Geber) and in those of the Persian alchemist and physician Abū Bakr al-Rāzī ( 865–925, Latin: Rhazes)
900 – Synthesis of ammonium chloride from organic substances described in the works attributed to Jābir ibn Ḥayyān (Latin: Geber)
900 – Abū Bakr al-Rāzī describes the preparation of plaster of Paris and metallic antimony
9th century – Lustreware appears in Mesopotamia
2nd millennium
1000 – Gunpowder is developed in China
1340 – In Liège, Belgium, the first blast furnaces for the production
Document 2:::
Materials science has shaped the development of civilizations since the dawn of mankind. Better materials for tools and weapons have allowed mankind to spread and conquer, and advancements in material processing, like steel and aluminum production, continue to impact society today. Historians have regarded materials as such an important aspect of civilizations that entire periods of time have been defined by the predominant material used (Stone Age, Bronze Age, Iron Age). For most of recorded history, control of materials had been through alchemy or empirical means at best. The study and development of chemistry and physics assisted the study of materials, and eventually the interdisciplinary study of materials science emerged from the fusion of these studies. The history of materials science is the study of how different materials were used and developed through the history of Earth and how those materials affected the culture of the peoples of the Earth. The term "Silicon Age" is sometimes used to refer to the modern period of history during the late 20th to early 21st centuries.
Prehistory
In many cases, different cultures leave their materials as the only records; which anthropologists can use to define the existence of such cultures. The progressive use of more sophisticated materials allows archeologists to characterize and distinguish between peoples. This is partially due to the major material of use in a culture and to its associated benefits and drawbacks. Stone-Age cultures were limited by which rocks they could find locally and by which they could acquire by trading. The use of flint around 300,000 BCE is sometimes considered the beginning of the use of ceramics. The use of polished stone axes marks a significant advance, because a much wider variety of rocks could serve as tools.
The innovation of smelting and casting metals in the Bronze Age started to change the way that cultures developed and interacted with each other. Starting around 5,500 BCE,
Document 3:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just as in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 4:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Alchemy helped improve the study of metallurgy and the extraction of metals from what?
A. air
B. wood
C. water
D. ores
Answer:
|
|
ai2_arc-828
|
multiple_choice
|
What forms both valleys and canyons?
|
[
"glaciers",
"rivers",
"wind",
"tides"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
The Arkansas River Valley (usually shortened to River Valley) is a region in Arkansas defined by the Arkansas River in the western part of the state. Generally defined as the area between the Ozark and Ouachita Mountains, the River Valley is characterized by flat lowlands covered in fertile farmland and lakes periodically interrupted by high peaks. Mount Magazine, Mount Nebo, and Petit Jean Mountain compose the Tri-Peaks Region, a further subdivision of the River Valley popular with hikers and outdoors enthusiasts. In addition to the outdoor recreational activities available to residents and visitors of the region, the River Valley contains Arkansas's wine country as well as hundreds of historical sites throughout the area. It is one of six natural divisions of Arkansas.
Definition
The Arkansas River Valley is informally defined along county boundaries, including all of Logan and Sebastian counties and portions of Conway, Franklin, Johnson, Perry, Pope, and Yell counties.
Subdivisions
Arkansas Valley Hills - North and east of the Arkansas River, sometimes associated with the Ozarks
Bottomlands - Low swamps and prairies along the Arkansas River itself, wide in some places
Fort Smith metropolitan area - Sebastian, Crawford, and Franklin counties in Arkansas (also includes Le Flore and Sequoyah counties in Oklahoma)
Ozark National Forest - a small, discontinuous portion of the federally protected area is within the region
Tri-peaks Region - Region punctuated by three steep mountains: Mount Magazine, Mount Nebo and Petit Jean Mountain
Valley - south of the Arkansas River, level plains and gently rolling hills
Wine Country - American Viticultural Area near Altus
History
In the Pre-Colonial era, the River Valley was inhabited by Native American tribes, including Caddo, Cherokee, Choctaw, Osage, Tunica, and Quapaw tribes. Most first encounters describe scattered villages and individual farmsteads in the River Valley, unlike the organized "towns" and groves and o
Document 3:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 4:::
A grassed waterway is a native grassland strip of green belt, up to 48 metres (157 ft) wide. It is generally installed in the thalweg, the deepest continuous line along a valley or watercourse, of a cultivated dry valley in order to control erosion. A study carried out on a grassed waterway over 8 years in Bavaria showed that it can lead to several other types of positive impacts, e.g. on biodiversity.
Distinctions
Confusion between "grassed waterway" and "vegetative filter strips" should be avoided. The latter are generally narrower (only a few metres wide) and are installed along rivers as well as along or within cultivated fields. However, "buffer strip" can be a synonym, with shrubs and trees added to the plant component, as can "riparian zone".
Runoff and erosion mitigation
Runoff generated on cropland during storms or long winter rains concentrates in the thalweg where it can lead to rill or gully erosion.
Rills and gullies further concentrate runoff and speed up its transfer, which can worsen damage occurring downstream. This can result in a muddy flood.
In this context, a grassed waterway allows increasing soil cohesion and roughness. It also prevents the formation of rills and gullies. Furthermore, it can slow down runoff and allow its re-infiltration during long winter rains. In contrast, its infiltration capacity is generally not sufficient to reinfiltrate runoff produced by heavy spring and summer storms. It can therefore be useful to combine it with extra measures, like the installation of earthen dams across the grassed waterway, in order to buffer runoff temporarily.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What forms both valleys and canyons?
A. glaciers
B. rivers
C. wind
D. tides
Answer:
|
|
sciq-9146
|
multiple_choice
|
What type of fertilization do most reptiles use to reproduce?
|
[
"mechanical",
"internal",
"asexual",
"external"
] |
B
|
Relevant Documents:
Document 0:::
An associated reproductive pattern is a seasonal change in reproduction which is highly correlated with a change in gonad and associated hormone.
Notable Model Organisms
Parthenogenic Whiptail Lizards
Document 1:::
External fertilization is a mode of reproduction in which a male organism's sperm fertilizes a female organism's egg outside of the female's body.
It is contrasted with internal fertilization, in which sperm are introduced via insemination and then combine with an egg inside the body of a female organism. External fertilization typically occurs in water or a moist area to facilitate the movement of sperm to the egg. The release of eggs and sperm into the water is known as spawning. In motile species, spawning females often travel to a suitable location to release their eggs.
However, sessile species are less able to move to spawning locations and must release gametes locally. Among vertebrates, external fertilization is most common in amphibians and fish. Invertebrates utilizing external fertilization are mostly benthic, sessile, or both, including animals such as coral, sea anemones, and tube-dwelling polychaetes. Benthic marine plants also use external fertilization to reproduce. Environmental factors and timing are key challenges to the success of external fertilization. While in the water, the male and female must both release gametes at similar times in order to fertilize the egg. Gametes spawned into the water may also be washed away, eaten, or damaged by external factors.
Sexual selection
Sexual selection may not seem to occur during external fertilization, but there are ways it actually can. The two types of external fertilizers are nest builders and broadcast spawners. For female nest builders, the main choice is the location of where to lay her eggs. A female can choose a nest close to the male she wants to fertilize her eggs, but there is no guarantee that the preferred male will fertilize any of the eggs. Broadcast spawners have a very weak selection, due to the randomness of releasing gametes. To look into the effect of female choice on external fertilization, an in vitro sperm competition experiment was performed. The results concluded that ther
Document 2:::
Roshd Biological Education is a quarterly science educational magazine covering recent developments in biology and biology education, aimed at a Persian-speaking audience of biology teachers. Founded in 1985, it is published by The Teaching Aids Publication Bureau, Organization for Educational Planning and Research, Ministry of Education, Iran. Roshd Biological Education has an editorial board composed of Iranian biologists, experts in biology education, science journalists and biology teachers.
It is read by both biology teachers and students, as a way of launching innovations and new trends in biology education, and helping biology teachers to teach biology in better and more effective ways.
Magazine layout
As of Autumn 2012, the magazine is laid out as follows:
Editorial—often offering a point of view from the editor in chief on educational and/or biological topics.
Explore— New research methods and results on biology and/or education.
World—Reports and explorations of biological education worldwide.
In Brief—Summaries of research news and discoveries.
Trends—showing how new technology is altering the way we live our lives.
Point of View—Offering personal commentaries on contemporary topics.
Essay or Interview—often with a pioneer of a biological and/or educational researcher or an influential scientific educational leader.
Muslim Biologists—Short histories of Muslim Biologists.
Environment—An article on Iranian environment and its problems.
News and Reports—Offering short news and reports events on biology education.
In Brief—Short articles explaining interesting facts.
Questions and Answers—Questions about biology concepts and their answers.
Book and periodical Reviews—About new publication on biology and/or education.
Reactions—Letter to the editors.
Editorial staff
Mohammad Karamudini, editor in chief
History
Roshd Biological Education started in 1985 together with many other magazines in other science and art. The first editor was Dr. Nouri-Dalooi, th
Document 3:::
Sexual characteristics are physical traits of an organism (typically of a sexually dimorphic organism) which are indicative of or resultant from biological sexual factors. These include both primary sex characteristics, such as gonads, and secondary sex characteristics.
Humans
In humans, sex organs or primary sexual characteristics, which are those a person is born with, can be distinguished from secondary sex characteristics, which develop later in life, usually during puberty. The development of both is controlled by sex hormones produced by the body after the initial fetal stage where the presence or absence of the Y-chromosome and/or the SRY gene determine development.
Male primary sex characteristics are the penis, the scrotum and the ability to ejaculate when matured. Female primary sex characteristics are the vagina, uterus, fallopian tubes, clitoris, cervix, and the ability to give birth and menstruate when matured.
Hormones that express sexual differentiation in humans include:
estrogens
progesterone
androgens such as testosterone
The following table lists the typical sexual characteristics in humans (even though some of these can also appear in other animals as well):
Other organisms
In invertebrates and plants, hermaphrodites (which have both male and female reproductive organs either at the same time or during their life cycle) are common, and in many cases, the norm.
In other varieties of multicellular life (e.g. the fungi division, Basidiomycota) sexual characteristics can be much more complex, and may involve many more than two sexes. For details on the sexual characteristics of fungi, see: Hypha and Plasmogamy.
Secondary sex characteristics in non-human animals include manes of male lions, long tail feathers of male peafowl, the tusks of male narwhals, enlarged proboscises in male elephant seals and proboscis monkeys, the bright facial and rump coloration of male mandrills, and horns in many goats and antelopes.
See also
Mammalian gesta
Document 4:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of fertilization do most reptiles use to reproduce?
A. mechanical
B. internal
C. asexual
D. external
Answer:
|
|
sciq-4472
|
multiple_choice
|
What is the basic structure that holds plants upright, allowing plants to get the sunlight and air they need?
|
[
"stamen",
"twig",
"stem",
"root"
] |
C
|
Relevant Documents:
Document 0:::
A stem is one of two main structural axes of a vascular plant, the other being the root. It supports leaves, flowers and fruits; transports water and dissolved substances between the roots and the shoots in the xylem and phloem; carries out photosynthesis; stores nutrients; and produces new living tissue. The stem can also be called the halm, haulm, or culm.
The stem is normally divided into nodes and internodes:
The nodes are the points of attachment for leaves and can hold one or more leaves. There are sometimes axillary buds between the stem and leaf which can grow into branches (with leaves, conifer cones, or flowers). Adventitious roots may also be produced from the nodes. Vines may produce tendrils from nodes.
The internodes distance one node from another.
The term "shoots" is often confused with "stems"; "shoots" generally refers to new fresh plant growth, including both stems and other structures like leaves or flowers.
In most plants, stems are located above the soil surface, but some plants have underground stems.
Stems have several main functions:
Support for and the elevation of leaves, flowers, and fruits. The stems keep the leaves in the light and provide a place for the plant to keep its flowers and fruits.
Transport of fluids between the roots and the shoots in the xylem and phloem.
Storage of nutrients.
Production of new living tissue. The normal lifespan of plant cells is one to three years. Stems have cells called meristems that annually generate new living tissue.
Photosynthesis.
Stems have two pipe-like tissues called xylem and phloem. The xylem tissue arises from the cell facing inside and transports water by the action of transpiration pull, capillary action, and root pressure. The phloem tissue arises from the cell facing outside and consists of sieve tubes and their companion cells. The function of phloem tissue is to distribute food from photosynthetic tissue to other tissues. The two tissues are separated by cambium, a tis
Document 1:::
Phytomorphology is the study of the physical form and external structure of plants. This is usually considered distinct from plant anatomy, which is the study of the internal structure of plants, especially at the microscopic level. Plant morphology is useful in the visual identification of plants. Recent studies in molecular biology started to investigate the molecular processes involved in determining the conservation and diversification of plant morphologies. In these studies transcriptome conservation patterns were found to mark crucial ontogenetic transitions during the plant life cycle which may result in evolutionary constraints limiting diversification.
Scope
Plant morphology "represents a study of the development, form, and structure of plants, and, by implication, an attempt to interpret these on the basis of similarity of plan and origin". There are four major areas of investigation in plant morphology, and each overlaps with another field of the biological sciences.
First of all, morphology is comparative, meaning that the morphologist examines structures in many different plants of the same or different species, then draws comparisons and formulates ideas about similarities. When structures in different species are believed to exist and develop as a result of common, inherited genetic pathways, those structures are termed homologous. For example, the leaves of pine, oak, and cabbage all look very different, but share certain basic structures and arrangement of parts. The homology of leaves is an easy conclusion to make. The plant morphologist goes further, and discovers that the spines of cactus also share the same basic structure and development as leaves in other plants, and therefore cactus spines are homologous to leaves as well. This aspect of plant morphology overlaps with the study of plant evolution and paleobotany.
Secondly, plant morphology observes both the vegetative (somatic) structures of plants, as well as the reproductive str
Document 2:::
Stem succulents are fleshy succulent columnar shaped plants which conduct photosynthesis mainly through their stems rather than their leaves. These plants are defined by their succulent stems and have evolved to have similar forms by convergent evolution to occupy similar niches.
Description
Stem succulents are succulent plants defined by their succulent stems, which function to store water and conduct photosynthesis. These plants, like many others native to hot desert regions, undergo CAM photosynthesis, an alternative metabolic pathway in which the plants' stomata open to exchange gases and fix CO2 almost exclusively at night. Their leaves are absent or highly reduced, instead forming protective spines or thorns to deter herbivores and collect drip-condensed water vapor at night.
Stem succulents are related by form, but not by evolution. They evolved to have similar forms and physiological characteristics by convergent evolution. Examples are tall, thin Euphorbias from deserts and arid regions of southern Africa and Madagascar, similarly shaped cacti from North America and South America, which occupy a similar xeric evolutionary niche, and members of two genera of the family Asclepiadaceae (Hoodia and Stapelia).
Document 3:::
Cell mechanics is a sub-field of biophysics that focuses on the mechanical properties and behavior of living cells and how it relates to cell function. It encompasses aspects of cell biophysics, biomechanics, soft matter physics and rheology, mechanobiology and cell biology.
Eukaryotic
Eukaryotic cells are cells that consist of membrane-bound organelles, a membrane-bound nucleus, and more than one linear chromosome. Being much more complex than prokaryotic cells (cells without a true nucleus), eukaryotes must protect their organelles from outside forces.
Plant
Plant cell mechanics combines principles of biomechanics and mechanobiology to investigate the growth and shaping of the plant cells. Plant cells, similar to animal cells, respond to externally applied forces, such as by reorganization of their cytoskeletal network. The presence of a considerably rigid extracellular matrix, the cell wall, however, bestows the plant cells with a set of particular properties. Mainly, the growth of plant cells is controlled by the mechanics and chemical composition of the cell wall. A major part of research in plant cell mechanics is put toward the measurement and modeling of the cell wall mechanics to understand how modification of its composition and mechanical properties affects the cell function, growth and morphogenesis.
Animal
Because animal cells do not have cell walls to protect them like plant cells, they require other specialized structures to sustain external mechanical forces. All animal cells are encased within a cell membrane made of a thin lipid bilayer that protects the cell from exposure to the outside environment. Using receptors composed of protein structures, the cell membrane is able to let selected molecules within the cell. Inside the cell membrane includes the cytoplasm, which contains the cytoskeleton. A network of filamentous proteins including microtubules, intermediate filaments, and actin filaments makes up the cytoskeleton and helps maintain th
Document 4:::
Plant stem cells
Plant stem cells are innately undifferentiated cells located in the meristems of plants. Plant stem cells serve as the origin of plant vitality, as they maintain themselves while providing a steady supply of precursor cells to form differentiated tissues and organs in plants. Two distinct areas of stem cells are recognised: the apical meristem and the lateral meristem.
Plant stem cells are characterized by two distinctive properties: the ability to create all differentiated cell types and the ability to self-renew such that the number of stem cells is maintained. Plant stem cells never undergo an aging process but immortally give rise to new specialized and unspecialized cells, and they have the potential to grow into any organ, tissue, or cell in the body. Thus they are totipotent cells equipped with regenerative powers that facilitate plant growth and the production of new organs throughout the plant's lifetime.
Unlike animals, plants are immobile. As plants cannot escape from danger by taking motion, they need a special mechanism to withstand various and sometimes unforeseen environmental stress. Here, what empowers them to withstand harsh external influence and preserve life is stem cells. In fact, plants comprise the oldest and the largest living organisms on earth, including Bristlecone Pines in California, U.S. (4,842 years old), and the Giant Sequoia in mountainous regions of California, U.S. (87 meters in height and 2,000 tons in weight). This is possible because they have a modular body plan that enables them to survive substantial damage by initiating continuous and repetitive formation of new structures and organs such as leaves and flowers.
Plant stem cells are also characterized by their location in specialized structures called meristematic tissues, which are located in root apical meristem (RAM), shoot apical meristem (SAM), and vascular system ((pro)cambium or vascular meristem.)
Research and development
Traditionally, plant stem ce
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the basic structure that holds plants upright, allowing plants to get the sunlight and air they need?
A. stamen
B. twig
C. stem
D. root
Answer:
|
|
sciq-8011
|
multiple_choice
|
Patients with familial hypercholesterolemia have life-threatening levels of cholesterol because their cells cannot clear what particles from their blood?
|
[
"oxygen (O)",
"low-density lipoprotein (ldl)",
"iron (Fe)",
"high - density lipoprotein (hdl)"
] |
B
|
Relevant Documents:
Document 0:::
Atherosclerosis is a pattern of the disease arteriosclerosis, characterized by the development of abnormalities called lesions in the walls of arteries. These lesions may lead to narrowing of the arterial walls due to the buildup of atheromatous plaques. At onset there are usually no symptoms, but if they develop, symptoms generally begin around middle age. In severe cases, it can result in coronary artery disease, stroke, peripheral artery disease, or kidney disorders, depending on where in the body the affected arteries are located.
The exact cause of atherosclerosis is unknown and is proposed to be multifactorial. Risk factors include abnormal cholesterol levels, elevated levels of inflammatory biomarkers, high blood pressure, diabetes, smoking (both active and passive smoking), obesity, genetic factors, family history, lifestyle habits, and an unhealthy diet. Plaque is made up of fat, cholesterol, calcium, and other substances found in the blood. The narrowing of arteries limits the flow of oxygen-rich blood to parts of the body. Diagnosis is based upon a physical exam, electrocardiogram, and exercise stress test, among others.
Prevention is generally by eating a healthy diet, exercising, not smoking, and maintaining a normal weight. Treatment of established disease may include medications to lower cholesterol such as statins, blood pressure medication, or medications that decrease clotting, such as aspirin. A number of procedures may also be carried out such as percutaneous coronary intervention, coronary artery bypass graft, or carotid endarterectomy.
Atherosclerosis generally starts when a person is young and worsens with age. Almost all people are affected to some degree by the age of 65. It is the number one cause of death and disability in developed countries. Though it was first described in 1575, there is evidence that the condition occurred in people more than 5,000 years ago.
Signs and symptoms
Atherosclerosis is asymptomatic for decades because
Document 1:::
This list consists of common foods with their cholesterol content recorded in milligrams per 100 grams (3.5 ounces) of food.
Functions
Cholesterol is a sterol, a steroid-like lipid made by animals, including humans. The human body makes one-eighth to one-fourth teaspoons of pure cholesterol daily. A cholesterol level of 5.5 millimoles per litre or below is recommended for an adult. Elevated cholesterol in the body can lead to a condition called atherosclerosis, in which excessive cholesterol is deposited in artery walls. This condition blocks the blood flow to vital organs, which can result in high blood pressure or stroke.
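As a small worked example of the units involved: the 5.5 mmol/L threshold above can be converted to the mg/dL figures often quoted elsewhere using the molar mass of cholesterol (about 386.65 g/mol). This is an illustrative calculation only, not a clinical guideline.

```python
# Convert a blood-cholesterol concentration from mmol/L to mg/dL.
# Cholesterol (C27H46O) has a molar mass of roughly 386.65 g/mol,
# so 1 mmol/L corresponds to about 38.7 mg/dL.
MOLAR_MASS_MG_PER_MMOL = 386.65

def mmol_per_l_to_mg_per_dl(mmol_per_l: float) -> float:
    mg_per_litre = mmol_per_l * MOLAR_MASS_MG_PER_MMOL
    return mg_per_litre / 10  # 1 dL = 0.1 L

print(round(mmol_per_l_to_mg_per_dl(5.5)))  # about 213 mg/dL
```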
Cholesterol is not always bad. It's a vital part of the cell membrane and a precursor to substances such as brain matter and some sex hormones. There are some types of cholesterol which are beneficial to the heart and blood vessels. High-density lipoprotein is commonly called "good" cholesterol. These lipoproteins help in the removal of cholesterol from the cells, which is then transported back to the liver where it is disintegrated and excreted as waste or broken down into parts.
Cholesterol content of various foods
See also
Nutrition
Plant stanol ester
Fatty acid
Document 2:::
Jennifer Eileen Van Eyk is the Erika Glazer Chair in Women's Heart Health, the Director of Advanced Clinical Biosystems Institute in the Department of Biomedical Sciences, the Director of Basic Science Research in the Women's Heart Center, a Professor in Medicine and in Biomedical Sciences at Cedars-Sinai. She is a renowned scientist in the field of clinical proteomics.
Early life and education
Jennifer E. Van Eyk was born in Northern Ontario, Canada. She obtained a bachelor of science in biology and chemistry from the University of Waterloo in 1982. She received a PhD in biochemistry under the direction of Robert S. Hodges from University of Alberta in 1991. She conducted post-doctoral research at University of Heidelberg, University of Alberta, and University of Illinois at Chicago with R. John Solaro.
Career
Van Eyk began her academic career in 1996 as an assistant professor in the Department of Physiology at Queen's University, Kingston, Canada, and she was promoted to associate professor and received tenure in 2001. She then left Canada to join Johns Hopkins University as the Director of the Proteomics Innovation Center in Heart Failure in 2003, and later Cedars-Sinai in 2014.
Van Eyk is a member-at-large and a council member of the Human Proteome Organization, and the president of the US Human Proteome Organization. She was a technical briefs editor at Proteomics. She served on the editorial board of Proteomics: Clinical Applications and of the Journal of Physiology and Circulation Research. She currently serves on the editorial board of Clinical Proteomics. She is a Fellow of the International Society for Heart Research and a Fellow of the American Heart Association.
Research
She is an international leading scientist in clinical proteomics. She is the founding director of Cedars-Sinai Advanced Clinical Biosystems Research Institute, whose motto is “from discovery to patient care”.
She is co-editor of Clinical Proteomics: From Diagnosis to Therapy, an essential,
Document 3:::
Lipidology is the scientific study of lipids. Lipids are a group of biological macromolecules that have a multitude of functions in the body. Clinical studies on lipid metabolism in the body have led to developments in therapeutic lipidology for disorders such as cardiovascular disease.
History
Compared to other biomedical fields, lipidology was long-neglected as the handling of oils, smears, and greases was unappealing to scientists and lipid separation was difficult. It was not until 2002 that lipidomics, the study of lipid networks and their interaction with other molecules, appeared in the scientific literature. Attention to the field was bolstered by the introduction of chromatography, spectrometry, and various forms of spectroscopy to the field, allowing lipids to be isolated and analyzed. The field was further popularized following the cytologic application of the electron microscope, which led scientists to find that many metabolic pathways take place within, along, and through the cell membrane - the properties of which are strongly influenced by lipid composition.
Clinical lipidology
The Framingham Heart Study and other epidemiological studies have found a correlation between lipoproteins and cardiovascular disease (CVD). Lipoproteins are generally a major target of study in lipidology since lipids are transported throughout the body in the form of lipoproteins.
A class of lipids known as phospholipids helps make up lipoproteins, and one type of lipoprotein is high-density lipoprotein (HDL). A high concentration of high-density lipoprotein cholesterol (HDL-C) has what is known as a vasoprotective effect on the body, a finding that correlates with an enhanced cardiovascular effect. There is also a correlation between those with diseases such as chronic kidney disease, coronary artery disease, or diabetes mellitus and the possibility of a low vasoprotective effect from HDL.
Another factor of CVD that is often overlooked involves the
Document 4:::
The Robertson Centre for Biostatistics is a specialised biostatistical research centre in Glasgow, Scotland. It is part of the College of Medical, Veterinary and Life Sciences and the Institute of Health and Wellbeing at the University of Glasgow. All scales of research are carried out at the centre from multi-site clinical trials to small scale research projects. The centre also has interests in the development of novel informatics solutions for clinical research, statistical issues in epidemiology and health economic evaluation.
History
The centre led the WOSCOP study (New England Journal of Medicine 1995; 333:1301-7) which found that treatment with Pravastatin significantly reduced the risk of myocardial infarction and the risk of death from cardiovascular causes without adversely affecting the risk of death from noncardiovascular causes in men with moderate hypercholesterolaemia and no history of myocardial infarction.
The Robertson Centre joined with the Glasgow Clinical Research Facility and Greater Glasgow and Clyde NHS R&D division in November 2007 to create a UKCRN registered Clinical Trials Unit - the Glasgow Clinical Trials Unit.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Patients with familial hypercholesterolemia have life-threatening levels of cholesterol because their cells cannot clear what particles from their blood?
A. oxygen (O)
B. low-density lipoprotein (ldl)
C. iron (Fe)
D. high - density lipoprotein (hdl)
Answer:
|
|
sciq-6333
|
multiple_choice
|
Some membrane proteins that actively transport ions contribute to what?
|
[
"organism potential",
"membrane potential",
"cellular potential",
"protein potential"
] |
B
|
Relevant Documents:
Document 0:::
Membrane proteins are common proteins that are part of, or interact with, biological membranes. Membrane proteins fall into several broad categories depending on their location. Integral membrane proteins are a permanent part of a cell membrane and can either penetrate the membrane (transmembrane) or associate with one or the other side of a membrane (integral monotopic). Peripheral membrane proteins are transiently associated with the cell membrane.
Membrane proteins are common, and medically important—about a third of all human proteins are membrane proteins, and these are targets for more than half of all drugs. Nonetheless, compared to other classes of proteins, determining membrane protein structures remains a challenge in large part due to the difficulty in establishing experimental conditions that can preserve the correct conformation of the protein in isolation from its native environment.
Function
Membrane proteins perform a variety of functions vital to the survival of organisms:
Membrane receptor proteins relay signals between the cell's internal and external environments.
Transport proteins move molecules and ions across the membrane. They can be categorized according to the Transporter Classification database.
Membrane enzymes may have many activities, such as oxidoreductase, transferase or hydrolase.
Cell adhesion molecules allow cells to identify each other and interact. For example, proteins involved in immune response
The localization of proteins in membranes can be predicted reliably using hydrophobicity analyses of protein sequences, i.e. the localization of hydrophobic amino acid sequences.
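A minimal sketch of the kind of hydrophobicity analysis mentioned above, using the Kyte-Doolittle hydropathy scale and a sliding window; the window length and the toy sequence are illustrative assumptions, not a validated prediction method.

```python
# Sliding-window hydropathy analysis (Kyte-Doolittle scale, 1982).
# Long stretches with a high average score are candidate membrane-spanning segments.
KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
      "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
      "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9, "R": -4.5}

def hydropathy_profile(seq: str, window: int = 19) -> list:
    """Average hydropathy of every window of `window` consecutive residues."""
    scores = [KD[aa] for aa in seq]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

# Toy sequence (not a real protein): a hydrophobic stretch flanked by polar residues.
toy = "MKKS" + "LIVALFAVILLIAVLFA" + "RKDE"
print(max(hydropathy_profile(toy, window=9)))  # a clear hydrophobic peak
```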
Integral membrane proteins
Integral membrane proteins are permanently attached to the membrane. Such proteins can be separated from the biological membranes only using detergents, nonpolar solvents, or sometimes denaturing agents. They can be classified according to their relationship with the bilayer:
Integral polytopic proteins are transmembran
Document 1:::
In cellular biology, membrane transport refers to the collection of mechanisms that regulate the passage of solutes such as ions and small molecules through biological membranes, which are lipid bilayers that contain proteins embedded in them. The regulation of passage through the membrane is due to selective membrane permeability – a characteristic of biological membranes which allows them to separate substances of distinct chemical nature. In other words, they can be permeable to certain substances but not to others.
The movements of most solutes through the membrane are mediated by membrane transport proteins which are specialized to varying degrees in the transport of specific molecules. As the diversity and physiology of the distinct cells is highly related to their capacities to attract different external elements, it is postulated that there is a group of specific transport proteins for each cell type and for every specific physiological stage. This differential expression is regulated through the differential transcription of the genes coding for these proteins and its translation, for instance, through genetic-molecular mechanisms, but also at the cell biology level: the production of these proteins can be activated by cellular signaling pathways, at the biochemical level, or even by being situated in cytoplasmic vesicles. The cell membrane regulates the transport of materials entering and exiting the cell.
Background
Thermodynamically the flow of substances from one compartment to another can occur in the direction of a concentration or electrochemical gradient or against it. If the exchange of substances occurs in the direction of the gradient, that is, in the direction of decreasing potential, there is no requirement for an input of energy from outside the system; if, however, the transport is against the gradient, it will require the input of energy, metabolic energy in this case.
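A minimal worked sketch of the thermodynamic statement above: the free-energy change for moving a charged solute across the membrane combines a concentration (gradient) term and an electrical term. The concentrations, charge, and membrane voltage used below are illustrative assumptions.

```python
from math import log

R = 8.314     # gas constant, J/(mol*K)
F = 96485.0   # Faraday constant, C/mol
T = 310.0     # roughly body temperature, K

def transport_delta_g(c_in, c_out, z=0, v_m=0.0):
    """Free energy (J/mol) to move one mole of solute from outside to inside.

    c_in, c_out: concentrations in the same units; z: solute charge;
    v_m: membrane potential (inside minus outside) in volts.
    A positive result means the transport requires an input of energy.
    """
    return R * T * log(c_in / c_out) + z * F * v_m

# Illustrative: moving Na+ (z = +1) into a cell at -70 mV, down its gradient.
print(transport_delta_g(c_in=12e-3, c_out=145e-3, z=1, v_m=-0.070))  # negative (spontaneous)
```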
For example, a classic chemical mechanism for separation that does
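The thermodynamic point above (that transport against an electrochemical gradient requires an input of metabolic energy) can be made quantitative with a short sketch. The Na+ concentrations and membrane potential used here are typical textbook values assumed for illustration, not figures taken from this excerpt.
```python
# Illustrative calculation of the free-energy cost of transport against an
# electrochemical gradient: Delta_G = R*T*ln(C_dest/C_src) + z*F*(V_dest - V_src).
# The Na+ concentrations and membrane potential below are assumed textbook values.
import math

R = 8.314       # J/(mol*K), gas constant
F = 96485.0     # C/mol, Faraday constant
T = 310.0       # K, approximately body temperature

def transport_free_energy(c_src, c_dest, v_src, v_dest, z):
    """Delta_G (J/mol) for moving 1 mol of an ion of charge z from src to dest."""
    chemical = R * T * math.log(c_dest / c_src)
    electrical = z * F * (v_dest - v_src)
    return chemical + electrical

# Pumping Na+ (z = +1) out of a cell: ~12 mM inside, ~145 mM outside,
# interior potential ~ -70 mV relative to the outside.
dG = transport_free_energy(c_src=0.012, c_dest=0.145, v_src=-0.070, v_dest=0.0, z=+1)
print(f"Delta_G = {dG/1000:.1f} kJ/mol (> 0, so metabolic energy input is required)")
```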
Document 2:::
Membrane potential (also transmembrane potential or membrane voltage) is the difference in electric potential between the interior and the exterior of a biological cell. That is, there is a difference in the energy required for electric charges to move from the internal to exterior cellular environments and vice versa, as long as there is no acquisition of kinetic energy or the production of radiation. The concentration gradients of the charges directly determine this energy requirement. Relative to the exterior of the cell, typical values of membrane potential, normally given in units of millivolts and denoted as mV, range from –80 mV to –40 mV.
All animal cells are surrounded by a membrane composed of a lipid bilayer with proteins embedded in it. The membrane serves as both an insulator and a diffusion barrier to the movement of ions. Transmembrane proteins, also known as ion transporter or ion pump proteins, actively push ions across the membrane and establish concentration gradients across the membrane, and ion channels allow ions to move across the membrane down those concentration gradients. Ion pumps and ion channels are electrically equivalent to a set of batteries and resistors inserted in the membrane, and therefore create a voltage between the two sides of the membrane.
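As a numerical illustration of the voltage that a single ionic gradient would sustain at equilibrium, the Nernst equation can be applied to potassium; the K+ concentrations used below are typical textbook values assumed for illustration, not figures stated in this excerpt:
\[
E_{\mathrm{ion}} = \frac{RT}{zF}\,\ln\frac{[\mathrm{ion}]_{\mathrm{out}}}{[\mathrm{ion}]_{\mathrm{in}}},
\qquad
E_{\mathrm{K^{+}}} \approx \frac{(8.314\ \mathrm{J\,mol^{-1}\,K^{-1}})(310\ \mathrm{K})}{(+1)(96485\ \mathrm{C\,mol^{-1}})}\,\ln\frac{5\ \mathrm{mM}}{140\ \mathrm{mM}} \approx -89\ \mathrm{mV},
\]
which is consistent with the inside-negative membrane potentials described above.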
Almost all plasma membranes have an electrical potential across them, with the inside usually negative with respect to the outside. The membrane potential has two basic functions. First, it allows a cell to function as a battery, providing power to operate a variety of "molecular devices" embedded in the membrane. Second, in electrically excitable cells such as neurons and muscle cells, it is used for transmitting signals between different parts of a cell. Signals are generated by opening or closing of ion channels at one point in the membrane, producing a local change in the membrane potential. This change in the electric field can be quickly sensed by either adjacent or more distant ion chann
Document 3:::
The Society of General Physiologists (SGP) is a scientific organization whose purpose is to promote and disseminate knowledge in the field of general physiology, and otherwise to advance understanding and interest in the subject of general physiology. The Society’s main office is located at the Marine Biological Laboratory in Woods Hole, MA, where the society was founded in 1946. Past Presidents of the Society include Richard W. Aldrich, Richard W. Tsien, Clay Armstrong, and Andrew Szent-Gyorgi. The society's archives is held at the National Library of Medicine in Bethesda, Maryland.
Membership
The Society's international membership is made up of nearly 600 career physiologists who work in academia, government, and industry. Membership in the Society is open to any individual actively interested in the field of general physiology and who has made significant contributions to knowledge in that field. The Society has become known for promoting research in many subfields of cellular and molecular physiology, but especially in the fields of membrane transport and ion channels, cell membrane structure, regulation, and dynamics, and cellular contractility and molecular motors.
Activities
The major activity of the Society is its annual symposium, which is held at the Marine Biological Laboratory in Woods Hole, MA. Society of General Physiologists symposia cover the forefront of physiological research and are small enough to maximize discussion and interaction among both young and established investigators. Abstracts of the annual meeting are published in The Journal of General Physiology.
The 2015 symposium (September 16–20) topic is "Macromolecular Local Signaling Complexes." Detailed information regarding the scientific agenda and registration is provided at the symposium website:
https://web.archive.org/web/20150801070408/http://www.sgpweb.org/symposium2015.html
Recent past symposium topics include:
2014 Sensory Transduction
2013 The Enigmatic Chloride Ion: Tra
Document 4:::
Large conductance mechanosensitive ion channels (MscLs) (TC# 1.A.22) are a family of pore-forming membrane proteins that are responsible for translating stresses at the cell membrane into an electrophysiological response. MscL has a relatively large conductance, 3 nS, making it permeable to ions, water, and small proteins when opened. MscL acts as stretch-activated osmotic release valve in response to osmotic shock.
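For a rough sense of scale of the 3 nS conductance quoted above, Ohm's law gives the single-channel current at an assumed driving force (the 20 mV used here is an illustrative value, not one from this excerpt):
\[
I = gV \approx (3\ \mathrm{nS})\times(20\ \mathrm{mV}) = 60\ \mathrm{pA},
\]
much larger than the picoampere-scale currents of many ion-selective channels, which helps explain why MscL openings are easy to resolve in patch-clamp recordings.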
History
MscL was first discovered on the surface of giant Escherichia coli spheroplasts using the patch-clamp technique. Subsequently, the Escherichia coli MscL (Ec-MscL) gene was cloned in 1994. Following the cloning of MscL, the crystal structure of Mycobacterium tuberculosis MscL (Tb-MscL) was obtained in its closed conformation. In addition, the crystal structures of Staphylococcus aureus MscL (Sa-MscL) and Ec-MscL have been determined using X-ray crystallography and molecular modeling, respectively. However, some evidence suggests that the Sa-MscL structure is not physiological, and is due to the detergent used in crystallization.
Structure
Similar to other ion channels, MscLs are organized as symmetric oligomers with the permeation pathway formed by the packing of subunits around the axis of rotational symmetry. Unlike MscS, which is heptameric, MscL is likely pentameric; although the Sa-MscL appears to be a tetramer in a crystal structure, this may be an artifact. MscL contains two transmembrane helices that are packed in an up-down/nearest neighbor topology. The permeation pathway of the MscL is approximately funnel shaped, with the larger opening facing the periplasmic surface of the membrane and the narrowest point near the cytoplasm. At the narrowest point, the pore is constricted by the side chains of symmetry-related residues in Ec-MscL: Leu19 and Val23. The pore diameter of MscL in the open state has been estimated at ~3 nm, which accommodates the passage of small proteins up to 9 kDa.
Ec-MscL consists of five identical subunits, each 136 amino acids
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Some membrane proteins that actively transport ions contribute to what?
A. organism potential
B. membrane potential
C. cellular potential
D. protein potential
Answer:
|
|
sciq-4591
|
multiple_choice
|
What science is the study of the shape and arrangement of cells in tissue?
|
[
"histology",
"genetics",
"cellology",
"methodology"
] |
A
|
Relevant Documents:
Document 0:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind: neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases which has overall aided in the understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
Document 1:::
Anatomy () is the branch of biology concerned with the study of the structure of organisms and their parts. Anatomy is a branch of natural science that deals with the structural organization of living things. It is an old science, having its beginnings in prehistoric times. Anatomy is inherently tied to developmental biology, embryology, comparative anatomy, evolutionary biology, and phylogeny, as these are the processes by which anatomy is generated, both over immediate and long-term timescales. Anatomy and physiology, which study the structure and function of organisms and their parts respectively, make a natural pair of related disciplines, and are often studied together. Human anatomy is one of the essential basic sciences that are applied in medicine.
Anatomy is a complex and dynamic field that is constantly evolving as new discoveries are made. In recent years, there has been a significant increase in the use of advanced imaging techniques, such as MRI and CT scans, which allow for more detailed and accurate visualizations of the body's structures.
The discipline of anatomy is divided into macroscopic and microscopic parts. Macroscopic anatomy, or gross anatomy, is the examination of an animal's body parts using unaided eyesight. Gross anatomy also includes the branch of superficial anatomy. Microscopic anatomy involves the use of optical instruments in the study of the tissues of various structures, known as histology, and also in the study of cells.
The history of anatomy is characterized by a progressive understanding of the functions of the organs and structures of the human body. Methods have also improved dramatically, advancing from the examination of animals by dissection of carcasses and cadavers (corpses) to 20th-century medical imaging techniques, including X-ray, ultrasound, and magnetic resonance imaging.
Etymology and definition
Derived from the Greek anatomē "dissection" (from anatémnō "I cut up, cut open" from ἀνά aná "up", and τέμνω té
Document 2:::
Medical biology is a field of biology that has practical applications in medicine, health care and laboratory diagnostics. It includes many biomedical disciplines and areas of specialty that typically contains the "bio-" prefix such as:
molecular biology, biochemistry, biophysics, biotechnology, cell biology, embryology,
nanobiotechnology, biological engineering, laboratory medical biology,
cytogenetics, genetics, gene therapy,
bioinformatics, biostatistics, systems biology,
microbiology, virology, parasitology,
physiology, pathology,
toxicology, and many others that generally concern life sciences as applied to medicine.
Medical biology is the cornerstone of modern health care and laboratory diagnostics. It concerns a wide range of scientific and technological approaches: from in vitro diagnostics to in vitro fertilisation, from the molecular mechanisms of cystic fibrosis to the population dynamics of HIV, from understanding molecular interactions to the study of carcinogenesis, from single-nucleotide polymorphisms (SNPs) to gene therapy.
Medical biology, based on molecular biology, combines all issues of developing molecular medicine into large-scale structural and functional relationships of the human genome, transcriptome, proteome and metabolome, with the particular aim of devising new technologies for prediction, diagnosis and therapy.
See also
External links
Document 3:::
Molecular anatomy is the subspecialty of microscopic anatomy concerned with the identification and description of molecular structures of cells, tissues, and organs in an organism.
Document 4:::
The following outline is provided as an overview of and topical guide to biophysics:
Biophysics – interdisciplinary science that uses the methods of physics to study biological systems.
Nature of biophysics
Biophysics is
An academic discipline – branch of knowledge that is taught and researched at the college or university level. Disciplines are defined (in part), and recognized by the academic journals in which research is published, and the learned societies and academic departments or faculties to which their practitioners belong.
A scientific field (a branch of science) – widely recognized category of specialized expertise within science, and typically embodies its own terminology and nomenclature. Such a field will usually be represented by one or more scientific journals, where peer-reviewed research is published.
A natural science – one that seeks to elucidate the rules that govern the natural world using empirical and scientific methods.
A biological science – concerned with the study of living organisms, including their structure, function, growth, evolution, distribution, and taxonomy.
A branch of physics – concerned with the study of matter and its motion through space and time, along with related concepts such as energy and force.
An interdisciplinary field – field of science that overlaps with other sciences
Scope of biophysics research
Biomolecular scale
Biomolecule
Biomolecular structure
Organismal scale
Animal locomotion
Biomechanics
Biomineralization
Motility
Environmental scale
Biophysical environment
Biophysics research overlaps with
Agrophysics
Biochemistry
Biophysical chemistry
Bioengineering
Biogeophysics
Nanotechnology
Systems biology
Branches of biophysics
Astrobiophysics – field of intersection between astrophysics and biophysics concerned with the influence of the astrophysical phenomena upon life on planet Earth or some other planet in general.
Medical biophysics – interdisciplinary field that applies me
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What science is the study of the shape and arrangement of cells in tissue?
A. histology
B. genetics
C. cellology
D. methodology
Answer:
|
|
sciq-5925
|
multiple_choice
|
At the bottom of lakes and ponds, bacteria in what zone break down dead organisms that sink there?
|
[
"photic",
"photoreactive zone",
"aphotic zone",
"trophic"
] |
C
|
Relevant Documents:
Document 0:::
"Resurrection ecology" is an evolutionary biology technique whereby researchers hatch dormant eggs from lake sediments to study animals as they existed decades ago. It is a new approach that might allow scientists to observe evolution as it occurred, by comparing the animal forms hatched from older eggs with their extant descendants. This technique is particularly important because the live organisms hatched from egg banks can be used to learn about the evolution of behavioural, plastic or competitive traits that are not apparent from more traditional paleontological methods.
One such researcher in the field is W. Charles Kerfoot of Michigan Technological University whose results were published in the journal Limnology and Oceanography. He reported on success in a search for "resting eggs" of zooplankton that are dormant in Portage Lake on Michigan's Upper Peninsula. The lake has undergone a considerable amount of change over the last 100 years including flooding by copper mine debris, dredging, and eutrophication. Others have used this technique to explore the evolutionary effects of eutrophication, predation, and metal contamination. Resurrection ecology provided the best empirical example of the "Red Queen Hypothesis" in nature. Any organism that produces a resting stage can be used for resurrection ecology. However, the most frequently used organism is the water flea, Daphnia. This genus has well-established protocols for lab experimentation and usually asexually reproduces allowing for experiments on many individuals with the same genotype.
Although the more esoteric demonstration of natural selection is alone a valuable aspect of the study described, there is a clear ecological implication in the discovery that very old zooplankton eggs have survived in the lake: the potential still exists, if and when this environment is restored to something of a more pristine nature, for at least some of the original (pre-disturbance) inhabitants to re-establish populatio
Document 1:::
Dead zones are hypoxic (low-oxygen) areas in the world's oceans and large lakes. Hypoxia occurs when dissolved oxygen (DO) concentration falls to or below 2 mg of O2/liter. When a body of water experiences hypoxic conditions, aquatic flora and fauna begin to change behavior in order to reach sections of water with higher oxygen levels. Once DO declines below 0.5 ml O2/liter in a body of water, mass mortality occurs. With such a low concentration of DO, these bodies of water fail to support the aquatic life living there. Historically, many of these sites were naturally occurring. However, in the 1970s, oceanographers began noting increased instances and expanses of dead zones. These occur near inhabited coastlines, where aquatic life is most concentrated.
Coastal regions, such as the Baltic Sea, the northern Gulf of Mexico, and the Chesapeake Bay, as well as large enclosed water bodies like Lake Erie, have been affected by deoxygenation due to eutrophication. Excess nutrients are input into these systems by rivers, ultimately from urban and agricultural runoff and exacerbated by deforestation. These nutrients lead to high productivity that produces organic material that sinks to the bottom and is respired. The respiration of that organic material uses up the oxygen and causes hypoxia or anoxia.
The UN Environment Programme reported 146 dead zones in 2004 in the world's oceans where marine life could not be supported due to depleted oxygen levels. Some of these were as small as a square kilometer (0.4 mi2), but the largest dead zone covered 70,000 square kilometers (27,000 mi2). A 2008 study counted 405 dead zones worldwide.
Causes
Aquatic and marine dead zones can be caused by an increase in nutrients (particularly nitrogen and phosphorus) in the water, known as eutrophication. These nutrients are the fundamental building blocks of single-celled, plant-like organisms that live in the water column, and whose growth is limited in part by the availability of these
Document 2:::
Hydrobiology is the science of life and life processes in water. Much of modern hydrobiology can be viewed as a sub-discipline of ecology but the sphere of hydrobiology includes taxonomy, economic and industrial biology, morphology, and physiology. The one distinguishing aspect is that all fields relate to aquatic organisms. Most work is related to limnology and can be divided into lotic system ecology (flowing waters) and lentic system ecology (still waters).
One of the significant areas of current research is eutrophication. Special attention is paid to biotic interactions in plankton assemblage including the microbial loop, the mechanism of influencing algal blooms, phosphorus load, and lake turnover. Another subject of research is the acidification of mountain lakes. Long-term studies are carried out on changes in the ionic composition of the water of rivers, lakes and reservoirs in connection with acid rain and fertilization. One goal of current research is elucidation of the basic environmental functions of the ecosystem in reservoirs, which are important for water quality management and water supply.
Much of the early work of hydrobiologists concentrated on the biological processes utilized in sewage treatment and water purification especially slow sand filters. Other historically important work sought to provide biotic indices for classifying waters according to the biotic communities that they supported. This work continues to this day in Europe in the development of classification tools for assessing water bodies for the EU water framework directive.
A hydrobiologist technician conducts field analysis for hydrobiology. They identify plants and living species, locate their habitat, and count them. They also identify pollutants and nuisances that can affect the aquatic fauna and flora. They take the samples and write reports of their observations for publications.
A hydrobiologist engineer intervenes more in the process of the study. They define the inte
Document 3:::
Subsurface lithoautotrophic microbial ecosystems, or "SLIMEs" (also abbreviated "SLMEs" or "SLiMEs"), are a type of endolithic ecosystems. They are defined by Edward O. Wilson as "unique assemblages of bacteria and fungi that occupy pores in the interlocking mineral grains of igneous rock beneath Earth's surface."
Endolithic systems are still at an early stage of exploration. In some cases their biota can support simple invertebrates, but most organisms are unicellular. Near-surface layers of rock may contain blue-green algae but most energy comes from chemical synthesis of minerals. The limited supply of energy limits the rates of growth and reproduction. In deeper rock layers microbes are exposed to high pressures and temperatures.
Document 4:::
Marine prokaryotes are marine bacteria and marine archaea. They are defined by their habitat as prokaryotes that live in marine environments, that is, in the saltwater of seas or oceans or the brackish water of coastal estuaries. All cellular life forms can be divided into prokaryotes and eukaryotes. Eukaryotes are organisms whose cells have a nucleus enclosed within membranes, whereas prokaryotes are the organisms that do not have a nucleus enclosed within a membrane. The three-domain system of classifying life adds another division: the prokaryotes are divided into two domains of life, the microscopic bacteria and the microscopic archaea, while everything else, the eukaryotes, become the third domain.
Prokaryotes play important roles in ecosystems as decomposers recycling nutrients. Some prokaryotes are pathogenic, causing disease and even death in plants and animals. Marine prokaryotes are responsible for significant levels of the photosynthesis that occurs in the ocean, as well as significant cycling of carbon and other nutrients.
Prokaryotes live throughout the biosphere. In 2018 it was estimated the total biomass of all prokaryotes on the planet was equivalent to 77 billion tonnes of carbon (77 Gt C). This is made up of 7 Gt C for archaea and 70 Gt C for bacteria. These figures can be contrasted with the estimate for the total biomass for animals on the planet, which is about 2 Gt C, and the total biomass of humans, which is 0.06 Gt C. This means archaea collectively have over 100 times the collective biomass of humans, and bacteria over 1000 times.
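The ratios quoted above follow directly from the biomass figures given in this excerpt; a quick numerical check:
```python
# Sanity check of the biomass ratios quoted above (all values in Gt of carbon).
archaea_gtc, bacteria_gtc, humans_gtc = 7, 70, 0.06
print(archaea_gtc / humans_gtc)   # ~117  -> "over 100 times" human biomass
print(bacteria_gtc / humans_gtc)  # ~1167 -> "over 1000 times" human biomass
```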
There is no clear evidence of life on Earth during the first 600 million years of its existence. When life did arrive, it was dominated for 3,200 million years by the marine prokaryotes. More complex life, in the form of crown eukaryotes, didn't appear until the Cambrian explosion a mere 500 million years ago.
Evolution
The Earth is about 4.54 billion years old. The earliest undisputed evidence of life on Eart
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
At the bottom of lakes and ponds, bacteria in what zone break down dead organisms that sink there?
A. photic
B. photoreactive zone
C. aphotic zone
D. trophic
Answer:
|
|
sciq-4469
|
multiple_choice
|
Featuring a stalk-like filament that ends in an anther, what is the male reproductive organ in a flower?
|
[
"stamen",
"angiosperms",
"petals",
"cones"
] |
A
|
Relevant Documents:
Document 0:::
Gynoecium (plural: gynoecia) is most commonly used as a collective term for the parts of a flower that produce ovules and ultimately develop into the fruit and seeds. The gynoecium is the innermost whorl of a flower; it consists of (one or more) pistils and is typically surrounded by the pollen-producing reproductive organs, the stamens, collectively called the androecium. The gynoecium is often referred to as the "female" portion of the flower, although rather than directly producing female gametes (i.e. egg cells), the gynoecium produces megaspores, each of which develops into a female gametophyte which then produces egg cells.
The term gynoecium is also used by botanists to refer to a cluster of archegonia and any associated modified leaves or stems present on a gametophyte shoot in mosses, liverworts, and hornworts. The corresponding terms for the male parts of those plants are clusters of antheridia within the androecium. Flowers that bear a gynoecium but no stamens are called pistillate or carpellate. Flowers lacking a gynoecium are called staminate.
The gynoecium is often referred to as female because it gives rise to female (egg-producing) gametophytes; however, strictly speaking sporophytes do not have a sex, only gametophytes do. Gynoecium development and arrangement is important in systematic research and identification of angiosperms, but can be the most challenging of the floral parts to interpret.
Introduction
Unlike most animals, plants grow new organs after embryogenesis, including new roots, leaves, and flowers. In the flowering plants, the gynoecium develops in the central region of the flower as a carpel or in groups of fused carpels. After fertilization, the gynoecium develops into a fruit that provides protection and nutrition for the developing seeds, and often aids in their dispersal. The gynoecium has several specialized tissues. The tissues of the gynoecium develop from genetic and hormonal interactions along three major axes. These tissue
Document 1:::
In botany, floral morphology is the study of the diversity of forms and structures presented by the flower, which, by definition, is a branch of limited growth that bears the modified leaves responsible for reproduction and protection of the gametes, called floral pieces.
Fertile leaves or sporophylls carry sporangiums, which will produce male and female gametes and therefore are responsible for producing the next generation of plants. The sterile leaves are modified leaves whose function is to protect the fertile parts or to attract pollinators. The branch of the flower that joins the floral parts to the stem is a shaft called the pedicel, which normally dilates at the top to form the receptacle in which the various floral parts are inserted.
All spermatophytes ("seed plants") possess flowers as defined here (in a broad sense), but the internal organization of the flower is very different in the two main groups of spermatophytes: living gymnosperms and angiosperms. Gymnosperms may possess flowers that are gathered in strobili, or the flower itself may be a strobilus of fertile leaves. Instead a typical angiosperm flower possesses verticils or ordered whorls that, from the outside in, are composed first of sterile parts, commonly called sepals (if their main function is protective) and petals (if their main function is to attract pollinators), and then the fertile parts, with reproductive function, which are composed of verticils or whorls of stamens (which carry the male gametes) and finally carpels (which enclose the female gametes).
The arrangement of the floral parts on the axis, the presence or absence of one or more floral parts, the size, the pigmentation and the relative arrangement of the floral parts are responsible for the existence of a great variety of flower types. Such diversity is particularly important in phylogenetic and taxonomic studies of angiosperms. The evolutionary interpretation of the different flower types takes into account aspects of
Document 2:::
The stamen (plural: stamina or stamens) is the pollen-producing reproductive organ of a flower. Collectively the stamens form the androecium.
Morphology and terminology
A stamen typically consists of a stalk called the filament and an anther which contains microsporangia. Most commonly anthers are two-lobed (each lobe is termed a locule) and are attached to the filament either at the base or in the middle area of the anther. The sterile tissue between the lobes is called the connective, an extension of the filament containing conducting strands. It can be seen as an extension on the dorsal side of the anther. A pollen grain develops from a microspore in the microsporangium and contains the male gametophyte. The size of anthers differs greatly, from a tiny fraction of a millimeter in Wolfia spp up to five inches (13 centimeters) in Canna iridiflora and Strelitzia nicolai.
The stamens in a flower are collectively called the androecium. The androecium can consist of as few as one-half stamen (i.e. a single locule) as in Canna species or as many as 3,482 stamens which have been counted in the saguaro (Carnegiea gigantea). The androecium in various species of plants forms a great variety of patterns, some of them highly complex. It generally surrounds the gynoecium and is surrounded by the perianth. A few members of the family Triuridaceae, particularly Lacandonia schismatica and Lacandonia braziliana, along with a few species of Trithuria (family Hydatellaceae) are exceptional in that their gynoecia surround their androecia.
Etymology
Stamen is the Latin word meaning "thread" (originally thread of the warp, in weaving).
Filament derives from classical Latin filum, meaning "thread"
Anther derives from French anthère, from classical Latin anthera, meaning "medicine extracted from the flower" in turn from Ancient Greek ἀνθηρά (), feminine of ἀνθηρός () meaning "flowery", from ἄνθος () meaning "flower"
Androecium (: androecia) derives from Ancient Greek ἀνήρ () meanin
Document 3:::
The floral axis (sometimes referred to as the receptacle) is the area of the flower upon which the reproductive organs and other ancillary organs are attached. It is also the point at the center of a floral diagram. Many flowers in division Angiosperma appear on floral axes. The floral axis can differ in form depending on the type of plant. For example, monocotyledons have a weakly developed floral axis compared to dicotyledons, and will therefore rarely possess a floral disc, which is common among dicotyledons.
Floral diagramming
Floral diagramming is a method used to graphically describe a flower. In the context of floral diagramming, the floral axis represents the center point around which the diagram is oriented. The floral axis can also be referred to as the receptacle in floral diagrams or when describing the structure of the flower. The main or mother axis in floral diagrams is not synonymous with the floral axis, rather it refers to where the stem of the flower is in relation to the diagram. The floral axis is also useful for identifying the type of symmetry that a flower exhibits.
Function
The floral axis serves as the attachment point for organs of the flower, such as the reproductive organs (pistil and stamen) and other organs such as the sepals and carpels. The floral axis acts much like a modified stem and births the organs that are attached to it. The fusion of a plant's organs and the amount of organs that are developed from the floral axis largely depends on the determinateness of the floral axis. The floral axis does perform different functions for different types of plants. For instance, with dicotyledons, the floral axis acts as a nectary, while that is not the case with monocotyledons. More specialized functions can also be performed by the floral axis. For example, in the plant Hibiscus, the floral axis is able to proliferate and produce fruit, rendering processes like self pollination unnecessary.
Document 4:::
A gynophore is the stalk of certain flowers which supports the gynoecium (the ovule-producing part of a flower), elevating it above the branching points of other floral parts.
Plant genera that have flowers with gynophores include Telopea, Peritoma arborea and Brachychiton.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Featuring a stalk-like filament that ends in an anther, what is the male reproductive organ in a flower?
A. stamen
B. angiosperms
C. petals
D. cones
Answer:
|
|
sciq-897
|
multiple_choice
|
What is the ability to cause changes in matter?
|
[
"force",
"hydrogen",
"pressure",
"energy"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Physics First is an educational program in the United States, that teaches a basic physics course in the ninth grade (usually 14-year-olds), rather than the biology course which is more standard in public schools. This course relies on the limited math skills that the students have from pre-algebra and algebra I. With these skills students study a broad subset of the introductory physics canon with an emphasis on topics which can be experienced kinesthetically or without deep mathematical reasoning. Furthermore, teaching physics first is better suited for English Language Learners, who would be overwhelmed by the substantial vocabulary requirements of Biology.
Physics First began as an organized movement among educators around 1990, and has been slowly catching on throughout the United States. The most prominent movement championing Physics First is Leon Lederman's ARISE (American Renaissance in Science Education).
Many proponents of Physics First argue that turning this order around lays the foundations for better understanding of chemistry, which in turn will lead to more comprehension of biology. Due to the tangible nature of most introductory physics experiments, Physics First also lends itself well to an introduction to inquiry-based science education, where students are encouraged to probe the workings of the world in which they live.
The majority of high schools which have implemented "physics first" do so by way of offering two separate classes, at two separate levels: simple physics concepts in 9th grade, followed by more advanced physics courses in 11th or 12th grade. In schools with this curriculum, nearly all 9th grade students take a "Physical Science", or "Introduction to Physics Concepts" course. These courses focus on concepts that can be studied with skills from pre-algebra and algebra I. With these ideas in place, students then can be exposed to ideas with more physics related content in chemistry, and other science electives. After th
Document 2:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 3:::
Applied physics is the application of physics to solve scientific or engineering problems. It is usually considered a bridge or a connection between physics and engineering.
"Applied" is distinguished from "pure" by a subtle combination of factors, such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. Applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of scientific principles in practical devices and systems and with the application of physics in other areas of science and high technology.
Examples of research and development areas
Accelerator physics
Acoustics
Atmospheric physics
Biophysics
Brain–computer interfacing
Chemistry
Chemical physics
Differentiable programming
Artificial intelligence
Scientific computing
Engineering physics
Chemical engineering
Electrical engineering
Electronics
Sensors
Transistors
Materials science and engineering
Metamaterials
Nanotechnology
Semiconductors
Thin films
Mechanical engineering
Aerospace engineering
Astrodynamics
Electromagnetic propulsion
Fluid mechanics
Military engineering
Lidar
Radar
Sonar
Stealth technology
Nuclear engineering
Fission reactors
Fusion reactors
Optical engineering
Photonics
Cavity optomechanics
Lasers
Photonic crystals
Geophysics
Materials physics
Medical physics
Health physics
Radiation dosimetry
Medical imaging
Magnetic resonance imaging
Radiation therapy
Microscopy
Scanning probe microscopy
Atomic force microscopy
Scanning tunneling microscopy
Scanning electron microscopy
Transmission electron microscopy
Nuclear physics
Fission
Fusion
Optical physics
Nonlinear optics
Quantum optics
Plasma physics
Quantum technology
Quantum computing
Quantum cryptography
Renewable energy
Space physics
Spectroscopy
See also
Applied science
Applied mathematics
Engineering
Engineering Physics
High Technology
Document 4:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
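A minimal sketch of what it means for the family of feasible states to form an antimatroid is given below: the family must contain the empty state, be closed under union, and be accessible (every non-empty state can drop some single item and remain feasible). The three-skill domain is an invented example for illustration, not one from this excerpt.
```python
# Minimal sketch of checking the antimatroid properties of a family of
# "feasible knowledge states": contains the empty state, closed under union,
# and accessible (each non-empty state stays feasible after removing one item).
from itertools import combinations

def is_union_closed(states):
    return all(a | b in states for a, b in combinations(states, 2))

def is_accessible(states):
    return all(
        any(s - {q} in states for q in s)
        for s in states if s
    )

def is_antimatroid(states):
    return frozenset() in states and is_union_closed(states) and is_accessible(states)

# Toy domain of three skills: counting (c), addition (a), multiplication (m),
# where addition requires counting and multiplication requires addition.
states = {
    frozenset(),
    frozenset("c"),
    frozenset("ca"),
    frozenset("cam"),
}
print(is_antimatroid(states))  # True for this prerequisite chain
```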
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the ability to cause changes in matter?
A. force
B. hydrogen
C. pressure
D. energy
Answer:
|
|
sciq-3387
|
multiple_choice
|
What pigment is required for photosynthesis to occur?
|
[
"xanthophyll",
"chroma",
"carotene",
"chlorophyll"
] |
D
|
Relevant Documents:
Document 0:::
The evolution of photosynthesis refers to the origin and subsequent evolution of photosynthesis, the process by which light energy is used to assemble sugars from carbon dioxide and a hydrogen and electron source such as water. The process of photosynthesis was discovered by Jan Ingenhousz, a Dutch-born British physician and scientist, who first published about it in 1779.
The first photosynthetic organisms probably evolved early in the evolutionary history of life and most likely used reducing agents such as hydrogen rather than water. There are three major metabolic pathways by which photosynthesis is carried out: C3 photosynthesis, C4 photosynthesis, and CAM photosynthesis. C3 photosynthesis is the oldest and most common form. A C3 plant uses the Calvin cycle for the initial steps that incorporate CO2 into organic material. A C4 plant prefaces the Calvin cycle with reactions that incorporate CO2 into four-carbon compounds. A CAM plant uses crassulacean acid metabolism, an adaptation for photosynthesis in arid conditions. C4 and CAM plants have special adaptations that save water.
Origin
Available evidence from geobiological studies of Archean (>2500 Ma) sedimentary rocks indicates that life existed 3500 Ma. Fossils of what are thought to be filamentous photosynthetic organisms have been dated at 3.4 billion years old, consistent with recent studies of photosynthesis. Early photosynthetic systems, such as those from green and purple sulfur and green and purple nonsulfur bacteria, are thought to have been anoxygenic, using various molecules as electron donors. Green and purple sulfur bacteria are thought to have used hydrogen and hydrogen sulfide as electron and hydrogen donors. Green nonsulfur bacteria used various amino and other organic acids. Purple nonsulfur bacteria used a variety of nonspecific organic and inorganic molecules. It is suggested that photosynthesis likely originated at low-wavelength geothermal light from acidic hydrothermal vents, Zn-tetrapyrroles w
Document 1:::
Photosystems are functional and structural units of protein complexes involved in photosynthesis. Together they carry out the primary photochemistry of photosynthesis: the absorption of light and the transfer of energy and electrons. Photosystems are found in the thylakoid membranes of plants, algae, and cyanobacteria. These membranes are located inside the chloroplasts of plants and algae, and in the cytoplasmic membrane of photosynthetic bacteria. There are two kinds of photosystems: PSI and PSII.
PSII will absorb red light, and PSI will absorb far-red light. Although photosynthetic activity will be detected when the photosystems are exposed to either red or far-red light, the photosynthetic activity will be the greatest when plants are exposed to both wavelengths of light. Studies have actually demonstrated that the two wavelengths together have a synergistic effect on the photosynthetic activity, rather than an additive one.
Each photosystem has two parts: a reaction center, where the photochemistry occurs, and an antenna complex, which surrounds the reaction center. The antenna complex contains hundreds of chlorophyll molecules which funnel the excitation energy to the center of the photosystem. At the reaction center, the energy will be trapped and transferred to produce a high energy molecule.
The main function of PSII is to efficiently split water into oxygen molecules and protons. PSII will provide a steady stream of electrons to PSI, which will boost these in energy and transfer them to NADP+ and H+ to make NADPH. The hydrogen from this NADPH can then be used in a number of different processes within the plant.
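The water-splitting chemistry summarized above can be written explicitly; per dioxygen molecule released, PSII extracts four electrons and four protons from two water molecules:
\[
2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-.
\]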
Reaction centers
Reaction centers are multi-protein complexes found within the thylakoid membrane.
At the heart of a photosystem lies the reaction center, which is an enzyme that uses light to reduce and oxidize molecules (give off and take up electrons). This reaction center is surrounded by light-harvesting complexes that enhance the absorptio
Document 2:::
The photosynthetic efficiency is the fraction of light energy converted into chemical energy during photosynthesis in green plants and algae. Photosynthesis can be described by the simplified chemical reaction
6 H2O + 6 CO2 + energy → C6H12O6 + 6 O2
where C6H12O6 is glucose (which is subsequently transformed into other sugars, starches, cellulose, lignin, and so forth). The value of the photosynthetic efficiency is dependent on how light energy is defined – it depends on whether we count only the light that is absorbed, and on what kind of light is used (see Photosynthetically active radiation). It takes eight (or perhaps ten or more) photons to use one molecule of CO2. The Gibbs free energy for converting a mole of CO2 to glucose is 114 kcal, whereas eight moles of photons of wavelength 600 nm contains 381 kcal, giving a nominal efficiency of 30%. However, photosynthesis can occur with light up to wavelength 720 nm so long as there is also light at wavelengths below 680 nm to keep Photosystem II operating (see Chlorophyll). Using longer wavelengths means less light energy is needed for the same number of photons and therefore for the same amount of photosynthesis. For actual sunlight, where only 45% of the light is in the photosynthetically active wavelength range, the theoretical maximum efficiency of solar energy conversion is approximately 11%. In actuality, however, plants do not absorb all incoming sunlight (due to reflection, respiration requirements of photosynthesis and the need for optimal solar radiation levels) and do not convert all harvested energy into biomass, which results in a maximum overall photosynthetic efficiency of 3 to 6% of total solar radiation. If photosynthesis is inefficient, excess light energy must be dissipated to avoid damaging the photosynthetic apparatus. Energy can be dissipated as heat (non-photochemical quenching), or emitted as chlorophyll fluorescence.
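The nominal 30% figure quoted above can be reproduced from the photon energy at 600 nm; the short check below uses only physical constants and the numbers given in this excerpt.
```python
# Quick numerical check of the efficiency figure quoted above: the energy of
# eight moles of 600 nm photons versus the ~114 kcal Gibbs energy per mole of
# CO2 converted to glucose-equivalent carbohydrate.
h  = 6.626e-34   # J*s, Planck constant
c  = 2.998e8     # m/s, speed of light
NA = 6.022e23    # 1/mol, Avogadro constant

photon_energy = h * c / 600e-9                  # J per 600 nm photon
kcal_per_8_mol = 8 * photon_energy * NA / 4184  # kcal in 8 mol of photons
efficiency = 114 / kcal_per_8_mol

print(f"8 mol of 600 nm photons = {kcal_per_8_mol:.0f} kcal")  # ~381 kcal
print(f"nominal efficiency = {efficiency:.0%}")                 # ~30%
```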
Typical efficiencies
Plants
Quoted values sunlight-to-biomass efficien
Document 3:::
Photosynthesis
Oxygenic photosynthesis uses two multi-subunit photosystems (I and II) located in the cell membranes of cyanobacteria and in the thylakoid membranes of chloroplasts in plants and algae. Photosystem II (PSII) has a P680 reaction centre containing chlorophyll 'a' that uses light energy to carr
Document 4:::
Carotenoids are yellow, orange, and red organic pigments that are produced by plants and algae, as well as several bacteria, archaea, and fungi. Carotenoids give the characteristic color to pumpkins, carrots, parsnips, corn, tomatoes, canaries, flamingos, salmon, lobster, shrimp, and daffodils. Over 1,100 identified carotenoids can be further categorized into two classes: xanthophylls (which contain oxygen) and carotenes (which are purely hydrocarbons and contain no oxygen).
All are derivatives of tetraterpenes, meaning that they are produced from 8 isoprene units and contain 40 carbon atoms. In general, carotenoids absorb wavelengths ranging from 400 to 550 nanometers (violet to green light). This causes the compounds to be deeply colored yellow, orange, or red. Carotenoids are the dominant pigment in autumn leaf coloration of about 15-30% of tree species, but many plant colors, especially reds and purples, are due to polyphenols.
Carotenoids serve two key roles in plants and algae: they absorb light energy for use in photosynthesis, and they provide photoprotection via non-photochemical quenching. Carotenoids that contain unsubstituted beta-ionone rings (including β-carotene, α-carotene, β-cryptoxanthin, and γ-carotene) have vitamin A activity (meaning that they can be converted to retinol). In the eye, lutein, meso-zeaxanthin, and zeaxanthin are present as macular pigments whose importance in visual function, as of 2016, remains under clinical research.
Structure and function
Carotenoids are produced by all photosynthetic organisms and are primarily used as accessory pigments to chlorophyll in the light-harvesting part of photosynthesis.
They are highly unsaturated with conjugated double bonds, which enables carotenoids to absorb light of various wavelengths. At the same time, the terminal groups regulate the polarity and properties within lipid membranes.
Most carotenoids are tetraterpenoids, regular C40 isoprenoids. Several modifications to these
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What pigment is required for photosynthesis to occur?
A. xanthophyll
B. chroma
C. carotene
D. chlorophyll
Answer:
|
|
sciq-1926
|
multiple_choice
|
Where does the most important monsoon in the world occur?
|
[
"southern asia",
"eastern aisa",
"northern africa",
"the atlantic ocean"
] |
A
|
Relevant Documents:
Document 0:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
Document 1:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 2:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college careers at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 4:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where does the most important monsoon in the world occur?
A. southern asia
B. eastern asia
C. northern africa
D. the atlantic ocean
Answer:
|
|
sciq-10493
|
multiple_choice
|
What are formed when crystals precipitate out from a liquid?
|
[
"gaseous sedimentary rocks",
"chemical sedimentary rocks",
"additive sedimentary rocks",
"diamonds"
] |
B
|
Relavent Documents:
Document 0:::
In physical chemistry, supersaturation occurs with a solution when the concentration of a solute exceeds the concentration specified by the value of solubility at equilibrium. Most commonly the term is applied to a solution of a solid in a liquid, but it can also be applied to liquids and gases dissolved in a liquid. A supersaturated solution is in a metastable state; it may return to equilibrium by separation of the excess of solute from the solution, by dilution of the solution by adding solvent, or by increasing the solubility of the solute in the solvent.
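To make the definition concrete, supersaturation is often quantified as the ratio of the actual solute concentration to the equilibrium solubility. The following is a minimal sketch (Python, using arbitrary illustrative numbers rather than measured data):
```python
# Quantifying supersaturation: a solution is supersaturated when the actual
# solute concentration c exceeds the equilibrium solubility c_eq.
# The numeric values below are illustrative placeholders, not measured data.
def supersaturation_ratio(c: float, c_eq: float) -> float:
"""Return S = c / c_eq; S > 1 indicates a supersaturated (metastable) solution."""
return c / c_eq
def relative_supersaturation(c: float, c_eq: float) -> float:
"""Return sigma = (c - c_eq) / c_eq, a common measure of the driving force for crystallization."""
return (c - c_eq) / c_eq
c, c_eq = 1.25, 1.00 # arbitrary units (e.g. mol/L), assumed for illustration
print(f"S = {supersaturation_ratio(c, c_eq):.2f}") # 1.25 -> supersaturated
print(f"sigma = {relative_supersaturation(c, c_eq):.2f}") # 0.25
```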
History
Early studies of the phenomenon were conducted with sodium sulfate, also known as Glauber's Salt because, unusually, the solubility of this salt in water may decrease with increasing temperature. Early studies have been summarised by Tomlinson. It was shown that the crystallization of a supersaturated solution does not simply come from its agitation (the previous belief), but from solid matter entering and acting as a "starting" site for crystals to form, now called "seeds". Expanding upon this, Gay-Lussac brought attention to the kinematics of salt ions and the characteristics of the container having an impact on the supersaturation state. He was also able to expand upon the number of salts with which a supersaturated solution can be obtained. Later, Henri Löwel came to the conclusion that both nuclei of the solution and the walls of the container have a catalyzing effect on the solution that causes crystallization. Explaining and providing a model for this phenomenon has been a task taken on by more recent research. Désiré Gernez contributed to this research by discovering that nuclei must be of the same salt that is being crystallized in order to promote crystallization.
Occurrence and examples
Solid precipitate, liquid solvent
A solution of a chemical compound in a liquid will become supersaturated when the temperature of the saturated solution is changed. In most cases solubility decreases wit
Document 1:::
A solid solution, a term popularly used for metals, is a homogeneous mixture of two different kinds of atoms in solid state and having a single crystal structure. Many examples can be found in metallurgy, geology, and solid-state chemistry. The word "solution" is used to describe the intimate mixing of components at the atomic level and distinguishes these homogeneous materials from physical mixtures of components. Two terms are mainly associated with solid solutions – solvents and solutes, depending on the relative abundance of the atomic species.
In general, if two compounds are isostructural then a solid solution will exist between the end members (also known as parents). For example, sodium chloride and potassium chloride have the same cubic crystal structure, so it is possible to make a pure compound with any ratio of sodium to potassium (Na1-xKx)Cl by dissolving that ratio of NaCl and KCl in water and then evaporating the solution. A member of this family is sold under the brand name Lo Salt, which is (Na0.33K0.66)Cl; hence it contains 66% less sodium than normal table salt (NaCl). The pure minerals are called halite and sylvite; a physical mixture of the two is referred to as sylvinite.
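As a rough check on the quoted sodium reduction, the sketch below (Python, using standard approximate atomic masses and an assumed 1:2 Na:K mole ratio) compares the sodium content of the mixed salt with pure NaCl on a per-formula-unit and a per-mass basis:
```python
# Rough check of the sodium reduction in a (Na1/3 K2/3)Cl solid solution
# relative to pure NaCl. Atomic masses are standard approximate values.
M_NA, M_K, M_CL = 22.99, 39.10, 35.45 # g/mol
x_na, x_k = 1 / 3, 2 / 3 # assumed Na:K mole ratio for the mixed salt
# Sodium per formula unit drops in direct proportion to its mole fraction.
mole_reduction = 1 - x_na # ~0.67, i.e. about 67% less Na per formula unit
# Sodium per gram of salt drops slightly more, because KCl is heavier than NaCl.
na_frac_nacl = M_NA / (M_NA + M_CL)
na_frac_mixed = x_na * M_NA / (x_na * M_NA + x_k * M_K + M_CL)
mass_reduction = 1 - na_frac_mixed / na_frac_nacl
print(f"per formula unit: {mole_reduction:.0%} less sodium") # ~67%
print(f"per gram of salt: {mass_reduction:.0%} less sodium") # ~72%
```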
Because minerals are natural materials, they are prone to large variations in composition. In many cases specimens are members of a solid solution family, and geologists find it more helpful to discuss the composition of the family than that of an individual specimen. Olivine is described by the formula (Mg, Fe)2SiO4, which is equivalent to (Mg1−xFex)2SiO4. The ratio of magnesium to iron varies between the two endmembers of the solid solution series: forsterite (Mg-endmember: Mg2SiO4) and fayalite (Fe-endmember: Fe2SiO4), but the ratio in olivine is not normally defined. With increasingly complex compositions the geological notation becomes significantly easier to manage than the chemical notation.
Nomenclature
The IUPAC definition of a solid solution is a "solid in which components ar
Document 2:::
In mineralogy, crystal habit is the characteristic external shape of an individual crystal or aggregate of crystals. The habit of a crystal is dependent on its crystallographic form and growth conditions, which generally creates irregularities due to limited space in the crystallizing medium (commonly in rocks).
Crystal forms
Recognizing the habit can aid in mineral identification and description, as the crystal habit is an external representation of the internal ordered atomic arrangement. Most natural crystals, however, do not display ideal habits and are commonly malformed. Hence, it is also important to describe the quality of the shape of a mineral specimen:
Euhedral: a crystal that is completely bounded by its characteristic faces, well-formed. Synonymous terms: idiomorphic, automorphic;
Subhedral: a crystal partially bounded by its characteristic faces and partially by irregular surfaces. Synonymous terms: hypidiomorphic, hypautomorphic;
Anhedral: a crystal that lacks any of its characteristic faces, completely malformed. Synonymous terms: allotriomorphic, xenomorphic.
Altering factors
Factors influencing habit include: a combination of two or more crystal forms; trace impurities present during growth; crystal twinning and growth conditions (i.e., heat, pressure, space); and specific growth tendencies such as growth striations. Minerals belonging to the same crystal system do not necessarily exhibit the same habit. Some habits of a mineral are unique to its variety and locality: For example, while most sapphires form elongate barrel-shaped crystals, those found in Montana form stout tabular crystals. Ordinarily, the latter habit is seen only in ruby. Sapphire and ruby are both varieties of the same mineral: corundum.
Some minerals may replace other existing minerals while preserving the original's habit, i.e. pseudomorphous replacement. A classic example is tiger's eye quartz, crocidolite asbestos replaced by silica. While quartz typically forms prism
Document 3:::
In chemistry, water(s) of crystallization or water(s) of hydration are water molecules that are present inside crystals. Water is often incorporated in the formation of crystals from aqueous solutions. In some contexts, water of crystallization is the total mass of water in a substance at a given temperature and is mostly present in a definite (stoichiometric) ratio. Classically, "water of crystallization" refers to water that is found in the crystalline framework of a metal complex or a salt, which is not directly bonded to the metal cation.
Upon crystallization from water, or water-containing solvents, many compounds incorporate water molecules in their crystalline frameworks. Water of crystallization can generally be removed by heating a sample but the crystalline properties are often lost.
Compared to inorganic salts, proteins crystallize with large amounts of water in the crystal lattice. A water content of 50% is not uncommon for proteins.
Applications
Knowledge of hydration is essential for calculating the masses for many compounds. The reactivity of many salt-like solids is sensitive to the presence of water.
The hydration and dehydration of salts is central to the use of phase-change materials for energy storage.
Position in the crystal structure
A salt with associated water of crystallization is known as a hydrate. The structure of hydrates can be quite elaborate, because of the existence of hydrogen bonds that define polymeric structures.
Historically, the structures of many hydrates were unknown, and the dot in the formula of a hydrate was employed to specify the composition without indicating how the water is bound. Per IUPAC's recommendations, the middle dot is not surrounded by spaces when indicating a chemical adduct. Examples:
CuSO4·5H2O – copper(II) sulfate pentahydrate
CoCl2·6H2O – cobalt(II) chloride hexahydrate
SnCl2·2H2O – tin(II) (or stannous) chloride dihydrate
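As an illustration of why water of crystallization matters when calculating masses, the short sketch below (Python, standard approximate atomic masses) estimates how much of the pentahydrate's mass is water; it is a worked example added here, not part of the source text:
```python
# Illustrative calculation of how much of a hydrate's mass is water of
# crystallization, using copper(II) sulfate pentahydrate CuSO4·5H2O.
# Atomic masses are standard approximate values.
M = {"Cu": 63.55, "S": 32.07, "O": 16.00, "H": 1.008}
m_anhydrous = M["Cu"] + M["S"] + 4 * M["O"] # CuSO4, ~159.6 g/mol
m_water = 5 * (2 * M["H"] + M["O"]) # 5 H2O, ~90.1 g/mol
m_hydrate = m_anhydrous + m_water # ~249.7 g/mol
print(f"CuSO4·5H2O molar mass ~ {m_hydrate:.1f} g/mol")
print(f"water of crystallization ~ {m_water / m_hydrate:.1%} of the mass")
```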
For many salts, the exact bonding of the water is unimportant because the water molecules are made labi
Document 4:::
In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects.
Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting.
Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete.
Study
Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are formed when crystals precipitate out from a liquid?
A. gaseous sedimentary rocks
B. chemical sedimentary rocks
C. additive sedimentary rocks
D. diamonds
Answer:
|
|
sciq-6256
|
multiple_choice
|
Misconceptions about what theory contribute to the controversy that still surrounds this fundamental principle of biology?
|
[
"cycle of evolution",
"darwin on evolution",
"brain of evolution",
"theory of evolution"
] |
D
|
Relavent Documents:
Document 0:::
Tinbergen's four questions, named after 20th century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour. These are also commonly referred to as levels of analysis. It suggests that an integrative understanding of behaviour must include ultimate (evolutionary) explanations, in particular:
behavioural adaptive functions
phylogenetic history; and the proximate explanations
underlying physiological mechanisms
ontogenetic/developmental history.
Four categories of questions and explanations
When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny).
This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem.
Evolutionary (ultimate) explanations
First question: Function (adaptation)
Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive.
The literature conceptualizes the relationship between function and evolution in two ways. On the one hand, function
Document 1:::
The history of life on Earth seems to show a clear trend; for example, it seems intuitive that there is a trend towards increasing complexity in living organisms. More recently evolved organisms, such as mammals, appear to be much more complex than organisms, such as bacteria, which have existed for a much longer period of time. However, there are theoretical and empirical problems with this claim. From a theoretical perspective, it appears that there is no reason to expect evolution to result in any largest-scale trends, although small-scale trends, limited in time and space, are expected (Gould, 1997). From an empirical perspective, it is difficult to measure complexity and, when it has been measured, the evidence does not support a largest-scale trend (McShea, 1996).
History
Many of the founding figures of evolution supported the idea of evolutionary progress. The idea has since fallen from favour, but the work of Francisco J. Ayala and Michael Ruse suggests it is still influential.
Hypothetical largest-scale trends
McShea (1998) discusses eight features of organisms that might indicate largest-scale trends in evolution: entropy, energy intensiveness, evolutionary versatility, developmental depth, structural depth, adaptedness, size, complexity. He calls these "live hypotheses", meaning that trends in these features are currently being considered by evolutionary biologists. McShea observes that the most popular hypothesis, among scientists, is that there is a largest-scale trend towards increasing complexity.
Evolutionary theorists agree that there are local trends in evolution, such as increasing brain size in hominids, but these directional changes do not persist indefinitely, and trends in opposite directions also occur (Gould, 1997). Evolution causes organisms to adapt to their local environment; when the environment changes, the direction of the trend may change. The question of whether there is evolutionary progress is better formulated as the question of whether
Document 2:::
Recurring cultural, political, and theological rejection of evolution by religious groups exists regarding the origins of the Earth, of humanity, and of other life. In accordance with creationism, species were once widely believed to be fixed products of divine creation, but since the mid-19th century, evolution by natural selection has been established by the scientific community as an empirical scientific fact.
Any such debate is universally considered religious, not scientific, by professional scientific organizations worldwide: in the scientific community, evolution is accepted as fact, and efforts to sustain the traditional view are universally regarded as pseudoscience. While the controversy has a long history, today it has retreated to be mainly over what constitutes good science education, with the politics of creationism primarily focusing on the teaching of creationism in public education. Among majority-Christian countries, the debate is most prominent in the United States, where it may be portrayed as part of a culture war. Parallel controversies also exist in some other religious communities, such as the more fundamentalist branches of Judaism and Islam. In Europe and elsewhere, creationism is less widespread (notably, the Catholic Church and Anglican Communion both accept evolution), and there is much less pressure to teach it as fact.
Christian fundamentalists reject the evidence of common descent of humans and other animals as demonstrated in modern paleontology, genetics, histology and cladistics and those other sub-disciplines which are based upon the conclusions of modern evolutionary biology, geology, cosmology, and other related fields. They argue for the Abrahamic accounts of creation, and, in order to attempt to gain a place alongside evolutionary biology in the science classroom, have developed a rhetorical framework of "creation science". In the landmark Kitzmiller v. Dover, the purported basis of scientific creationism was judged to be a
Document 3:::
The status of creation and evolution in public education has been the subject of substantial debate and conflict in legal, political, and religious circles. Globally, there are a wide variety of views on the topic. Most western countries have legislation that mandates only evolutionary biology is to be taught in the appropriate scientific syllabuses.
Overview
While many Christian denominations do not raise theological objections to the modern evolutionary synthesis as an explanation for the present forms of life on planet Earth, various socially conservative, traditionalist, and fundamentalist religious sects and political groups within Christianity and Islam have objected vehemently to the study and teaching of biological evolution. Some adherents of these Christian and Islamic religious sects or political groups are passionately opposed to the consensus view of the scientific community. Literal interpretations of religious texts are the greatest cause of conflict with evolutionary and cosmological investigations and conclusions.
Internationally, biological evolution is taught in science courses with limited controversy, with the exception of a few areas of the United States and several Muslim-majority countries, primarily Turkey. In the United States, the Supreme Court has ruled the teaching of creationism as science in public schools to be unconstitutional, irrespective of how it may be purveyed in theological or religious instruction. In the United States, intelligent design (ID) has been represented as an alternative explanation to evolution in recent decades, but its "demonstrably religious, cultural, and legal missions" have been ruled unconstitutional by a lower court.
By country
Australia
Although creationist views are popular among religious education teachers and creationist teaching materials have been distributed by volunteers in some schools, many Australian scientists take an aggressive stance supporting the right of teachers to teach the theory
Document 4:::
The Altenberg Workshops in Theoretical Biology are expert meetings focused on a key issue of biological theory, hosted by the Konrad Lorenz Institute for Evolution and Cognition Research (KLI) since 1996. The workshops are organized by leading experts in their field, who invite a group of international top level scientists as participants for a 3-day working meeting in the Lorenz Mansion at Altenberg near Vienna, Austria. By this procedure the KLI intends to generate new conceptual advances and research initiatives in the biosciences, which, due to their explicit interdisciplinary nature, are attractive to a wide variety of scientists from practically all fields of biology and the neighboring disciplines.
Workshops and their topics
Cultural Niche Construction. Organized by Kevin Laland and Mike O´Brien. September 2011
Strategic Interaction in Humans and Other Animals. Organized by Simon Huttegger and Brian Skyrms. September 2011
The Meaning of "Theory" in Biology. Organized by Massimo Pigliucci, Kim Sterelny, and Werner Callebaut. June 2011
Biological and Physical Constraints on the Evolution of Form in Plants and Animals. Organized by Jeffrey H. Schwartz and Bruno Maresca. September 2010
Scaffolding in Evolution, Culture, and Cognition. Organized by Linnda Caporael, James Griesemer, and William Wimsatt. July 2010
Models of Man for Evolutionary Economics. Organized by Werner Callebaut, Christophe Heintz, and Luigi Marengo. September 2009
Human EvoDevo: The Role of Development in Human Evolution. Organized by Philipp Gunz and Philipp Mitteroecker. September 2009
Origins of EvoDevo - A tribute to Pere Alberch. Organized by Gerd B. Müller and Diego Rasskin-Gutman. September 2008
Measuring Biology - Quantitative Methods: Past and Future. Organized by Fred L. Bookstein and Katrin Schäfer. September 2008
Toward an Extended Evolutionary Synthesis Organized by Massimo Pigliucci and Gerd B. Müller. July 2008
Innovation in Cultural Systems - Contributions from Evolutionary A
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Misconceptions about what theory contribute to the controversy that still surrounds this fundamental principle of biology?
A. cycle of evolution
B. darwin on evolution
C. brain of evolution
D. theory of evolution
Answer:
|
|
sciq-6789
|
multiple_choice
|
What force holds planets in their orbits?
|
[
"magnetism",
"gravity",
"centrifuge",
"Big Bang"
] |
B
|
Relavent Documents:
Document 0:::
This article is a list of notable unsolved problems in astronomy. Some of these problems are theoretical, meaning that existing theories may be incapable of explaining certain observed phenomena or experimental results. Others are experimental, meaning that experiments necessary to test proposed theory or investigate a phenomenon in greater detail have not yet been performed. Some pertain to unique events or occurrences that have not repeated themselves and whose causes remain unclear.
Planetary astronomy
Our solar system
Orbiting bodies and rotation:
Are there any non-dwarf planets beyond Neptune?
Why do extreme trans-Neptunian objects have elongated orbits?
Rotation rate of Saturn:
Why does the magnetosphere of Saturn rotate at a rate close to that at which the planet's clouds rotate?
What is the rotation rate of Saturn's deep interior?
Satellite geomorphology:
What is the origin of the chain of high mountains that closely follows the equator of Saturn's moon, Iapetus?
Are the mountains the remnant of hot and fast-rotating young Iapetus?
Are the mountains the result of material (either from the rings of Saturn or its own ring) that over time collected upon the surface?
Extra-solar
How common are Solar System-like planetary systems? Some observed planetary systems contain Super-Earths and Hot Jupiters that orbit very close to their stars. Systems with Jupiter-like planets in Jupiter-like orbits appear to be rare. There are several possibilities why Jupiter-like orbits are rare, including that data is lacking or the grand tack hypothesis.
Stellar astronomy and astrophysics
Solar cycle:
How does the Sun generate its periodically reversing large-scale magnetic field?
How do other Sol-like stars generate their magnetic fields, and what are the similarities and differences between stellar activity cycles and that of the Sun?
What caused the Maunder Minimum and other grand minima, and how does the solar cycle recover from a minimum state?
Coronal heat
Document 1:::
This is a list of most likely gravitationally rounded objects of the Solar System, which are objects that have a rounded, ellipsoidal shape due to their own gravity (but are not necessarily in hydrostatic equilibrium). Apart from the Sun itself, these objects qualify as planets according to common geophysical definitions of that term. The sizes of these objects range over three orders of magnitude in radius, from planetary-mass objects like dwarf planets and some moons to the planets and the Sun. This list does not include small Solar System bodies, but it does include a sample of possible planetary-mass objects whose shapes have yet to be determined. The Sun's orbital characteristics are listed in relation to the Galactic Center, while all other objects are listed in order of their distance from the Sun.
Star
The Sun is a G-type main-sequence star. It contains almost 99.9% of all the mass in the Solar System.
Planets
In 2006, the International Astronomical Union (IAU) defined a planet as a body in orbit around the Sun that was large enough to have achieved hydrostatic equilibrium and to have "cleared the neighbourhood around its orbit". The practical meaning of "cleared the neighborhood" is that a planet is comparatively massive enough for its gravitation to control the orbits of all objects in its vicinity. In practice, the term "hydrostatic equilibrium" is interpreted loosely. Mercury is round but not actually in hydrostatic equilibrium, but it is universally regarded as a planet nonetheless.
According to the IAU's explicit count, there are eight planets in the Solar System; four terrestrial planets (Mercury, Venus, Earth, and Mars) and four giant planets, which can be divided further into two gas giants (Jupiter and Saturn) and two ice giants (Uranus and Neptune). When excluding the Sun, the four giant planets account for more than 99% of the mass of the Solar System.
Dwarf planets
Dwarf planets are bodies orbiting the Sun that are massive and warm eno
Document 2:::
In physics, the -body problem is the problem of predicting the individual motions of a group of celestial objects interacting with each other gravitationally. Solving this problem has been motivated by the desire to understand the motions of the Sun, Moon, planets, and visible stars. In the 20th century, understanding the dynamics of globular cluster star systems became an important -body problem. The -body problem in general relativity is considerably more difficult to solve due to additional factors like time and space distortions.
The classical physical problem can be informally stated as the following:
The two-body problem has been completely solved and is discussed below, as well as the famous restricted three-body problem.
History
Knowing three orbital positions of a planet's orbit – positions obtained by Sir Isaac Newton from astronomer John Flamsteed – Newton was able to produce an equation, by straightforward analytical geometry, to predict a planet's motion; i.e., to give its orbital properties: position, orbital diameter, period and orbital velocity. Having done so, he and others soon discovered, over the course of a few years, that those equations of motion did not predict some orbits correctly or even very well. Newton realized that this was because gravitational interactive forces amongst all the planets were affecting all their orbits.
The aforementioned revelation strikes directly at the core of what the n-body issue physically is: as Newton understood, it is not enough to just provide the beginning location and velocity, or even three orbital positions, in order to establish a planet's actual orbit; one must also be aware of the gravitational interaction forces. Thus came the awareness and rise of the -body "problem" in the early 17th century. These gravitational attractive forces do conform to Newton's laws of motion and to his law of universal gravitation, but the many multiple (-body) interactions have historically made any exact solution intracta
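Because no general closed-form solution exists, n-body orbits are normally computed numerically. The following is a minimal sketch of one common approach, a leapfrog (kick-drift-kick) integrator for Newtonian point masses; the units, masses, and initial conditions are assumed purely for illustration, and this is not a production-quality solar-system integrator:
```python
# Minimal leapfrog (kick-drift-kick) integrator for the Newtonian n-body
# problem. Units, masses, and initial conditions are illustrative only.
import numpy as np
G = 1.0 # gravitational constant in the chosen unit system (assumed)
def accelerations(pos, masses):
"""Pairwise Newtonian gravitational accelerations for all bodies."""
n = len(masses)
acc = np.zeros_like(pos)
for i in range(n):
for j in range(n):
if i != j:
r = pos[j] - pos[i]
acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
return acc
def leapfrog(pos, vel, masses, dt, steps):
"""Advance positions and velocities by `steps` kick-drift-kick steps."""
acc = accelerations(pos, masses)
for _ in range(steps):
vel += 0.5 * dt * acc # half kick
pos += dt * vel # drift
acc = accelerations(pos, masses)
vel += 0.5 * dt * acc # half kick
return pos, vel
# Two-body example: a light "planet" on a circular orbit around a unit mass.
masses = np.array([1.0, 1e-3])
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
vel = np.array([[0.0, 0.0], [0.0, 1.0]]) # v = sqrt(G*M/r) for a circular orbit
pos, vel = leapfrog(pos, vel, masses, dt=1e-3, steps=10_000)
print(pos[1]) # the planet should remain near radius ~1 from the central mass
```
Leapfrog is often preferred over plain Euler stepping for orbital mechanics because it is symplectic, so the orbital energy does not drift systematically over long integrations.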
Document 3:::
Physics education or physics teaching refers to the education methods currently used to teach physics. The occupation is called physics educator or physics teacher. Physics education research refers to an area of pedagogical research that seeks to improve those methods. Historically, physics has been taught at the high school and college level primarily by the lecture method together with laboratory exercises aimed at verifying concepts taught in the lectures. These concepts are better understood when lectures are accompanied with demonstration, hand-on experiments, and questions that require students to ponder what will happen in an experiment and why. Students who participate in active learning for example with hands-on experiments learn through self-discovery. By trial and error they learn to change their preconceptions about phenomena in physics and discover the underlying concepts. Physics education is part of the broader area of science education.
Ancient Greece
Aristotle wrote what is considered now as the first textbook of physics. Aristotle's ideas were taught unchanged until the Late Middle Ages, when scientists started making discoveries that didn't fit them. For example, Copernicus' discovery contradicted Aristotle's idea of an Earth-centric universe. Aristotle's ideas about motion weren't displaced until the end of the 17th century, when Newton published his ideas.
Today's physics students often think of physics concepts in Aristotelian terms, despite being taught only Newtonian concepts.
Hong Kong
High schools
In Hong Kong, physics is a subject for public examination. Local students in Form 6 take the public exam of Hong Kong Diploma of Secondary Education (HKDSE).
Compared with other syllabuses such as GCSE and GCE, which cover a wider and broader range of topics, the Hong Kong syllabus goes into greater depth and involves more challenging calculations. Topics are narrowed down to a smaller number than in the A-level due to insufficient teaching
Document 4:::
The stability of the Solar System is a subject of much inquiry in astronomy. Though the planets have been stable when historically observed, and will be in the short term, their weak gravitational effects on one another can add up in unpredictable ways.
For this reason (among others), the Solar System is chaotic in the technical sense of mathematical chaos theory, and even the most precise long-term models for the orbital motion of the Solar System are not valid over more than a few tens of millions of years.
The Solar System is stable in human terms, and far beyond, given that it is unlikely any of the planets will collide with each other or be ejected from the system in the next few billion years, and that Earth's orbit will be relatively stable.
Since Newton's law of gravitation (1687), mathematicians and astronomers (such as Pierre-Simon Laplace, Joseph Louis Lagrange, Carl Friedrich Gauss, Henri Poincaré, Andrey Kolmogorov, Vladimir Arnold, and Jürgen Moser) have searched for evidence for the stability of the planetary motions, and this quest led to many mathematical developments and several successive "proofs" of stability of the Solar System.
Overview and challenges
The orbits of the planets are open to long-term variations. Modeling the Solar System is a case of the n-body problem of physics, which is generally unsolvable except by numerical simulation.
Resonance
An orbital resonance happens when any two periods have a simple numerical ratio. The most fundamental period for an object in the Solar System is its orbital period, and orbital resonances pervade the Solar System. In 1867, the American astronomer Daniel Kirkwood noticed that asteroids in the asteroid belt are not randomly distributed. There were distinct gaps in the belt at locations that corresponded to resonances with Jupiter. For example, there were no asteroids at the 3:1 resonance — a distance of about 2.5 AU — or at the 2:1 resonance at about 3.3 AU. These are now known as the Kirkwood gaps. Some asteroids
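The resonance locations follow directly from Kepler's third law: a body completing p orbits for every q Jupiter orbits has a semi-major axis a = a_J (q/p)^(2/3). A minimal sketch (Python, with Jupiter's semi-major axis of 5.20 AU as the only input) reproduces the approximate gap positions:
```python
# Locations of mean-motion resonances with Jupiter (Kirkwood gaps) from
# Kepler's third law: T is proportional to a**(3/2), so a p:q resonance
# (asteroid:Jupiter orbit counts) sits at a = a_J * (q / p)**(2 / 3).
A_JUPITER = 5.20 # semi-major axis of Jupiter in AU
def resonance_semi_major_axis(p: int, q: int) -> float:
"""Semi-major axis (AU) where a body completes p orbits per q Jupiter orbits."""
return A_JUPITER * (q / p) ** (2 / 3)
for p, q in [(3, 1), (5, 2), (7, 3), (2, 1)]:
print(f"{p}:{q} resonance ~ {resonance_semi_major_axis(p, q):.2f} AU")
# Expected output is roughly 2.50, 2.82, 2.96 and 3.28 AU, matching the
# observed Kirkwood gaps in the asteroid belt.
```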
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What force holds planets in their orbits?
A. magnetism
B. gravity
C. centrifuge
D. Big Bang
Answer:
|
|
sciq-1407
|
multiple_choice
|
The lymphatic system helps return fluid that leaks from the blood vessels back to what system?
|
[
"nervous",
"pulmonary",
"cardiovascular",
"peripheral"
] |
C
|
Relavent Documents:
Document 0:::
The blood circulatory system is a system of organs that includes the heart, blood vessels, and blood which is circulated throughout the entire body of a human or other vertebrate. It includes the cardiovascular system, or vascular system, that consists of the heart and blood vessels (from Greek kardia meaning heart, and from Latin vascula meaning vessels). The circulatory system has two divisions, a systemic circulation or circuit, and a pulmonary circulation or circuit. Some sources use the terms cardiovascular system and vascular system interchangeably with the circulatory system.
The network of blood vessels are the great vessels of the heart including large elastic arteries, and large veins; other arteries, smaller arterioles, capillaries that join with venules (small veins), and other veins. The circulatory system is closed in vertebrates, which means that the blood never leaves the network of blood vessels. Some invertebrates such as arthropods have an open circulatory system. Diploblasts such as sponges, and comb jellies lack a circulatory system.
Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets; it is circulated around the body carrying oxygen and nutrients to the tissues and collecting and disposing of waste materials. Circulated nutrients include proteins and minerals and other components include hemoglobin, hormones, and gases such as oxygen and carbon dioxide. These substances provide nourishment, help the immune system to fight diseases, and help maintain homeostasis by stabilizing temperature and natural pH.
In vertebrates, the lymphatic system is complementary to the circulatory system. The lymphatic system carries excess plasma (filtered from the circulatory system capillaries as interstitial fluid between cells) away from the body tissues via accessory routes that return excess fluid back to blood circulation as lymph. The lymphatic system is a subsystem that is essential for the functioning of the bloo
Document 1:::
In haemodynamics, the body must respond to physical activities, external temperature, and other factors by homeostatically adjusting its blood flow to deliver nutrients such as oxygen and glucose to stressed tissues and allow them to function. Haemodynamic response (HR) allows the rapid delivery of blood to active neuronal tissues. The brain consumes large amounts of energy but does not have a reservoir of stored energy substrates. Since higher processes in the brain occur almost constantly, cerebral blood flow is essential for the maintenance of neurons, astrocytes, and other cells of the brain. This coupling between neuronal activity and blood flow is also referred to as neurovascular coupling.
Vascular anatomy overview
In order to understand how blood is delivered to cranial tissues, it is important to understand the vascular anatomy of the space itself. Large cerebral arteries in the brain split into smaller arterioles, also known as pial arteries. These consist of endothelial cells and smooth muscle cells, and as these pial arteries further branch and run deeper into the brain, they associate with glial cells, namely astrocytes. The intracerebral arterioles and capillaries are unlike systemic arterioles and capillaries in that they do not readily allow substances to diffuse through them; they are connected by tight junctions in order to form the blood brain barrier (BBB). Endothelial cells, smooth muscle, neurons, astrocytes, and pericytes work together in the brain in order to maintain the BBB while still delivering nutrients to tissues and adjusting blood flow in the intracranial space to maintain homeostasis. As they work as a functional neurovascular unit, alterations in their interactions at the cellular level can impair HR in the brain and lead to deviations in normal nervous function.
Mechanisms
Various cell types play a role in HR, including astrocytes, smooth muscle cells, endothelial cells of blood vessels, and pericytes. These cells control whether th
Document 2:::
The Starling principle holds that extracellular fluid movements between blood and tissues are determined by differences in hydrostatic pressure and colloid osmotic (oncotic) pressure between plasma inside microvessels and interstitial fluid outside them. The Starling Equation, proposed many years after the death of Starling, describes that relationship in mathematical form and can be applied to many biological and non-biological semipermeable membranes. The classic Starling principle and the equation that describes it have in recent years been revised and extended.
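For reference, the classic form of the equation (written here in standard physiology notation; the symbols and their glosses are an addition, not part of the excerpt above) is:
```latex
% Classic Starling equation (standard physiology notation, added for reference):
% J_v : net fluid filtration rate across the capillary wall
% K_f : filtration coefficient (hydraulic conductance x surface area)
% P_c, P_i : capillary and interstitial hydrostatic pressures
% \pi_c, \pi_i : capillary and interstitial colloid osmotic (oncotic) pressures
% \sigma : reflection coefficient for plasma proteins (between 0 and 1)
J_v = K_f \left[ (P_c - P_i) - \sigma (\pi_c - \pi_i) \right]
```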
Every day around 8 litres of water (solvent) containing a variety of small molecules (solutes) leaves the blood stream of an adult human and perfuses the cells of the various body tissues. Interstitial fluid drains by afferent lymph vessels to one of the regional lymph node groups, where around 4 litres per day is reabsorbed to the blood stream. The remainder of the lymphatic fluid is rich in proteins and other large molecules and rejoins the blood stream via the thoracic duct which empties into the great veins close to the heart. Filtration from plasma to interstitial (or tissue) fluid occurs in microvascular capillaries and post-capillary venules. In most tissues the micro vessels are invested with a continuous internal surface layer that includes a fibre matrix now known as the endothelial glycocalyx whose interpolymer spaces function as a system of small pores, radius circa 5 nm. Where the endothelial glycocalyx overlies a gap in the junction molecules that bind endothelial cells together (inter endothelial cell cleft), the plasma ultrafiltrate may pass to the interstitial space, leaving larger molecules reflected back into the plasma.
A small number of continuous capillaries are specialised to absorb solvent and solutes from interstitial fluid back into the blood stream through fenestrations in endothelial cells, but the volume of solvent absorbed every day is small.
Discontinuous capillaries as
Document 3:::
The endothelium (: endothelia) is a single layer of squamous endothelial cells that line the interior surface of blood vessels and lymphatic vessels. The endothelium forms an interface between circulating blood or lymph in the lumen and the rest of the vessel wall. Endothelial cells form the barrier between vessels and tissue and control the flow of substances and fluid into and out of a tissue.
Endothelial cells in direct contact with blood are called vascular endothelial cells whereas those in direct contact with lymph are known as lymphatic endothelial cells. Vascular endothelial cells line the entire circulatory system, from the heart to the smallest capillaries.
These cells have unique functions that include fluid filtration, such as in the glomerulus of the kidney, blood vessel tone, hemostasis, neutrophil recruitment, and hormone trafficking. Endothelium of the interior surfaces of the heart chambers is called endocardium. An impaired function can lead to serious health issues throughout the body.
Structure
The endothelium is a thin layer of single flat (squamous) cells that line the interior surface of blood vessels and lymphatic vessels.
Endothelium is of mesodermal origin. Both blood and lymphatic capillaries are composed of a single layer of endothelial cells called a monolayer. In straight sections of a blood vessel, vascular endothelial cells typically align and elongate in the direction of fluid flow.
Terminology
The Foundational Model of Anatomy, an index of terms used to describe anatomical structures, makes a distinction between endothelial cells and epithelial cells on the basis of which tissues they develop from, and states that the presence of vimentin rather than keratin filaments separates these from epithelial cells. Many have considered the endothelium a specialized epithelial tissue.
Function
The endothelium forms an interface between circulating blood or lymph in the lumen and the rest of the vessel wall. This forms a barrier between v
Document 4:::
Pulmocutaneous circulation is part of the amphibian circulatory system. It is responsible for directing blood to the skin and lungs. Blood flows from the ventricle into an artery called the conus arteriosus and from there into either the left or right truncus arteriosus. They in turn each split the ventricle's output into the pulmocutaneous circuit and the systemic circuit.
See also
Double circulatory system
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The lymphatic system helps return fluid that leaks from the blood vessels back to what system?
A. nervous
B. pulmonary
C. cardiovascular
D. peripheral
Answer:
|
|
sciq-6540
|
multiple_choice
|
What events, resulting in death of over half of animal species, have occurred on earth at least five times in the past 540 million years?
|
[
"mass migrations",
"spontaneous mutations",
"microevolutions",
"mass extinctions"
] |
D
|
Relavent Documents:
Document 0:::
This article is a list of biological species, subspecies, and evolutionary significant units that are known to have become extinct during the Holocene, the current geologic epoch, ordered by their known or approximate date of disappearance from oldest to most recent.
The Holocene is considered to have started with the Holocene glacial retreat around 11,650 years Before Present (c. 9700 BC). It is characterized by a general trend towards global warming, the expansion of anatomically modern humans (Homo sapiens) to all emerged land masses, the appearance of agriculture and animal husbandry, and a reduction in global biodiversity. The latter, dubbed the sixth mass extinction in Earth history, is largely attributed to increased human population and activity, and may have already started during the preceding Pleistocene epoch with the demise of the Pleistocene megafauna.
The following list is incomplete by necessity, since the majority of extinctions are thought to be undocumented, and for many others there isn't a definitive, widely accepted last, or most recent record. According to the species-area theory, the present rate of extinction may be up to 140,000 species per year.
10th millennium BC
9th millennium BC
8th millennium BC
7th millennium BC
6th millennium BC
5th millennium BC
4th millennium BC
3rd millennium BC
2nd millennium BC
1st millennium BC
1st millennium CE
1st–5th centuries
6th–10th centuries
2nd millennium CE
11th-12th century
13th-14th century
15th-16th century
17th century
18th century
19th century
1800s-1820s
1830s-1840s
1850s-1860s
1870s
1880s
1890s
20th century
1900s
1910s
1920s
1930s
1940s
1950s
1960s
1970s
1980s
1990s
3rd millennium CE
21st century
2000s
2010s
See also
List of extinct animals
Extinction event
Quaternary extinction event
Holocene extinction
Timeline of the evolutionary history of life
Timeline of environmental history
Index of environmental articles
List of environmental issues
Document 1:::
The history of life on Earth is closely associated with environmental change on multiple spatial and temporal scales. Climate change is a long-term change in the average weather patterns that have come to define Earth’s local, regional and global climates. These changes have a broad range of observed effects that are synonymous with the term. Climate change is any significant long term change in the expected pattern, whether due to natural variability or as a result of human activity. Predicting the effects that climate change will have on plant biodiversity can be achieved using various models, however bioclimatic models are most commonly used.
Environmental conditions play a key role in defining the function and geographic distributions of plants, in combination with other factors, thereby modifying patterns of biodiversity. Changes in long term environmental conditions that can be collectively coined climate change are known to have had enormous impacts on current plant diversity patterns; further impacts are expected in the future. It is predicted that climate change will remain one of the major drivers of biodiversity patterns in the future. Climate change is thought to be one of several factors causing the currently ongoing human-triggered mass extinction, which is changing the distribution and abundance of many plants.
Palaeo context
The Earth has experienced a constantly changing climate in the time since plants first evolved. In comparison to the present day, this history has seen Earth as cooler, warmer, drier and wetter, and (carbon dioxide) concentrations have been both higher and lower. These changes have been reflected by constantly shifting vegetation, for example forest communities dominating most areas in interglacial periods, and herbaceous communities dominating during glacial periods. It has been shown through fossil records that past climatic change has been a major driver of the processes of speciation and extinction. The best known example
Document 2:::
Background extinction rate, also known as the normal extinction rate, refers to the standard rate of extinction in Earth's geological and biological history before humans became a primary contributor to extinctions. This is primarily the pre-human extinction rate during the periods between major extinction events. To date, five mass extinctions have occurred over Earth's history, each arising from a variety of causes.
Overview
Extinctions are a normal part of the evolutionary process, and the background extinction rate is a measurement of "how often" they naturally occur. Normal extinction rates are often used as a comparison to present-day extinction rates, to illustrate the higher frequency of extinction today than during past periods between extinction events.
Background extinction rates have not remained constant, although changes are measured over geological time, covering millions of years.
Measurement
Background extinction rates are typically measured in order to give a specific classification to a species, and this is obtained over a certain period of time. There are three different ways to calculate the background extinction rate. The first is simply the number of species that normally go extinct over a given period of time. For example, at the background rate one species of bird will go extinct every estimated 400 years. Another way the extinction rate can be given is in million species years (MSY). For example, there is approximately one extinction estimated per million species years. From a purely mathematical standpoint this means that if there are a million species on planet Earth, one would go extinct every year, while if there was only one species it would go extinct in one million years, etc. The third way is in giving species survival rates over time. For example, given normal extinction rates species typically exist for 5–10 million years before going extinct.
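The million-species-years bookkeeping is a simple rate conversion. A minimal sketch (Python), assuming an illustrative standing diversity of 10,000 species rather than any real census figure, is:
```python
# Converting a background rate expressed in extinctions per million
# species-years (E/MSY) into expected extinctions per calendar year.
# The species count below is an assumed illustrative figure, not a census.
def extinctions_per_year(rate_e_msy: float, n_species: int) -> float:
"""Expected extinctions per year for a given E/MSY rate and standing diversity."""
return rate_e_msy * n_species / 1_000_000
rate = 1.0 # ~1 E/MSY, the commonly cited background rate
n_species = 10_000 # assumed number of species in some group, for illustration
per_year = extinctions_per_year(rate, n_species)
print(f"expected extinctions per year: {per_year}") # 0.01
print(f"i.e. roughly one extinction every {1 / per_year:.0f} years") # ~100 years
```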
Lifespan estimates
Some species lifespan es
Document 3:::
Megaevolution describes the most dramatic events in evolution. It is no longer suggested that the evolutionary processes involved are necessarily special, although in some cases they might be. Whereas macroevolution can apply to relatively modest changes that produced diversification of species and genera and are readily compared to microevolution, "megaevolution" is used for great changes. Megaevolution has been extensively debated because it has been seen as a possible objection to Charles Darwin's theory of gradual evolution by natural selection.
A list was prepared by John Maynard Smith and Eörs Szathmáry which they called The Major Transitions in Evolution. On the 1999 edition of the list they included:
Replicating molecules: change to populations of molecules in protocells
Independent replicators leading to chromosomes
RNA as gene and enzyme change to DNA genes and protein enzymes
Bacterial cells (prokaryotes) leading to cells (eukaryotes) with nuclei and organelles
Asexual clones leading to sexual populations
Single-celled organisms leading to fungi, plants and animals
Solitary individuals leading to colonies with non-reproducing castes (termites, ants & bees)
Primate societies leading to human societies with language
Some of these topics had been discussed before.
Numbers one to six on the list are events which are of huge importance, but about which we know relatively little. All occurred before (and mostly very much before) the fossil record started, or at least before the Phanerozoic eon.
Numbers seven and eight on the list are of a different kind from the first six, and have generally not been considered by the other authors. Number four is of a type which is not covered by traditional evolutionary theory: the origin of eukaryotic cells is probably due to symbiosis between prokaryotes. This is a kind of evolution which must be a rare event.
The Cambrian radiation example
The Cambrian explosion or Cambrian radiation was the relatively rapid appeara
Document 4:::
Extinction is the termination of a taxon by the death of its last member. A taxon may become functionally extinct before the death of its last member if it loses the capacity to reproduce and recover. Because a species' potential range may be very large, determining this moment is difficult, and is usually done retrospectively. This difficulty leads to phenomena such as Lazarus taxa, where a species presumed extinct abruptly "reappears" (typically in the fossil record) after a period of apparent absence.
More than 99% of all species that ever lived on Earth, amounting to over five billion species, are estimated to have died out. It is estimated that there are currently around 8.7 million species of eukaryote globally, and possibly many times more if microorganisms, like bacteria, are included. Notable extinct animal species include non-avian dinosaurs, saber-toothed cats, dodos, mammoths, ground sloths, thylacines, trilobites, and golden toads.
Through evolution, species arise through the process of speciation—where new varieties of organisms arise and thrive when they are able to find and exploit an ecological niche—and species become extinct when they are no longer able to survive in changing conditions or against superior competition. The relationship between animals and their ecological niches has been firmly established. A typical species becomes extinct within 10 million years of its first appearance, although some species, called living fossils, survive with little to no morphological change for hundreds of millions of years.
Mass extinctions are relatively rare events; however, isolated extinctions of species and clades are quite common, and are a natural part of the evolutionary process. Only recently have extinctions been recorded and scientists have become alarmed at the current high rate of extinctions. Most species that become extinct are never scientifically documented. Some scientists estimate that up to half of presently existing plant and animal
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What events, resulting in death of over half of animal species, have occurred on earth at least five times in the past 540 million years?
A. mass migrations
B. spontaneous mutations
C. microevolutions
D. mass extinctions
Answer:
|
|
sciq-7440
|
multiple_choice
|
Any unused energy in food, whether it comes from carbohydrates, proteins, or lipids, is stored in the body where?
|
[
"spleen",
"kidneys",
"bones",
"fat"
] |
D
|
Relavent Documents:
Document 0:::
Relatively speaking, the brain consumes an immense amount of energy in comparison to the rest of the body. The mechanisms involved in the transfer of energy from foods to neurons are likely to be fundamental to the control of brain function. Human bodily processes, including the brain, all require both macronutrients, as well as micronutrients.
Insufficient intake of selected vitamins, or certain metabolic disorders, may affect cognitive processes by disrupting the nutrient-dependent processes within the body that are associated with the management of energy in neurons, which can subsequently affect synaptic plasticity, or the ability to encode new memories.
Macronutrients
The human brain requires nutrients obtained from the diet to develop and sustain its physical structure and cognitive functions. Additionally, the brain requires caloric energy predominantly derived from the primary macronutrients to operate. The three primary macronutrients are carbohydrates, proteins, and fats. Each macronutrient can impact cognition through multiple mechanisms, including glucose and insulin metabolism, neurotransmitter actions, oxidative stress and inflammation, and the gut-brain axis. Inadequate macronutrient consumption or proportion could impair optimal cognitive functioning and have long-term health implications.
Carbohydrates
Through digestion, dietary carbohydrates are broken down and converted into glucose, which is the sole energy source for the brain. Optimal brain function relies on adequate carbohydrate consumption, as carbohydrates provide the quickest source of glucose for the brain. Glucose deficiencies such as hypoglycaemia reduce available energy for the brain and impair all cognitive processes and performance. Additionally, situations with high cognitive demand, such as learning a new task, increase brain glucose utilization, depleting blood glucose stores and initiating the need for supplementation.
Complex carbohydrates, especially those with high d
Document 1:::
An energy budget is a balance sheet of energy income against expenditure. It is studied in the field of energetics, which deals with energy transfer and transformation from one form to another. The calorie is the basic unit of measurement. An organism in a laboratory experiment is an open thermodynamic system, exchanging energy with its surroundings in three ways - heat, work and the potential energy of biochemical compounds.
Organisms use ingested food resources (C=consumption) as building blocks in the synthesis of tissues (P=production) and as fuel in the metabolic process that power this synthesis and other physiological processes (R=respiratory loss). Some of the resources are lost as waste products (F=faecal loss, U=urinary loss). All these aspects of metabolism can be represented in energy units. The basic model of energy budget may be shown as:
P = C - R - U - F or
P = C - (R + U + F) or
C = P + R + U + F
All the aspects of metabolism can be represented in energy units (e.g. joules (J); 1 kilocalorie ≈ 4.2 kJ).
Energy used for metabolism will be
R = C - (F + U + P)
Energy used in the maintenance will be
R + F + U = C - P
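As a minimal illustration of the budget identities above, the sketch below plugs hypothetical values (in joules, chosen only for illustration) into P = C - (R + U + F):

```python
# Minimal sketch of the energy budget identity P = C - (R + U + F).
# All values are hypothetical and expressed in joules.

def production(c, r, u, f):
    """Energy fixed in new tissue (P) from consumption (C) minus
    respiratory (R), urinary (U) and faecal (F) losses."""
    return c - (r + u + f)

C = 10_000.0   # consumption
R = 6_500.0    # respiratory loss
U = 500.0      # urinary loss
F = 1_800.0    # faecal loss

P = production(C, R, U, F)
print(f"Production P = {P:.0f} J")                 # 1200 J
print(f"Maintenance R + F + U = {C - P:.0f} J")    # 8800 J
```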
Endothermy and ectothermy
Energy budget allocation varies for endotherms and ectotherms. Ectotherms rely on the environment as a heat source while endotherms maintain their body temperature through the regulation of metabolic processes. The heat produced in association with metabolic processes facilitates the active lifestyles of endotherms and their ability to travel far distances over a range of temperatures in the search for food. Ectotherms are limited by the ambient temperature of the environment around them but the lack of substantial metabolic heat production accounts for an energetically inexpensive metabolic rate. The energy demands for ectotherms are generally one tenth of that required for endotherms.
Document 2:::
Animal nutrition focuses on the dietary nutrients needs of animals, primarily those in agriculture and food production, but also in zoos, aquariums, and wildlife management.
Constituents of diet
Macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built) and energy. Some of the structural material can be used to generate energy internally, though the net energy depends on such factors as absorption and digestive effort, which vary substantially from instance to instance. Vitamins, minerals, fiber, and water do not provide energy, but are required for other reasons. A third class of dietary material, fiber (i.e., non-digestible material such as cellulose), also seems to be required, for both mechanical and biochemical reasons, though the exact reasons remain unclear.
Molecules of carbohydrates and fats consist of carbon, hydrogen, and oxygen atoms. Carbohydrates range from simple monosaccharides (glucose, fructose, galactose) to complex polysaccharides (starch). Fats are triglycerides, made of assorted fatty acid monomers bound to glycerol backbone. Some fatty acids, but not all, are essential in the diet: they cannot be synthesized in the body. Protein molecules contain nitrogen atoms in addition to carbon, oxygen, and hydrogen. The fundamental components of protein are nitrogen-containing amino acids. Essential amino acids cannot be made by the animal. Some of the amino acids are convertible (with the expenditure of energy) to glucose and can be used for energy production just as ordinary glucose. By breaking down existing protein, some glucose can be produced internally; the remaining amino acids are discarded, primarily as urea in urine. This occurs normally only during prolonged starvation.
Other dietary substances found in plant foods (phytochemicals, polyphenols) are not identified as essential nutrients but appear to impact healt
Document 3:::
In biology, energy homeostasis, or the homeostatic control of energy balance, is a biological process that involves the coordinated homeostatic regulation of food intake (energy inflow) and energy expenditure (energy outflow). The human brain, particularly the hypothalamus, plays a central role in regulating energy homeostasis and generating the sense of hunger by integrating a number of biochemical signals that transmit information about energy balance. Fifty percent of the energy from glucose metabolism is immediately converted to heat.
Energy homeostasis is an important aspect of bioenergetics.
Definition
In the US, biological energy is expressed using the energy unit Calorie with a capital C (i.e. a kilocalorie), which equals the energy needed to increase the temperature of 1 kilogram of water by 1 °C (about 4.18 kJ).
Energy balance, through biosynthetic reactions, can be measured with the following equation:
Energy intake (from food and fluids) = Energy expended (through work and heat generated) + Change in stored energy (body fat and glycogen storage)
The first law of thermodynamics states that energy can be neither created nor destroyed, only converted from one form to another. So, when a calorie of food energy is consumed, one of three things happens to it within the body: it may be stored as body fat, triglycerides, or glycogen; transferred to cells and converted to chemical energy in the form of adenosine triphosphate (ATP, a coenzyme) or related compounds; or dissipated as heat.
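A minimal sketch of the balance equation above, rearranged to solve for the change in stored energy; the daily intake and expenditure figures are hypothetical:

```python
# Energy balance: intake = expenditure + change in stored energy,
# so change in stores = intake - expenditure. Values are hypothetical.

KJ_PER_KCAL = 4.18          # approximate conversion noted in the text

intake_kcal = 2500.0        # energy from food and fluids
expended_kcal = 2200.0      # work plus heat generated

delta_stored_kcal = intake_kcal - expended_kcal
print(f"Change in stored energy: {delta_stored_kcal:+.0f} kcal "
      f"({delta_stored_kcal * KJ_PER_KCAL:+.0f} kJ)")
# A positive value means energy is added to body fat or glycogen stores.
```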
Energy
Intake
Energy intake is measured by the amount of calories consumed from food and fluids. Energy intake is modulated by hunger, which is primarily regulated by the hypothalamus, and choice, which is determined by the sets of brain structures that are responsible for stimulus control (i.e., operant conditioning and classical conditioning) and cognitive control of eating behavior. Hunger is regulated in part by the act
Document 4:::
Energy expenditure, often estimated as the total daily energy expenditure (TDEE), is the amount of energy burned by the human body.
Causes of energy expenditure
Resting metabolic rate
Resting metabolic rate generally accounts for 60 to 75 percent of TDEE. Because adipose tissue does not use much energy to maintain, fat-free mass is a better predictor of metabolic rate. A taller person will typically have less fat mass than a shorter person at the same weight and therefore burn more energy. Men also carry more skeletal muscle tissue on average than women, and other sex differences in organ size account for sex differences in metabolic rate. Obese individuals burn more energy than lean individuals because of the additional calories needed to maintain adipose tissue and other organs that grow in size in response to obesity. At rest, the largest fractions of energy are burned by the skeletal muscles, brain, and liver; around 20 percent each. Increasing skeletal muscle tissue can increase metabolic rate.
Activity
Energy burned during physical activity includes the thermic effect of physical activity (TEPA) and non-exercise activity thermogenesis (NEAT).
Thermic effect of food
Thermic effect of food is the amount of energy burned digesting food, around 10 percent of TDEE. Proteins are the component of food requiring the most energy to digest.
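The components described above can be combined into a rough TDEE estimate. In the sketch below, only the roughly 10 percent thermic effect of food comes from the text; the resting-metabolism and activity figures are invented for illustration:

```python
# Rough decomposition of total daily energy expenditure (TDEE).
# Only the ~10% thermic effect of food is taken from the text;
# the other figures are hypothetical.

resting_metabolic_rate = 1600.0   # kcal/day
activity = 600.0                  # TEPA + NEAT, kcal/day

# TDEE = RMR + activity + 0.10 * TDEE  =>  TDEE = (RMR + activity) / 0.90
tdee = (resting_metabolic_rate + activity) / 0.90

print(f"Estimated TDEE: {tdee:.0f} kcal/day")
print(f"  resting metabolism: {resting_metabolic_rate / tdee:.0%}")
print(f"  physical activity:  {activity / tdee:.0%}")
print(f"  thermic effect:     {0.10:.0%}")
```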
Changing energy expenditure
Weight change
Losing or gaining weight affects the energy expenditure. Reduced energy expenditure after weight loss can be a major challenge for people seeking to avoid weight regain after weight loss. It is controversial whether losing weight causes a decrease in energy expenditure greater than expected by the loss of adipose tissue and fat-free mass during weight loss. This excess reduction is termed adaptive thermogenesis and it is estimated that it might compose 50 to 100 kcal/day in people actively losing weight. Some studies have reported that it disappears after a short period of weight
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Any unused energy in food, whether it comes from carbohydrates, proteins, or lipids, is stored in the body where?
A. spleen
B. kidneys
C. bones
D. fat
Answer:
|
|
sciq-514
|
multiple_choice
|
What waves are the broad range of electromagnetic waves with the longest wavelengths and lowest frequencies?
|
[
"radio",
"light",
"sound",
"microwaves"
] |
A
|
Relevant Documents:
Document 0:::
Terahertz radiation – also known as submillimeter radiation, terahertz waves, tremendously high frequency (THF), T-rays, T-waves, T-light, T-lux or THz – consists of electromagnetic waves within the ITU-designated band of frequencies from 0.3 to 3 terahertz (THz), although the upper boundary is somewhat arbitrary and is considered by some sources as 30 THz. One terahertz is 1012 Hz or 1000 GHz. Wavelengths of radiation in the terahertz band correspondingly range from 1 mm to 0.1 mm = 100 µm. Because terahertz radiation begins at a wavelength of around 1 millimeter and proceeds into shorter wavelengths, it is sometimes known as the submillimeter band, and its radiation as submillimeter waves, especially in astronomy. This band of electromagnetic radiation lies within the transition region between microwave and far infrared, and can be regarded as either.
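A minimal sketch converting the band edges quoted above between frequency and wavelength via wavelength = c / frequency (the helper name is illustrative):

```python
# Free-space wavelength for the terahertz band edges quoted above.

C_M_PER_S = 299_792_458.0   # speed of light in vacuum

def wavelength_mm(frequency_hz):
    """Wavelength in millimetres for a given frequency in hertz."""
    return C_M_PER_S / frequency_hz * 1e3

for f_thz in (0.3, 3.0):
    print(f"{f_thz:4.1f} THz  ->  {wavelength_mm(f_thz * 1e12):.3f} mm")
# 0.3 THz -> ~1 mm and 3 THz -> ~0.1 mm, matching the limits given above.
```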
At some frequencies, terahertz radiation is strongly absorbed by the gases of the atmosphere, and in air is attenuated to zero within a few meters, so it is not practical for terrestrial radio communication at such frequencies. However, there are frequency windows in Earth's atmosphere, where the terahertz radiation could propagate up to 1 km or even longer depending on atmospheric conditions. The most important is the 0.3 THz band that will be used for 6G communications. It can penetrate thin layers of materials but is blocked by thicker objects. THz beams transmitted through materials can be used for material characterization, layer inspection, relief measurement, and as a lower-energy alternative to X-rays for producing high resolution images of the interior of solid objects.
Terahertz radiation occupies a middle ground where the ranges of microwaves and infrared light waves overlap, known as the “terahertz gap”; it is called a “gap” because the technology for its generation and manipulation is still in its infancy. The generation and modulation of electromagnetic waves in this frequency range ceases to be pos
Document 1:::
IEEE Transactions on Microwave Theory and Techniques (T-MTT) is a monthly peer-reviewed scientific journal with a focus on that part of engineering and theory associated with microwave/millimeter-wave technology and components, electronic devices, guided wave structures and theory, electromagnetic theory, and Radio Frequency Hybrid and Monolithic Integrated Circuits, including mixed-signal circuits, from a few MHz to THz.
T-MTT is published by the IEEE Microwave Theory and Techniques Society. T-MTT was established in 1953 as the Transactions of the IRE Professional Group on Microwave Theory and Techniques. From 1955 T-MTT was published as the IRE Transactions on Microwave Theory and Techniques and was finally the current denomination since 1963.
The editor-in-chief is Jianguo Ma (Guangdong University of Technology). According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.599.
Document 2:::
Microwave transmission is the transmission of information by electromagnetic waves with wavelengths in the microwave frequency range of 300 MHz to 300 GHz (1 m - 1 mm wavelength) of the electromagnetic spectrum. Microwave signals are normally limited to the line of sight, so long-distance transmission using these signals requires a series of repeaters forming a microwave relay network. It is possible to use microwave signals in over-the-horizon communications using tropospheric scatter, but such systems are expensive and generally used only in specialist roles.
Although an experimental microwave telecommunication link across the English Channel was demonstrated in 1931, the development of radar in World War II provided the technology for practical exploitation of microwave communication. During the war, the British Army introduced the Wireless Set No. 10, which used microwave relays to multiplex eight telephone channels over long distances. A link across the English Channel allowed General Bernard Montgomery to remain in continual contact with his group headquarters in London.
In the post-war era, the development of microwave technology was rapid, which led to the construction of several transcontinental microwave relay systems in North America and Europe. In addition to carrying thousands of telephone calls at a time, these networks were also used to send television signals for cross-country broadcast, and later, computer data. Communication satellites took over the television broadcast market during the 1970s and 80s, and the introduction of long-distance fibre optic systems in the 1980s and especially 90s led to the rapid rundown of the relay networks, most of which are abandoned.
In recent years, there has been an explosive increase in use of the microwave spectrum by new telecommunication technologies such as wireless networks, and direct-broadcast satellites which broadcast television and radio directly into consumers' homes. Larger line-of-sight links are
Document 3:::
IEEE Transactions on Terahertz Science and Technology is a bimonthly peer-reviewed scientific journal covering terahertz science, technology, instruments, and applications – "Expanding the use of the Electromagnetic Spectrum." The editor-in-chief is Imran Mehdi, Jet Propulsion Laboratory.
See also
IEEE Transactions on Microwave Theory and Techniques
IEEE Microwave Theory and Wireless Components Letters
IEEE Microwave Magazine
IEEE Microwave Theory and Techniques Society
External links
Transactions on Terahertz Science and Technology
Electrical and electronic engineering journals
Bimonthly journals
Academic journals established in 2011
English-language journals
Document 4:::
The Journal of Microwave Power and Electromagnetic Energy is a quarterly peer-reviewed scientific journal covering industrial, medical, and scientific applications of electromagnetic and microwaves from 0.1 to 100 GHz, including topics such as food processing, instrumentation, polymer technologies, microwave chemistry and systems design.
The journal is published jointly by the International Microwave Power Institute and Taylor & Francis. Its editor-in-chief is Juan Antonio Aguilar-Garib (Autonomous University of Nuevo León). According to the Journal Citation Reports, the journal has a 2022 impact factor of 1.5.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What waves are the broad range of electromagnetic waves with the longest wavelengths and lowest frequencies?
A. radio
B. light
C. sound
D. microwaves
Answer:
|
|
sciq-10775
|
multiple_choice
|
H2O is the chemical formula for what?
|
[
"smog",
"glass",
"water",
"salt"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
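As a quick numerical check of the conceptual question above (the temperature decreases), the sketch below applies the reversible-adiabatic relation T·V^(γ−1) = constant to a monatomic ideal gas; the starting temperature and volume ratio are invented:

```python
# Reversible adiabatic expansion of an ideal gas: T * V**(gamma - 1) = const.
# Doubling the volume of a monatomic gas (gamma = 5/3) lowers its temperature.

gamma = 5.0 / 3.0   # monatomic ideal gas
T1 = 300.0          # initial temperature in kelvin (hypothetical)
V_ratio = 2.0       # V2 / V1: the gas expands to twice its volume

T2 = T1 * (1.0 / V_ratio) ** (gamma - 1.0)
print(f"T1 = {T1:.0f} K  ->  T2 = {T2:.0f} K")   # T2 ≈ 189 K, so it cools
```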
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
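Knowledge spaces are standardly defined as families of knowledge states that are closed under union. The sketch below checks that property for a toy example; the domain Q and the states are invented for illustration:

```python
# Toy check of the closure property of a knowledge space: the union of
# any two feasible states is itself feasible. Domain and states invented.

from itertools import combinations

Q = frozenset({"counting", "addition", "subtraction", "multiplication"})

states = {
    frozenset(),                                          # the empty state
    frozenset({"counting"}),
    frozenset({"counting", "addition"}),
    frozenset({"counting", "subtraction"}),
    frozenset({"counting", "addition", "subtraction"}),
    Q,                                                    # full mastery
}

def is_union_closed(family):
    """True if the union of every pair of states is itself in the family."""
    return all(a | b in family for a, b in combinations(family, 2))

print("every state is a subset of Q:", all(s <= Q for s in states))  # True
print("closed under union:", is_union_closed(states))                # True
```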
Document 2:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 4:::
In science, a formula is a concise way of expressing information symbolically, as in a mathematical formula or a chemical formula. The informal use of the term formula in science refers to the general construct of a relationship between given quantities.
The plural of formula can be either formulas (from the most common English plural noun form) or, under the influence of scientific Latin, formulae (from the original Latin).
In mathematics
In mathematics, a formula generally refers to an equation relating one mathematical expression to another, with the most important ones being mathematical theorems. For example, determining the volume of a sphere requires a significant amount of integral calculus or its geometrical analogue, the method of exhaustion. However, having done this once in terms of some parameter (the radius for example), mathematicians have produced a formula to describe the volume of a sphere in terms of its radius:
V = (4/3) π r³
Having obtained this result, the volume of any sphere can be computed as long as its radius is known. Here, notice that the volume V and the radius r are expressed as single letters instead of words or phrases. This convention, while less important in a relatively simple formula, means that mathematicians can more quickly manipulate formulas which are larger and more complex. Mathematical formulas are often algebraic, analytical or in closed form.
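A minimal sketch of the point made above: once the closed-form formula is known, any sphere's volume follows directly from its radius (the function name is illustrative):

```python
import math

def sphere_volume(radius):
    """Volume of a sphere: V = (4/3) * pi * r**3."""
    return (4.0 / 3.0) * math.pi * radius ** 3

for r in (1.0, 2.0, 10.0):
    print(f"r = {r:5.1f}  ->  V = {sphere_volume(r):10.3f}")
```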
In a general context, formulas are often a manifestation of mathematical model to real world phenomena, and as such can be used to provide solution (or approximated solution) to real world problems, with some being more general than others. For example, the formula
F = ma
is an expression of Newton's second law, and is applicable to a wide range of physical situations. Other formulas, such as the use of the equation of a sine curve to model the movement of the tides in a bay, may be created to solve a particular problem. In all cases, however, formulas form the basis for calculations.
Expr
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
H2O is the chemical formula for what?
A. smog
B. glass
C. water
D. salt
Answer:
|
|
sciq-7110
|
multiple_choice
|
What is the change frogs and butterflies go through?
|
[
"parthenogenesis",
"transformation",
"metamorphosis",
"hiatus"
] |
C
|
Relevant Documents:
Document 0:::
Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology.
The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis.
Subfields
Evolution is the central unifying concept in biology. Biology can be divided into various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution
Document 1:::
Direct development is a concept in biology. It refers to forms of growth to adulthood that do not involve metamorphosis. An animal undergoes direct development if the immature organism resembles a small adult rather than having a distinct larval form. A frog that hatches out of its egg as a small frog undergoes direct development. A frog that hatches out of its egg as a tadpole does not.
Direct development is the opposite of complete metamorphosis. An animal undergoes complete metamorphosis if it becomes a non-moving thing, for example a pupa in a cocoon, between its larval and adult stages.
Examples
Most frogs in the genus Callulina hatch out of their eggs as froglets.
Springtails and mayflies, called ametabolous insects, undergo direct development.
Document 2:::
Ontogenetic niche shift (abbreviated ONS) is an ecological phenomenon where an organism (usually an animal) changes its diet or habitat during its ontogeny (development). During the ontogenetic niche shifting an ecological niche of an individual changes its breadth and position. The best known representatives of taxa that exhibit some kind of the ontogenetic niche shift are fish (e.g. migration of so-called diadromous fish between saltwater and freshwater for purpose of breeding), insects (e.g. metamorphosis between different life stages; such as larva, pupa and imago) and amphibians (e.g. metamorphosis from tadpole to adult frog). A niche shift is thought to be determined genetically, while also being irreversible. Important aspect of the ONS is the fact, that individuals of different stages of a population (e.g. of various age or size) utilize different kind of resources and habitats. The term was introduced in a 1984 paper by biologists Earl E. Werner and James F. Gilliam.
Characteristics
The ontogenetic niche shift is thought to be determined genetically, while also being irreversible. In complex natural systems the ONS happens multiple times in lifetime of an individual (in some examples the ontogenetic niche shifting can occur continuously). The ontogenetic niche shift varies across species; in some it is hardly visible and gradual (for example a change in diet or in size in mammals and reptiles), while in others it is obvious and abrupt (the metamorphosis of insects, which often results in changing habitat, diet and other ecological conditions). One of the studies suggests that differences in the ONS across species could be (at least to some degree) explained by diversity of traits and functional roles of a species. As a consequence differences in ontogenetic niche shifting are thought to follow some general patterns.
Importance
For communities
It is thought that almost every organism shows some kind of ontogenetic niche shift. The ONS, which is respons
Document 3:::
Merriam-Webster defines chemotaxonomy as the method of biological classification based on similarities and dissimilarities in the structure of certain compounds among the organisms being classified. Advocates argue that, as proteins are more closely controlled by genes and less subject to natural selection than anatomical features, they are more reliable indicators of genetic relationships. The compounds studied most are proteins, amino acids, nucleic acids, peptides, etc.
Physiology is the study of the working of organs in a living being. Since the working of the organs involves chemicals of the body, these compounds are called biochemical evidence. The study of morphological change has shown that there are changes in the structure of animals that result in evolution. When changes take place in the structure of a living organism, they are naturally accompanied by changes in the physiological or biochemical processes.
John Griffith Vaughan was one of the pioneers of chemotaxonomy.
Biochemical products
The body of any animal in the animal kingdom is made up of a number of chemicals. Of these, only a few biochemical products have been taken into consideration to derive evidence for evolution.
Protoplasm: Every living cell, from a bacterium to an elephant, from grasses to the blue whale, has protoplasm. Though the complexity and constituents of the protoplasm increases from lower to higher living organism, the basic compound is always the protoplasm. Evolutionary significance: From this evidence, it is clear that all living things have a common origin point or a common ancestor, which in turn had protoplasm. Its complexity increased due to changes in the mode of life and habitat.
Nucleic acids: DNA and RNA are the two types of nucleic acids present in all living organisms. They are present in the chromosomes. The structure of these acids has been found to be similar in all animals. DNA always has two chains forming a double helix, and each chain is made up of nuc
Document 4:::
A juvenile is an individual organism (especially an animal) that has not yet reached its adult form, sexual maturity or size. Juveniles can look very different from the adult form, particularly in colour, and may not fill the same niche as the adult form. In many organisms the juvenile has a different name from the adult (see List of animal names).
Some organisms reach sexual maturity in a short metamorphosis, such as ecdysis in many insects and some other arthropods. For others, the transition from juvenile to fully mature is a more prolonged process—puberty in humans and other species (like higher primates and whales), for example. In such cases, juveniles during this transformation are sometimes called subadults.
Many invertebrates cease development upon reaching adulthood. The stages of such invertebrates are larvae or nymphs.
In vertebrates and some invertebrates (e.g. spiders), larval forms (e.g. tadpoles) are usually considered a development stage of their own, and "juvenile" refers to a post-larval stage that is not fully grown and not sexually mature. In amniotes, the embryo represents the larval stage. Here, a "juvenile" is an individual in the time between hatching/birth/germination and reaching maturity.
Examples
For animal larval juveniles, see larva
Juvenile birds or bats can be called fledglings
For cat juveniles, see kitten
For dog juveniles, see puppy
For human juvenile life stages, see childhood and adolescence, an intermediary period between the onset of puberty and full physical, psychological, and social adulthood
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the change frogs and butterflies go through?
A. parthenogenesis
B. transformation
C. metamorphosis
D. hiatus
Answer:
|
|
sciq-7162
|
multiple_choice
|
What are electrons at the outermost energy level of an atom called?
|
[
"ions",
"valence electrons",
"shell electrons",
"core electrons"
] |
B
|
Relevant Documents:
Document 0:::
A quantum mechanical system or particle that is bound—that is, confined spatially—can only take on certain discrete values of energy, called energy levels. This contrasts with classical particles, which can have any amount of energy. The term is commonly used for the energy levels of the electrons in atoms, ions, or molecules, which are bound by the electric field of the nucleus, but can also refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized.
In chemistry and atomic physics, an electron shell, or principal energy level, may be thought of as the orbit of one or more electrons around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on farther and farther from the nucleus. The shells correspond with the principal quantum numbers (n = 1, 2, 3, 4 ...) or are labeled alphabetically with letters used in the X-ray notation (K, L, M, N...).
Each shell can contain only a fixed number of electrons: The first shell can hold up to two electrons, the second shell can hold up to eight (2 + 6) electrons, the third shell can hold up to 18 (2 + 6 + 10) and so on. The general formula is that the nth shell can in principle hold up to 2n2 electrons. Since electrons are electrically attracted to the nucleus, an atom's electrons will generally occupy outer shells only if the more inner shells have already been completely filled by other electrons. However, this is not a strict requirement: atoms may have two or even three incomplete outer shells. (See Madelung rule for more details.) For an explanation of why electrons exist in these shells see electron configuration.
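A minimal sketch of the 2n² capacity rule quoted above, listing the first few shells with their X-ray labels:

```python
# The nth shell can hold at most 2 * n**2 electrons.
# Shell labels follow the X-ray notation K, L, M, N, O.

for n, label in enumerate("KLMNO", start=1):
    print(f"n = {n} ({label} shell): holds up to {2 * n ** 2} electrons")
# n = 1 -> 2, n = 2 -> 8, n = 3 -> 18, n = 4 -> 32, n = 5 -> 50
```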
If the potential energy is set to zero at infinite distance from the atomic nucleus or molecule, the usual convention, then bound electron states have negative pot
Document 1:::
Secondary electrons are electrons generated as ionization products. They are called 'secondary' because they are generated by other radiation (the primary radiation). This radiation can be in the form of ions, electrons, or photons with sufficiently high energy, i.e. exceeding the ionization potential. Photoelectrons can be considered an example of secondary electrons where the primary radiation are photons; in some discussions photoelectrons with higher energy (>50 eV) are still considered "primary" while the electrons freed by the photoelectrons are "secondary".
Applications
Secondary electrons are also the main means of viewing images in the scanning electron microscope (SEM). The range of secondary electrons depends on the energy. Plotting the inelastic mean free path as a function of energy often shows characteristics of the "universal curve" familiar to electron spectroscopists and surface analysts. This distance is on the order of a few nanometers in metals and tens of nanometers in insulators. This small distance allows such fine resolution to be achieved in the SEM.
For SiO2, for a primary electron energy of 100 eV, the secondary electron range is up to 20 nm from the point of incidence.
See also
Delta ray
Everhart-Thornley detector
Document 2:::
Core electrons are the electrons in an atom that are not valence electrons and do not participate in chemical bonding. The nucleus and the core electrons of an atom form the atomic core. Core electrons are tightly bound to the nucleus. Therefore, unlike valence electrons, core electrons play a secondary role in chemical bonding and reactions by screening the positive charge of the atomic nucleus from the valence electrons.
The number of valence electrons of an element can be determined by the periodic table group of the element (see valence electron):
For main-group elements, the number of valence electrons ranges from 1 to 8 (ns and np orbitals).
For transition metals, the number of valence electrons ranges from 3 to 12 (ns and (n−1)d orbitals).
For lanthanides and actinides, the number of valence electrons ranges from 3 to 16 (ns, (n−2)f and (n−1)d orbitals).
All other non-valence electrons for an atom of that element are considered core electrons.
Orbital theory
A more complex explanation of the difference between core and valence electrons can be described with atomic orbital theory.
In atoms with a single electron, the energy of an orbital is determined exclusively by the principal quantum number n. The n = 1 orbital has the lowest possible energy in the atom. For large n, the energy increases so much that the electron can easily escape from the atom. In single-electron atoms, all energy levels with the same principal quantum number are degenerate, and have the same energy.
In atoms with more than one electron, the energy of an electron depends not only on the properties of the orbital it resides in, but also on its interactions with the other electrons in other orbitals. This requires consideration of the ℓ quantum number. Higher values of ℓ are associated with higher values of energy; for instance, the 2p state is higher than the 2s state. When ℓ = 2, the increase in energy of the orbital becomes large enough to push the energy of orbital above the energy
Document 3:::
In chemistry and atomic physics, an electron shell may be thought of as an orbit that electrons follow around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called the "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on farther and farther from the nucleus. The shells correspond to the principal quantum numbers (n = 1, 2, 3, 4 ...) or are labeled alphabetically with the letters used in X-ray notation (K, L, M, ...). A useful guide when understanding electron shells in atoms is to note that each row on the conventional periodic table of elements represents an electron shell.
Each shell can contain only a fixed number of electrons: the first shell can hold up to two electrons, the second shell can hold up to eight (2 + 6) electrons, the third shell can hold up to 18 (2 + 6 + 10) and so on. The general formula is that the nth shell can in principle hold up to 2(n2) electrons. For an explanation of why electrons exist in these shells, see electron configuration.
Each shell consists of one or more subshells, and each subshell consists of one or more atomic orbitals.
History
In 1913 Bohr proposed a model of the atom, giving the arrangement of electrons in their sequential orbits. At that time, Bohr allowed the capacity of the inner orbit of the atom to increase to eight electrons as the atoms got larger, and "in the scheme given below the number of electrons in this [outer] ring is arbitrary put equal to the normal valency of the corresponding element." Using these and other constraints, he proposed configurations that are in accord with those now known only for the first six elements. "From the above we are led to the following possible scheme for the arrangement of the electrons in light atoms:"
The shell terminology comes from Arnold Sommerfeld's modification of the 1913 Bohr model. During this period Bohr was working with Walther Kossel, whose papers in 1914 and in 1916 called the or
Document 4:::
In chemistry and physics, valence electrons are electrons in the outermost shell of an atom, and that can participate in the formation of a chemical bond if the outermost shell is not closed. In a single covalent bond, a shared pair forms with both atoms in the bond each contributing one valence electron.
The presence of valence electrons can determine the element's chemical properties, such as its valence—whether it may bond with other elements and, if so, how readily and with how many. In this way, a given element's reactivity is highly dependent upon its electronic configuration. For a main-group element, a valence electron can exist only in the outermost electron shell; for a transition metal, a valence electron can also be in an inner shell.
An atom with a closed shell of valence electrons (corresponding to a noble gas configuration) tends to be chemically inert. Atoms with one or two valence electrons more than a closed shell are highly reactive due to the relatively low energy to remove the extra valence electrons to form a positive ion. An atom with one or two electrons fewer than a closed shell is reactive due to its tendency either to gain the missing valence electrons and form a negative ion, or else to share valence electrons and form a covalent bond.
Similar to a core electron, a valence electron has the ability to absorb or release energy in the form of a photon. An energy gain can trigger the electron to move (jump) to an outer shell; this is known as atomic excitation. Or the electron can even break free from its associated atom's shell; this is ionization to form a positive ion. When an electron loses energy (thereby causing a photon to be emitted), then it can move to an inner shell which is not fully occupied.
Overview
Electron configuration
The electrons that determine valence – how an atom reacts chemically – are those with the highest energy.
For a main-group element, the valence electrons are defined as those electrons residing in the e
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are electrons at the outermost energy level of an atom called?
A. ions
B. valence electrons
C. shell electrons
D. core electrons
Answer:
|
|
ai2_arc-1060
|
multiple_choice
|
Vegetables can be scientifically classified by all of these except
|
[
"size.",
"color.",
"shape of plant parts.",
"whether they taste good."
] |
D
|
Relevant Documents:
Document 0:::
Chard or Swiss chard (; Beta vulgaris subsp. vulgaris, Cicla Group and Flavescens Group) is a green leafy vegetable. In the cultivars of the Flavescens Group, the leaf stalks are large and often prepared separately from the leaf blade; the Cicla Group is the leafy spinach beet. The leaf blade can be green or reddish; the leaf stalks are usually white, yellow or red.
Chard, like other green leafy vegetables, has highly nutritious leaves. Chard has been used in cooking for centuries, but because it is the same species as beetroot, the common names that cooks and cultures have used for chard may be confusing; it has many common names, such as silver beet, perpetual spinach, beet spinach, seakale beet, or leaf beet.
Classification
Chard was first described in 1753 by Carl Linnaeus as Beta vulgaris var. cicla. Its taxonomic rank has changed many times: it has been treated as a subspecies, a convariety, and a variety of Beta vulgaris. (Among the numerous synonyms for it are Beta vulgaris subsp. cicla (L.) W.D.J. Koch (Cicla Group), B. vulgaris subsp. cicla (L.) W.D.J. Koch var. cicla L., B. vulgaris var. cycla (L.) Ulrich, B. vulgaris subsp. vulgaris (Leaf Beet Group), B. vulgaris subsp. vulgaris (Spinach Beet Group), B. vulgaris subsp. cicla (L.) W.D.J. Koch (Flavescens Group), B. vulgaris subsp. cicla (L.) W.D.J. Koch var. flavescens (Lam.) DC., B. vulgaris L. subsp. vulgaris (Leaf Beet Group), B. vulgaris subsp. vulgaris (Swiss Chard Group)). The accepted name for all beet cultivars, like chard, sugar beet and beetroot, is Beta vulgaris subsp. vulgaris. They are cultivated descendants of the sea beet, Beta vulgaris subsp. maritima. Chard belongs to the chenopods, which are now mostly included in the family Amaranthaceae (sensu lato).
Document 1:::
Multiplex sensor is a hand-held multiparametric optical sensor developed by Force-A. The sensor is a result of 15 years of research on plant autofluorescence conducted by the CNRS (National Center for Scientific Research) and University of Paris-Sud Orsay. It provides accurate and complete information on the physiological state of the crop, allowing real-time and non-destructive measurements of chlorophyll and polyphenols contents in leaves and fruits.
Technology
Multiplex assesses the chlorophyll and polyphenols indices by making use of two attributes of plant fluorescence: the effect of fluorescence re-absorption by chlorophyll and screening effect of polyphenols.
The sensor is an optical head which contains:
Optical sources (UV, blue, green and red)
Detectors (blue-green or yellow, red and far-red (NIR))
Applications
Alongside with other data, Multiplex is designed to provide input for decision support systems (DSS) for a range of crops, including:
Fertilization applications
Crop quality assessments (nitrogen status, maturity, freshness and disease detection)
As a standalone sensor, Multiplex is a tool for rapid collection of information concerning chlorophyll and flavonoids contents of the plant to be applied on ecophysiological research.
Document 2:::
The European Cultivated Potato Database (ECPD) is an online collaborative database of potato variety descriptions. The information that it contains can be searched by variety name, or by selecting one or more required characteristics.
159,848 observations
29 contributors
91 characters
4,119 cultivated varieties
1,354 breeding lines
The data is indexed by variety, character, country of origin, and contributor. There is a facility to select a variety and to find similar varieties based upon botanical characteristics.
ECPD is the result of collaboration between participants in eight European Union countries and five East European countries. It is intended to be a source of information on varieties maintained by them. More than twenty-three scientific organisations are contributing to this information source.
The database is maintained and updated by the Scottish Agricultural Science Agency within the framework of the European Cooperative Programme for Crop Genetic Resources Networks (ECP/GR), which is organised by Bioversity International. The European Cultivated Potato Database was created to advance the conservation and use of genetic diversity for the well-being of present and future generations.
External links
The European Cultivated Potato Database
Biodiversity databases
Databases in Scotland
Government databases in the United Kingdom
Information technology organizations based in Europe
Online databases
Potatoes
Document 3:::
Ampelography (ἄμπελος, "vine" + γράφος, "writing") is the field of botany concerned with the identification and classification of grapevines, Vitis spp. Traditionally this has been done by comparing the shape and colour of the vine leaves and grape berries; more recently the study of vines has been revolutionised by DNA fingerprinting.
Early history
The grape vine is an extremely variable species and some varieties, such as Pinot, mutate particularly frequently. At the same time, the wine and table grape industries have been important since ancient times, so large sums of money can depend on the correct identification of different varieties and clones of grapevines.
The science of ampelography began seriously in the 19th century, when it became important to understand more about the different species of vine, as they had very different resistance to disease and pests such as phylloxera.
Many vine identification books were published at this time, one of which is Victor Rendu's Ampélographie française of 1857, featuring hand-colored lithographs by Eugene Grobon.
Pierre Galet
Until the Second World War, ampelography had been an art. Then Pierre Galet of the École nationale supérieure agronomique de Montpellier made a systematic assembly of criteria for the identification of vines. The Galet system was based on the shape and contours of the leaves, the characteristics of growing shoots, shoot tips, petioles, the sex of the flowers, the shape of the grape clusters and the colour, size and pips of the grapes themselves. The grapes are less affected by environmental factors than the leaves and the shoots, but are obviously not around for as long. He even included grape flavour as a criterion, but this is rather subjective.
Galet then published the definitive book, Ampélographie pratique, in 1952, featuring 9,600 types of vine. Ampélographie pratique was translated into English by Lucie Morton, published in 1979 and updated in 2000.
Illustrated Historical Universal Am
Document 4:::
Olericulture is the science of vegetable growing, dealing with the culture of non-woody (herbaceous) plants for food.
Olericulture is the production of plants for use of the edible parts. Vegetable crops can be classified into nine major categories:
Potherbs and greens – spinach and collards
Salad crops – lettuce, celery
Cole crops – cabbage and cauliflower
Root crops (tubers) – potatoes, beets, carrots, radishes
Bulb crops – onions, leeks
Legumes – beans, peas
Cucurbits – melons, squash, cucumber
Solanaceous crops – tomatoes, peppers, potatoes
Sweet corn
Olericulture deals with the production, storage, processing and marketing of vegetables. It encompasses crop establishment, including cultivar selection, seedbed preparation and establishment of vegetable crops by seed and transplants.
It also includes maintenance and care of vegetable crops as well commercial and non-traditional vegetable crop production including organic gardening and organic farming; sustainable agriculture and horticulture; hydroponics; and biotechnology.
See also
Agriculture – the cultivation of animals, plants, fungi and other life forms for food, fiber, and other products used to sustain life.
Horticulture – the industry and science of plant cultivation including the process of preparing soil for the planting of seeds, tubers, or cuttings.
Pomology – a branch of botany that studies and cultivates pome fruit, and sometimes applied more broadly, to the cultivation of any type of fruit.
Tropical horticulture – a branch of horticulture that studies and cultivates garden plants in the tropics, i.e., the equatorial regions of the world.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Vegetables can be scientifically classified by all of these except
A. size.
B. color.
C. shape of plant parts.
D. whether they taste good.
Answer:
|
|
sciq-1457
|
multiple_choice
|
What produces almost one-half of the earth's oxygen through photosynthesis?
|
[
"prokaryotes",
"protists",
"algae",
"arthropods"
] |
B
|
Relevant Documents:
Document 0:::
In ecology, primary production is the synthesis of organic compounds from atmospheric or aqueous carbon dioxide. It principally occurs through the process of photosynthesis, which uses light as its source of energy, but it also occurs through chemosynthesis, which uses the oxidation or reduction of inorganic chemical compounds as its source of energy. Almost all life on Earth relies directly or indirectly on primary production. The organisms responsible for primary production are known as primary producers or autotrophs, and form the base of the food chain. In terrestrial ecoregions, these are mainly plants, while in aquatic ecoregions algae predominate in this role. Ecologists distinguish primary production as either net or gross, the former accounting for losses to processes such as cellular respiration, the latter not.
Overview
Primary production is the production of chemical energy in organic compounds by living organisms. The main source of this energy is sunlight but a minute fraction of primary production is driven by lithotrophic organisms using the chemical energy of inorganic molecules. Regardless of its source, this energy is used to synthesize complex organic molecules from simpler inorganic compounds such as carbon dioxide (CO2) and water (H2O). The following two equations are simplified representations of photosynthesis (top) and (one form of) chemosynthesis (bottom):
CO2 + H2O + light → CH2O + O2
CO2 + O2 + 4 H2S → CH2O + 4 S + 3 H2O
In both cases, the end point is a polymer of reduced carbohydrate, (CH2O)n, typically molecules such as glucose or other sugars. These relatively simple molecules may be then used to further synthesise more complicated molecules, including proteins, complex carbohydrates, lipids, and nucleic acids, or be respired to perform work. Consumption of primary producers by heterotrophic organisms, such as animals, then transfers these organic molecules (and the energy stored within them) up the food web, fueling all of the Earth'
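A minimal sketch verifying that the simplified photosynthesis equation above is atom-balanced; the element counts are written out by hand from the formulas shown:

```python
# Check atom balance of CO2 + H2O -> CH2O + O2.

from collections import Counter

co2  = Counter(C=1, O=2)
h2o  = Counter(H=2, O=1)
ch2o = Counter(C=1, H=2, O=1)
o2   = Counter(O=2)

left  = co2 + h2o      # reactant side: C1 H2 O3
right = ch2o + o2      # product side:  C1 H2 O3

print("balanced:", left == right)   # True
```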
Document 1:::
Cyanobacteria, also called Cyanobacteriota or Cyanophyta, are a phylum of gram-negative bacteria that obtain energy via photosynthesis. The name cyanobacteria refers to their color (from the Greek kyanós, meaning 'blue'), which similarly forms the basis of cyanobacteria's common name, blue-green algae, although they are not usually scientifically classified as algae. They appear to have originated in a freshwater or terrestrial environment. Sericytochromatia, the proposed name of the paraphyletic and most basal group, is the ancestor of both the non-photosynthetic group Melainabacteria and the photosynthetic cyanobacteria, also called Oxyphotobacteria.
Cyanobacteria use photosynthetic pigments, such as carotenoids, phycobilins, and various forms of chlorophyll, which absorb energy from light. Unlike heterotrophic prokaryotes, cyanobacteria have internal membranes. These are flattened sacs called thylakoids where photosynthesis is performed. Phototrophic eukaryotes such as green plants perform photosynthesis in plastids that are thought to have their ancestry in cyanobacteria, acquired long ago via a process called endosymbiosis. These endosymbiotic cyanobacteria in eukaryotes then evolved and differentiated into specialized organelles such as chloroplasts, chromoplasts, etioplasts, and leucoplasts, collectively known as plastids.
Cyanobacteria are the first organisms known to have produced oxygen. By producing and releasing oxygen as a byproduct of photosynthesis, cyanobacteria are thought to have converted the early oxygen-poor, reducing atmosphere into an oxidizing one, causing the Great Oxidation Event and the "rusting of the Earth", which dramatically changed the composition of life forms on Earth.
The cyanobacteria Synechocystis and Cyanothece are important model organisms with potential applications in biotechnology for bioethanol production, food colorings, as a source of human and animal food, dietary supplements and raw materials. Cyanobacteria produce a range of toxins known as cyanotox
Document 2:::
The evolution of photosynthesis refers to the origin and subsequent evolution of photosynthesis, the process by which light energy is used to assemble sugars from carbon dioxide and a hydrogen and electron source such as water. The process of photosynthesis was discovered by Jan Ingenhousz, a Dutch-born British physician and scientist, first publishing about it in 1779.
The first photosynthetic organisms probably evolved early in the evolutionary history of life and most likely used reducing agents such as hydrogen rather than water. There are three major metabolic pathways by which photosynthesis is carried out: C3 photosynthesis, C4 photosynthesis, and CAM photosynthesis. C3 photosynthesis is the oldest and most common form. A C3 plant uses the Calvin cycle for the initial steps that incorporate CO2 into organic material. A C4 plant prefaces the Calvin cycle with reactions that incorporate CO2 into four-carbon compounds. A CAM plant uses crassulacean acid metabolism, an adaptation for photosynthesis in arid conditions. C4 and CAM plants have special adaptations that save water.
Origin
Available evidence from geobiological studies of Archean (>2500 Ma) sedimentary rocks indicates that life existed 3500 Ma. Fossils of what are thought to be filamentous photosynthetic organisms have been dated at 3.4 billion years old, consistent with recent studies of photosynthesis. Early photosynthetic systems, such as those from green and purple sulfur and green and purple nonsulfur bacteria, are thought to have been anoxygenic, using various molecules as electron donors. Green and purple sulfur bacteria are thought to have used hydrogen and hydrogen sulfide as electron and hydrogen donors. Green nonsulfur bacteria used various amino and other organic acids. Purple nonsulfur bacteria used a variety of nonspecific organic and inorganic molecules. It is suggested that photosynthesis likely originated at low-wavelength geothermal light from acidic hydrothermal vents, Zn-tetrapyrroles w
Document 3:::
In biochemistry, chemosynthesis is the biological conversion of one or more carbon-containing molecules (usually carbon dioxide or methane) and nutrients into organic matter using the oxidation of inorganic compounds (e.g., hydrogen gas, hydrogen sulfide) or ferrous ions as a source of energy, rather than sunlight, as in photosynthesis. Chemoautotrophs, organisms that obtain carbon from carbon dioxide through chemosynthesis, are phylogenetically diverse. Groups that include conspicuous or biogeochemically important taxa include the sulfur-oxidizing Gammaproteobacteria, the Campylobacterota, the Aquificota, the methanogenic archaea, and the neutrophilic iron-oxidizing bacteria.
Many microorganisms in dark regions of the oceans use chemosynthesis to produce biomass from single-carbon molecules. Two categories can be distinguished. In the rare sites where hydrogen molecules (H2) are available, the energy available from the reaction between CO2 and H2 (leading to production of methane, CH4) can be large enough to drive the production of biomass. Alternatively, in most oceanic environments, energy for chemosynthesis derives from reactions in which substances such as hydrogen sulfide or ammonia are oxidized. This may occur with or without the presence of oxygen.
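For reference, the CO2–H2 reaction referred to here is hydrogenotrophic methanogenesis; a standard balanced form (the stoichiometry is supplied here, not quoted from the excerpt) is

    CO2 + 4 H2 → CH4 + 2 H2O

and the energy it releases is what can be large enough, at hydrogen-rich sites, to drive biomass production.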
Many chemosynthetic microorganisms are consumed by other organisms in the ocean, and symbiotic associations between chemosynthesizers and respiring heterotrophs are quite common. Large populations of animals can be supported by chemosynthetic secondary production at hydrothermal vents, methane clathrates, cold seeps, whale falls, and isolated cave water.
It has been hypothesized that anaerobic chemosynthesis may support life below the surface of Mars, Jupiter's moon Europa, and other planets. Chemosynthesis may have also been the first type of metabolism that evolved on Earth, leading the way for cellular respiration and photosynthesis to develop later.
Hydrogen sulfide chemosynthesis process
Giant tube worms
Document 4:::
The PI (or photosynthesis-irradiance) curve is a graphical representation of the empirical relationship between solar irradiance and photosynthesis. A derivation of the Michaelis–Menten curve, it shows the generally positive correlation between light intensity and photosynthetic rate. It is a plot of photosynthetic rate as a function of light intensity (irradiance).
Introduction
The PI curve can be applied to terrestrial and marine reactions but is most commonly used to explain ocean-dwelling phytoplankton's photosynthetic response to changes in light intensity. Using this tool to approximate biological productivity is important because phytoplankton contribute ~50% of total global carbon fixation and are important suppliers to the marine food web.
Within the scientific community, the curve can be referred to as the PI, PE or Light Response Curve. While individual researchers may have their own preferences, all are readily acceptable for use in the literature. Regardless of nomenclature, the photosynthetic rate in question can be described in terms of carbon (C) fixed per unit per time. Since individuals vary in size, it is also useful to normalise C concentration to Chlorophyll a (an important photosynthetic pigment) to account for specific biomass.
History
As far back as 1905, marine researchers attempted to develop an equation to be used as the standard in establishing the relationship between solar irradiance and photosynthetic production. Several groups had relative success, but in 1976 a comparison study conducted by Alan Jassby and Trevor Platt, researchers at the Bedford Institute of Oceanography in Dartmouth, Nova Scotia, reached a conclusion that solidified the way in which a PI curve is developed. After evaluating the eight most-used equations, Jassby and Platt argued that the PI curve can be best approximated by a hyperbolic tangent function, at least until photoinhibition is reached.
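A minimal sketch of that hyperbolic-tangent form (Jassby–Platt style) is shown below; the function name and the parameter values are illustrative assumptions, not measurements from the text.

    import math

    def pi_curve(irradiance, p_max=10.0, alpha=0.05):
        # Photosynthetic rate as a function of irradiance, ignoring photoinhibition:
        # P = p_max * tanh(alpha * irradiance / p_max)
        return p_max * math.tanh(alpha * irradiance / p_max)

    for irradiance in (0, 50, 200, 1000):   # e.g. umol photons m^-2 s^-1
        print(irradiance, round(pi_curve(irradiance), 2))

At low irradiance the rate rises roughly linearly with slope alpha; at high irradiance it saturates near p_max, which is the qualitative shape the PI curve describes.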
Equations
There are two simple derivations of the equatio
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What produces almost one-half of the earth's oxygen through photosynthesis?
A. prokaryotes
B. protists
C. algae
D. arthropods
Answer:
|
|
sciq-1023
|
multiple_choice
|
Hydrogen peroxide will decompose over time to produce _______ gas.
|
[
"methane",
"water and oxygen",
"hydrogen and helium",
"water and carbon dioxide"
] |
B
|
Relevant Documents:
Document 0:::
Biogas is a gaseous renewable energy source produced from raw materials such as agricultural waste, manure, municipal waste, plant material, sewage, green waste, wastewater, and food waste. Biogas is produced by anaerobic digestion with anaerobic organisms or methanogens inside an anaerobic digester, biodigester or a bioreactor.
The gas composition is primarily methane (CH4) and carbon dioxide (CO2) and may have small amounts of hydrogen sulfide (H2S), moisture and siloxanes. The gases methane and hydrogen can be combusted or oxidized with oxygen. This energy release allows biogas to be used as a fuel; it can be used in fuel cells and for heating purposes, such as cooking. It can also be used in a gas engine to convert the energy in the gas into electricity and heat.
After removal of carbon dioxide and hydrogen sulfide it can be compressed in the same way as natural gas and used to power motor vehicles. In the United Kingdom, for example, biogas is estimated to have the potential to replace around 17% of vehicle fuel. It qualifies for renewable energy subsidies in some parts of the world. Biogas can be cleaned and upgraded to natural gas standards, when it becomes bio-methane. Biogas is considered to be a renewable resource because its production-and-use cycle is continuous, and it generates no net carbon dioxide. From a carbon perspective, as much carbon dioxide is absorbed from the atmosphere in the growth of the primary bio-resource as is released, when the material is ultimately converted to energy.
Production
Biogas is produced by microorganisms, such as methanogens and sulfate-reducing bacteria, performing anaerobic respiration. Biogas can refer to gas produced naturally and industrially.
Natural
In soil, methane is produced in anaerobic environments by methanogens, but is mostly consumed in aerobic zones by methanotrophs. Methane emissions result when the balance favors methanogens. Wetland soils are the main natural source of methane. Other sources include ocea
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Endothermic gas is a gas that inhibits or reverses oxidation on the surfaces it is in contact with. This gas is the product of incomplete combustion in a controlled environment. An example mixture is hydrogen gas (H2), nitrogen gas (N2), and carbon monoxide (CO). The hydrogen and carbon monoxide are reducing agents, so they work together to shield surfaces from oxidation.
Endothermic gas is often used as a carrier gas for gas carburizing and carbonitriding. An endothermic gas generator could be used to supply heat to form an endothermic reaction.
Synthesised in the catalytic retort(s) of endothermic generators, the gas in the endothermic atmosphere is combined with an additive gas including natural gas, propane (C3H8) or air and is then used to improve the surface chemistry of the work positioned in the furnace.
Purposes
There are two common purposes of the atmospheres in the heat treating industry:
Protect the processed material from surface reactions (chemically inert)
Allow surface of processed material to change (chemically reactive)
Principal components of an endothermic gas generator
Principal components of endothermic gas generators:
Heating chamber for supplying heat by electric heating elements or combustion,
Vertical cylindrical retorts,
Tiny, porous ceramic pieces that are saturated with nickel, which acts as a catalyst for the reaction,
Cooling heat exchanger in order to cool the products of the reaction as quickly as possible so that it reaches a particular temperature which stops any further reaction,
Control system that maintains a consistent reaction temperature and adjusts the gas ratio, providing the desired dew point.
Chemical composition
Chemistry of endothermic gas generators:
N2 (nitrogen) → 45.1% (volume)
CO (carbon monoxide) → 19.6% (volume)
CO2 (carbon dioxide) → 0.4% (volume)
H2 (hydrogen) → 34.6% (volume)
CH4 (methane) → 0.3% (volume)
Dew point → +20/+50
Gas ratio → 2.6:1
Applications
Document 3:::
Butane () or n-butane is an alkane with the formula C4H10. Butane is a highly flammable, colorless, easily liquefied gas that quickly vaporizes at room temperature and pressure. The name butane comes from the root but- (from butyric acid, named after the Greek word for butter) and the suffix -ane. It was discovered in crude petroleum in 1864 by Edmund Ronalds, who was the first to describe its properties, and commercialized by Walter O. Snelling in the early 1910s.
Butane is one of a group of liquefied petroleum gases (LP gases). The others include propane, propylene, butadiene, butylene, isobutylene, and mixtures thereof. Butane burns more cleanly than both gasoline and coal.
History
The first synthesis of butane was accidentally achieved by British chemist Edward Frankland in 1849 from ethyl iodide and zinc, but he had not realized that the ethyl radical dimerized and misidentified the substance.
The proper discoverer of butane called it "hydride of butyl", but already in the 1860s more names were in use: "butyl hydride", "hydride of tetryl" and "tetryl hydride", "diethyl" or "ethyl ethylide" and others. August Wilhelm von Hofmann in his 1866 systemic nomenclature proposed the name "quartane", and the modern name was introduced to English from German around 1874.
Butane did not have much practical use until the 1910s, when W. Snelling identified butane and propane as components in gasoline and found that, if they were cooled, they could be stored in a volume-reduced liquified state in pressurized containers.
Density
The density of butane is highly dependent on temperature and pressure in the reservoir. For example, the density of liquid butane is 571.8±1 kg/m3 (for pressures up to 2 MPa and temperature 27±0.2 °C), while it is 625.5±0.7 kg/m3 (for pressures up to 2 MPa and temperature −13±0.2 °C).
Isomers
Rotation about the central C−C bond produces two different conformations (trans and gauche) for n-butane.
Reactions
When oxyg
Document 4:::
Microbiology of decomposition is the study of all microorganisms involved in decomposition, the chemical and physical processes during which organic matter is broken down and reduced to its original elements.
Decomposition microbiology can be divided into two fields of interest, namely the decomposition of plant materials and the decomposition of cadavers and carcasses.
The decomposition of plant materials is commonly studied in order to understand the cycling of carbon within a given environment and to understand the subsequent impacts on soil quality. Plant material decomposition is also often referred to as composting. The decomposition of cadavers and carcasses has become an important field of study within forensic taphonomy.
Decomposition microbiology of plant materials
The breakdown of vegetation is highly dependent on oxygen and moisture levels. During decomposition, microorganisms require oxygen for their respiration. If anaerobic conditions dominate the decomposition environment, microbial activity will be slow and thus decomposition will be slow. Appropriate moisture levels are required for microorganisms to proliferate and to actively decompose organic matter. In arid environments, bacteria and fungi dry out and are unable to take part in decomposition. In wet environments, anaerobic conditions will develop and decomposition can also be considerably slowed down. Decomposing microorganisms also require the appropriate plant substrates in order to achieve good levels of decomposition. This usually translates to having appropriate carbon to nitrogen ratios (C:N). The ideal composting carbon-to-nitrogen ratio is thought to be approximately 30:1. As in any microbial process, the decomposition of plant litter by microorganisms will also be dependent on temperature. For example, leaves on the ground will not undergo decomposition during the winter months where snow cover occurs as temperatures are too low to sustain microbial activities.
Decomposition mi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Hydrogen peroxide will decompose over time to produce _______ gas.
A. methane
B. water and oxygen
C. hydrogen and helium
D. water and carbon dioxide
Answer:
|
|
sciq-301
|
multiple_choice
|
What is another name for composite volcanos?
|
[
"stratovolcanoes",
"seismic giants",
"fault lines",
"fjords"
] |
A
|
Relevant Documents:
Document 0:::
The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata. That is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events.
Correlating the rock record
At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods—for the particular geographic region or regions. The geologic record is in no one place entirely complete, for where geologic forces in one age provide a low-lying region accumulating deposits much like a layer cake, in the next they may have uplifted the region, and the same area is instead one that is weathering and being torn down by chemistry, wind, temperature, and water. This is to say that in a given location, the geologic record can be and is quite often interrupted as the ancient local environment was converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go deep, thoroughly support the law of superposition.
However using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale
Document 1:::
Maui Nui is a modern geologists' name given to a prehistoric Hawaiian island and the corresponding modern biogeographic region. Maui Nui is composed of four modern islands: Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe. Administratively, the four modern islands comprise Maui County (and a tiny part of Molokaʻi called Kalawao County). Long after the breakup of Maui Nui, the four modern islands retained plant and animal life similar to each other. Thus, Maui Nui is not only a prehistoric island but also a modern biogeographic region.
Geology
Maui Nui formed and broke up during the Pleistocene Epoch, which lasted from about 2.58 million to 11,700 years ago.
Maui Nui is built from seven shield volcanoes. The three oldest are Penguin Bank, West Molokaʻi, and East Molokaʻi, which probably range from slightly over to slightly less than 2 million years old. The four younger volcanoes are Lāna‘i, West Maui, Kaho‘olawe, and Haleakalā, which probably formed between 1.5 and 2 million years ago.
At its prime 1.2 million years ago, Maui Nui was , 50% larger than today's Hawaiʻi Island. The island of Maui Nui included four modern islands (Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe) and landmass west of Molokaʻi called Penguin Bank, which is now completely submerged.
Maui Nui broke up as rising sea levels flooded the connections between the volcanoes. The breakup was complex because global sea levels rose and fell intermittently during the Quaternary glaciation. About 600,000 years ago, the connection between Molokaʻi and the island of Lāna‘i/Maui/Kahoʻolawe became intermittent. About 400,000 years ago, the connection between Lāna‘i and Maui/Kahoʻolawe also became intermittent. The connection between Maui and Kahoʻolawe was permanently broken between 200,000 and 150,000 years ago. Maui, Lāna‘i, and Molokaʻi were connected intermittently thereafter, most recently about 18,000 years ago during the Last Glacial Maximum.
Today, the sea floor between these four islands is relatively shallow
Document 2:::
A kīpuka is an area of land surrounded by one or more younger lava flows. A kīpuka forms when lava flows on either side of a hill, ridge, or older lava dome as it moves downslope or spreads from its source. Older and more weathered than their surroundings, kīpukas often appear to be like islands within a sea of lava flows. They are often covered with soil and late ecological successional vegetation that provide visual contrast as well as habitat for animals in an otherwise inhospitable environment. In volcanic landscapes, kīpukas play an important role as biological reservoirs or refugia for plants and animals, from which the covered land can be recolonized.
Etymology
Kīpuka, along with aā and pāhoehoe, are Hawaiian words related to volcanology that have entered the lexicon of geology. Descriptive proverbs and poetical sayings in Hawaiian oral tradition also use the word, in an allusive sense, to mean a place where life or culture endures, regardless of any encroachment or interference. By extension, from the appearance of island "patches" within a highly contrasted background, any similarly noticeable variation or change of form, such as an opening in a forest, or a clear place in a congested setting, may be colloquially called kīpuka.
Significance to research
Kīpuka provide useful study sites for ecological research because they facilitate replication; multiple kīpuka in a system (isolated by the same lava flow) will tend to have uniform substrate age and successional characteristics, but are often isolated enough from their neighbors to provide meaningful, comparable differences in size, invasion, etc. They are also receptive to experimental treatments. Kīpuka along Saddle Road on Hawaii have served as the natural laboratory for a variety of studies, examining ecological principles like island biogeography, food web control, and biotic resistance to invasiveness. In addition, Drosophila silvestris populations inhabit kīpukas, making kīpukas useful for unders
Document 3:::
Darwin Mounds is a large field of undersea sand mounds situated off the north west coast of Scotland that were first discovered in May 1998. They provide a unique habitat for ancient deep water coral reefs and were found using remote sensing techniques during surveys funded by the oil industry and steered by the joint industry and United Kingdom government group the Atlantic Frontier Environment Network (AFEN) (Masson and Jacobs 1998). The mounds were named after the research vessel, itself named for the eminent naturalist and evolutionary theorist Charles Darwin.
The mounds are about below the surface of the North Atlantic ocean, approximately north-west of Cape Wrath, the north-west tip of mainland Scotland. There are hundreds of mounds in the field, which in total cover approximately . Individual mounds are typically circular, up to high and wide. Most of the mounds are also distinguished by the presence of an additional feature referred to as a 'tail'. The tails are of a variable extent and may merge with others, but are generally a teardrop shape and are orientated south-west of the mound. The mound-tail feature of the Darwin Mounds is apparently unique globally.
Composition
The mounds are mostly sand, currently interpreted as "sand volcanoes". These features are caused when fluidised sand "de-waters" and the fluid bubbles up through the sand, pushing the sediment up into a cone shape. Sand volcanoes are common in the Devonian fossil record in UK, and in seismically active areas of the planet. In this case, tectonic activity is unlikely; some form of slumping on the south-west side of the undersea (Wyville-Thomson) Ridge being a more likely cause. The tops of the mounds have living stands of Lophelia and blocky rubble (interpreted as coral debris). The mounds provide one of the largest known northerly cold-water habitats for coral species. The mounds are also unusual in that Lophelia pertusa, a cold water coral, appears to be growing on sand rather than a
Document 4:::
In structural geology, a suture is a joining together along a major fault zone, of separate terranes, tectonic units that have different plate tectonic, metamorphic and paleogeographic histories. The suture is often represented on the surface by an orogen or mountain range.
Overview
In plate tectonics, sutures are the remains of subduction zones, and the terranes that are joined together are interpreted as fragments of different palaeocontinents or tectonic plates.
Outcrops of sutures can vary in width from a few hundred meters to a couple of kilometers. They can be networks of mylonitic shear zones or brittle fault zones, but are usually both. Sutures are usually associated with igneous intrusions and tectonic lenses with varying kinds of lithologies from plutonic rocks to ophiolitic fragments.
An example from Great Britain is the Iapetus Suture which, though now concealed beneath younger rocks, has been determined by geophysical means to run along a line roughly parallel with the Anglo-Scottish border and represents the joint between the former continent of Laurentia to the north and the former micro-continent of Avalonia to the south. Avalonia is in fact a plain which dips steeply northwestwards through the crust, underthrusting Laurentia.
Paleontological use
When used in paleontology, suture can also refer to fossil exoskeletons, as in the suture line, a division on a trilobite between the free cheek and the fixed cheek; this suture line allowed the trilobite to perform ecdysis (the shedding of its skin).
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is another name for composite volcanos?
A. stratovolcanoes
B. seismic giants
C. fault lines
D. fjords
Answer:
|
|
sciq-2134
|
multiple_choice
|
Deposition refers to when a gas changes to what state?
|
[
"plasma",
"half liquid half gas",
"liquid",
"solid"
] |
D
|
Relevant Documents:
Document 0:::
In chemistry, deposition occurs when molecules settle out of a solution.
Deposition can be viewed as a reverse process to dissolution or particle re-entrainment.
See also
Atomic layer deposition
Chemical vapor deposition
Deposition (physics)
Fouling
Physical vapor deposition
Thin-film deposition
Fused filament fabrication
Document 1:::
Deposition is the phase transition in which gas transforms into solid without passing through the liquid phase. Deposition is a thermodynamic process. The reverse of deposition is sublimation and hence sometimes deposition is called desublimation.
Applications
Examples
One example of deposition is the process by which, in sub-freezing air, water vapour changes directly to ice without first becoming a liquid. This is how frost and hoar frost form on the ground or other surfaces. Another example is when frost forms on a leaf. For deposition to occur, thermal energy must be removed from a gas. When the air becomes cold enough, water vapour in the air surrounding the leaf loses enough thermal energy to change into a solid. Even though the air temperature may be below the dew point, the water vapour may not be able to condense spontaneously if there is no way to remove the latent heat. When the leaf is introduced, the supercooled water vapour immediately begins to condense, but by this point is already past the freezing point. This causes the water vapour to change directly into a solid.
Another example is the soot that is deposited on the walls of chimneys. Soot molecules rise from the fire in a hot and gaseous state. When they come into contact with the walls they cool, and change to the solid state, without formation of the liquid state. The process is made use of industrially in combustion chemical vapour deposition.
Industrial applications
There is an industrial coatings process, known as evaporative deposition, whereby a solid material is heated to the gaseous state in a low-pressure chamber, the gas molecules travel across the chamber space and then deposit to the solid state on a target surface, forming a smooth and thin layer on the target surface. Again, the molecules do not go through an intermediate liquid state when going from the gas to the solid. See also physical vapor deposition, which is a class of processes used to deposit thin films of various
Document 2:::
Saturated surface dry (SSD) is defined as the condition of an aggregate in which the surfaces of the particles are "dry" (i.e., surface adsorption would no longer take place), but the inter-particle voids are saturated with water. In this condition aggregates will not affect the free water content of a composite material.
The water absorption by mass (Am) is defined in terms of the mass of the saturated-surface-dry sample (Mssd) and the mass of the oven-dried test sample (Mdry) by Am = (Mssd − Mdry) / Mdry.
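A minimal sketch of that calculation in Python (assuming the definition above; the sample masses are invented for illustration):

    def water_absorption_by_mass(m_ssd, m_dry):
        # Absorption expressed as a fraction of the oven-dry mass.
        return (m_ssd - m_dry) / m_dry

    print(water_absorption_by_mass(m_ssd=1030.0, m_dry=1000.0))  # 0.03, i.e. 3 %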
See also
Construction aggregate
Document 3:::
At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm.
For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of sorption process the isotherms cannot be determined explicitly by calculation, but must be recorded experimentally for each product.
The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system.
BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption and finally leading to capillary condensation, explaining these materials' high moisture capacity at high relative humidity.
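For orientation, the BET isotherm is commonly written (standard form, supplied here rather than quoted from the excerpt) as

    v = v_m * c * (p/p0) / [ (1 - p/p0) * (1 + (c - 1) * p/p0) ]

where v is the amount adsorbed, v_m the monolayer capacity, c the BET constant, and p/p0 the relative pressure (the water activity, in the moisture context above).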
Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the
Document 4:::
In chemistry, absorption is a physical or chemical phenomenon or a process in which atoms, molecules or ions enter some bulk phase – liquid or solid material. This is a different process from adsorption, since molecules undergoing absorption are taken up by the volume, not by the surface (as in the case for adsorption).
A more common definition is that "Absorption is a chemical or physical phenomenon in which the molecules, atoms and ions of the substance getting absorbed enter into the bulk phase (gas, liquid or solid) of the material in which it is taken up."
A more general term is sorption, which covers absorption, adsorption, and ion exchange. Absorption is a condition in which something takes in another substance.
In many processes important in technology, the chemical absorption is used in place of the physical process, e.g., absorption of carbon dioxide by sodium hydroxide – such acid-base processes do not follow the Nernst partition law (see: solubility).
For some examples of this effect, see liquid-liquid extraction. It is possible to extract a solute from one liquid phase to another without a chemical reaction. Examples of such solutes are noble gases and osmium tetroxide.
The process of absorption means that a substance captures and transforms energy. An absorbent distributes the material it captures throughout its whole volume, whereas an adsorbent holds it only at its surface.
The process by which a gas or liquid penetrates into the bulk of an absorbent is commonly known as absorption.
Equation
If absorption is a physical process not accompanied by any other physical or chemical process, it usually follows the Nernst distribution law:
"the ratio of concentrations of some solute species in two bulk phases when it is equilibrium and in contact is constant for a given solute and bulk phases":
The value of constant KN depends on temperature and is called partition coefficient. This equation is valid if concentrations are not too large and if the species "x"
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Deposition refers to when a gas changes to what state?
A. plasma
B. half liquid half gas
C. liquid
D. solid
Answer:
|
|
sciq-10246
|
multiple_choice
|
Graphite is a form of elemental carbon; what is another form?
|
[
"magnite",
"diamond",
"iron",
"carbonite"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about that subject is then a subset of this set; the set of
Document 2:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 3:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 4:::
Further Mathematics is the title given to a number of advanced secondary mathematics courses. The term "Higher and Further Mathematics", and the term "Advanced Level Mathematics", may also refer to any of several advanced mathematics courses at many institutions.
In the United Kingdom, Further Mathematics describes a course studied in addition to the standard mathematics AS-Level and A-Level courses. In the state of Victoria in Australia, it describes a course delivered as part of the Victorian Certificate of Education (see § Australia (Victoria) for a more detailed explanation). Globally, it describes a course studied in addition to GCE AS-Level and A-Level Mathematics, or one which is delivered as part of the International Baccalaureate Diploma.
In other words, more mathematics can also be referred to as part of advanced mathematics, or advanced level math.
United Kingdom
Background
A qualification in Further Mathematics involves studying both pure and applied modules. Whilst the pure modules (formerly known as Pure 4–6 or Core 4–6, now known as Further Pure 1–3, where 4 exists for the AQA board) build on knowledge from the core mathematics modules, the applied modules may start from first principles.
The structure of the qualification varies between exam boards.
With regard to Mathematics degrees, most universities do not require Further Mathematics, and may incorporate foundation math modules or offer "catch-up" classes covering any additional content. Exceptions are the University of Warwick, the University of Cambridge which requires Further Mathematics to at least AS level; University College London requires or recommends an A2 in Further Maths for its maths courses; Imperial College requires an A in A level Further Maths, while other universities may recommend it or may promise lower offers in return. Some schools and colleges may not offer Further mathematics, but online resources are available
Although the subject has about 60% of its cohort obtainin
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Graphite is a form of elemental carbon; what is another form?
A. magnite
B. diamond
C. iron
D. carbonite
Answer:
|
|
sciq-8654
|
multiple_choice
|
The sum of the superscripts in an electron configuration is equal to the number of electrons in that atom, which is in turn equal to what number?
|
[
"shell number",
"orbital number",
"atomic number",
"metallic number"
] |
C
|
Relevant Documents:
Document 0:::
In atomic physics and quantum chemistry, the electron configuration is the distribution of electrons of an atom or molecule (or other physical structure) in atomic or molecular orbitals. For example, the electron configuration of the neon atom is 1s2 2s2 2p6, meaning that the 1s, 2s and 2p subshells are occupied by 2, 2 and 6 electrons respectively.
Electronic configurations describe each electron as moving independently in an orbital, in an average field created by all other orbitals. Mathematically, configurations are described by Slater determinants or configuration state functions.
According to the laws of quantum mechanics, for systems with only one electron, a level of energy is associated with each electron configuration and in certain conditions, electrons are able to move from one configuration to another by the emission or absorption of a quantum of energy, in the form of a photon.
Knowledge of the electron configuration of different atoms is useful in understanding the structure of the periodic table of elements. This is also useful for describing the chemical bonds that hold atoms together, and for understanding the chemical formulas of compounds and the geometries of molecules. In bulk materials, this same idea helps explain the peculiar properties of lasers and semiconductors.
Shells and subshells
Electron configuration was first conceived under the Bohr model of the atom, and it is still common to speak of shells and subshells despite the advances in understanding of the quantum-mechanical nature of electrons.
An electron shell is the set of allowed states that share the same principal quantum number, n (the number before the letter in the orbital label), that electrons may occupy. An atom's nth electron shell can accommodate 2n2 electrons. For example, the first shell can accommodate 2 electrons, the second shell 8 electrons, the third shell 18 electrons and so on. The factor of two arises because the allowed states are doubled due to electron spin—each
Document 1:::
In quantum mechanics, the principal quantum number (symbolized n) is one of four quantum numbers assigned to each electron in an atom to describe that electron's state. Its values are natural numbers (from 1) making it a discrete variable.
Apart from the principal quantum number, the other quantum numbers for bound electrons are the azimuthal quantum number ℓ, the magnetic quantum number ml, and the spin quantum number s.
Overview and history
As n increases, the electron is also at a higher energy and is, therefore, less tightly bound to the nucleus. For higher n the electron is farther from the nucleus, on average. For each value of n there are n accepted ℓ (azimuthal) values ranging from 0 to n − 1 inclusively, hence higher-n electron states are more numerous. Accounting for two states of spin, each n-shell can accommodate up to 2n2 electrons.
In a simplistic one-electron model described below, the total energy of an electron is a negative inverse quadratic function of the principal quantum number n, leading to degenerate energy levels for each n > 1. In more complex systems—those having forces other than the nucleus–electron Coulomb force—these levels split. For multielectron atoms this splitting results in "subshells" parametrized by ℓ. Description of energy levels based on n alone gradually becomes inadequate for atomic numbers starting from 5 (boron) and fails completely on potassium (Z = 19) and afterwards.
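For orientation, in the hydrogen atom this one-electron dependence takes the familiar Bohr form

    En = −13.6 eV / n^2

so the electron is bound less and less tightly as n increases, consistent with the description above.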
The principal quantum number was first created for use in the semiclassical Bohr model of the atom, distinguishing between different energy levels. With the development of modern quantum mechanics, the simple Bohr model was replaced with a more complex theory of atomic orbitals. However, the modern theory still requires the principal quantum number.
Derivation
There is a set of quantum numbers associated with the energy states of the atom. The four quantum numbers n, ℓ, m, and s specify the complete and unique quantum state of a single electron in a
Document 2:::
This page shows the electron configurations of the neutral gaseous atoms in their ground states. For each atom the subshells are given first in concise form, then with all subshells written out, followed by the number of electrons per shell. Electron configurations of elements beyond hassium (element 108) have never been measured; predictions are used below.
As an approximate rule, electron configurations are given by the Aufbau principle and the Madelung rule. However, there are numerous exceptions; for example, the lightest exception is chromium, which would be predicted to have the configuration 1s2 2s2 2p6 3s2 3p6 3d4 4s2, written as [Ar] 3d4 4s2, but whose actual configuration given in the table below is [Ar] 3d5 4s1.
Note that these electron configurations are given for neutral atoms in the gas phase, which are not the same as the electron configurations for the same atoms in chemical environments. In many cases, multiple configurations are within a small range of energies and the irregularities shown below do not necessarily have a clear relation to chemical behaviour. For the undiscovered eighth-row elements, mixing of configurations is expected to be very important, and sometimes the result can no longer be well-described by a single configuration.
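As a rough illustration of the Aufbau principle with Madelung ordering, the sketch below (Python; the function name is an assumption for illustration) generates the configuration predicted by that rule for a given atomic number. It deliberately ignores the exceptions discussed above, such as chromium.

    def madelung_configuration(atomic_number):
        # Subshell letters indexed by the azimuthal quantum number l.
        letters = "spdfg"
        # Enumerate subshells (n, l) and order them by (n + l), ties broken by lower n.
        subshells = sorted(((n, l) for n in range(1, 9) for l in range(n)),
                           key=lambda nl: (nl[0] + nl[1], nl[0]))
        remaining = atomic_number
        parts = []
        for n, l in subshells:
            if remaining == 0:
                break
            capacity = 2 * (2 * l + 1)       # each subshell holds 2(2l + 1) electrons
            filled = min(capacity, remaining)
            parts.append(f"{n}{letters[l]}{filled}")
            remaining -= filled
        return " ".join(parts)

    print(madelung_configuration(10))   # neon: 1s2 2s2 2p6
    print(madelung_configuration(24))   # chromium as predicted by the rule
                                        # (the measured configuration differs, as noted above)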
See also
Extended periodic table#Electron configurations – Predictions for undiscovered elements 119–173 and 184
Document 3:::
In chemistry and atomic physics, an electron shell may be thought of as an orbit that electrons follow around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called the "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on farther and farther from the nucleus. The shells correspond to the principal quantum numbers (n = 1, 2, 3, 4 ...) or are labeled alphabetically with the letters used in X-ray notation (K, L, M, ...). A useful guide when understanding electron shells in atoms is to note that each row on the conventional periodic table of elements represents an electron shell.
Each shell can contain only a fixed number of electrons: the first shell can hold up to two electrons, the second shell can hold up to eight (2 + 6) electrons, the third shell can hold up to 18 (2 + 6 + 10) and so on. The general formula is that the nth shell can in principle hold up to 2(n2) electrons. For an explanation of why electrons exist in these shells, see electron configuration.
Each shell consists of one or more subshells, and each subshell consists of one or more atomic orbitals.
History
In 1913 Bohr proposed a model of the atom, giving the arrangement of electrons in their sequential orbits. At that time, Bohr allowed the capacity of the inner orbit of the atom to increase to eight electrons as the atoms got larger, and "in the scheme given below the number of electrons in this [outer] ring is arbitrary put equal to the normal valency of the corresponding element." Using these and other constraints, he proposed configurations that are in accord with those now known only for the first six elements. "From the above we are led to the following possible scheme for the arrangement of the electrons in light atoms:"
The shell terminology comes from Arnold Sommerfeld's modification of the 1913 Bohr model. During this period Bohr was working with Walther Kossel, whose papers in 1914 and in 1916 called the or
Document 4:::
A quantum mechanical system or particle that is bound—that is, confined spatially—can only take on certain discrete values of energy, called energy levels. This contrasts with classical particles, which can have any amount of energy. The term is commonly used for the energy levels of the electrons in atoms, ions, or molecules, which are bound by the electric field of the nucleus, but can also refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized.
In chemistry and atomic physics, an electron shell, or principal energy level, may be thought of as the orbit of one or more electrons around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on farther and farther from the nucleus. The shells correspond with the principal quantum numbers (n = 1, 2, 3, 4 ...) or are labeled alphabetically with letters used in the X-ray notation (K, L, M, N...).
Each shell can contain only a fixed number of electrons: The first shell can hold up to two electrons, the second shell can hold up to eight (2 + 6) electrons, the third shell can hold up to 18 (2 + 6 + 10) and so on. The general formula is that the nth shell can in principle hold up to 2n2 electrons. Since electrons are electrically attracted to the nucleus, an atom's electrons will generally occupy outer shells only if the more inner shells have already been completely filled by other electrons. However, this is not a strict requirement: atoms may have two or even three incomplete outer shells. (See Madelung rule for more details.) For an explanation of why electrons exist in these shells see electron configuration.
If the potential energy is set to zero at infinite distance from the atomic nucleus or molecule, the usual convention, then bound electron states have negative pot
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The sum of the superscripts in an electron configuration is equal to the number of electrons in that atom, which is in turn equal to what number?
A. shell number
B. orbital number
C. atomic number
D. metallic number
Answer:
|
|
sciq-11473
|
multiple_choice
|
Energy transferred solely due to a temperature difference is called?
|
[
"chemical energy",
"humidity",
"magnetic energy",
"heat"
] |
D
|
Relevant Documents:
Document 0:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include :
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering : Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 1:::
Heat transfer is a discipline of thermal engineering that concerns the generation, use, conversion, and exchange of thermal energy (heat) between physical systems. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Engineers also consider the transfer of mass of differing chemical species (mass transfer in the form of advection), either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system.
Heat conduction, also called diffusion, is the direct microscopic exchanges of kinetic energy of particles (such as molecules) or quasiparticles (such as lattice waves) through the boundary between two systems. When an object is at a different temperature from another body or its surroundings, heat flows so that the body and the surroundings reach the same temperature, at which point they are in thermal equilibrium. Such spontaneous heat transfer always occurs from a region of high temperature to another region of lower temperature, as described in the second law of thermodynamics.
Heat convection occurs when the bulk flow of a fluid (gas or liquid) carries its heat through the fluid. All convective processes also move heat partly by diffusion, as well. The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". The former process is often called "forced convection." In this case, the fluid is forced to flow by use of a pump, fan, or other mechanical means.
Thermal radiation occurs through a vacuum or any transparent medium (solid or fluid or gas). It is the transfer of energy by means of photons or electromagnetic waves governed by the same laws.
Overview
Heat
Document 2:::
Energy transformation, also known as energy conversion, is the process of changing energy from one form to another. In physics, energy is a quantity that provides the capacity to perform work (e.g. lifting an object) or to provide heat. In addition to being converted, according to the law of conservation of energy, energy is transferable to a different location or object, but it cannot be created or destroyed.
The energy in many of its forms may be used in natural processes, or to provide some service to society such as heating, refrigeration, lighting or performing mechanical work to operate machines. For example, to heat a home, the furnace burns fuel, whose chemical potential energy is converted into thermal energy, which is then transferred to the home's air to raise its temperature.
Limitations in the conversion of thermal energy
Conversions to thermal energy from other forms of energy may occur with 100% efficiency. Conversion among non-thermal forms of energy may occur with fairly high efficiency, though there is always some energy dissipated thermally due to friction and similar processes. Sometimes the efficiency is close to 100%, such as when potential energy is converted to kinetic energy as an object falls in a vacuum. This also applies to the opposite case; for example, an object in an elliptical orbit around another body converts its kinetic energy (speed) into gravitational potential energy (distance from the other object) as it moves away from its parent body. When it reaches the furthest point, it will reverse the process, accelerating and converting potential energy into kinetic. Since space is a near-vacuum, this process has close to 100% efficiency.
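As a rough numerical check of the near-100% potential-to-kinetic conversion described above, the short Python sketch below drops a hypothetical 2 kg object through 10 m in vacuum; the mass, height and value of g are illustrative only.
import math
m, g, h = 2.0, 9.81, 10.0            # hypothetical mass (kg), gravitational acceleration (m/s^2), drop height (m)
potential_energy = m * g * h         # gravitational potential energy before the fall
speed = math.sqrt(2 * g * h)         # speed after falling h in vacuum
kinetic_energy = 0.5 * m * speed**2  # kinetic energy at the bottom of the fall
# Both values are 196.2 J, illustrating the essentially lossless conversion.
print(potential_energy, kinetic_energy)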
Thermal energy is unique because in most cases it cannot be fully converted to other forms of energy. Only a difference in the density of thermal/heat energy (temperature) can be used to perform work, and the efficiency of this conversion will be (much) less than 100%. This is because t
Document 3:::
In thermodynamics, heat is the thermal energy transferred between systems due to a temperature difference. In colloquial use, heat sometimes refers to thermal energy itself. Thermal energy is the kinetic energy of vibrating and colliding atoms in a substance.
An example of formal vs. informal usage is a metal bar "conducting heat" from its hot end to its cold end: if the metal bar is considered a thermodynamic system, then the energy flowing within the metal bar is called internal energy, not heat. The hot metal bar is also transferring heat to its surroundings, a correct statement for both the strict and loose meanings of heat. Another example of informal usage is the term heat content, used despite the fact that physics defines heat as energy transfer. More accurately, it is thermal energy that is contained in the system or body, as it is stored in the microscopic degrees of freedom of the modes of vibration.
Heat is energy in transfer to or from a thermodynamic system, by a mechanism that involves the microscopic atomic modes of motion or the corresponding macroscopic properties. This descriptive characterization excludes the transfers of energy by thermodynamic work or mass transfer. Defined quantitatively, the heat involved in a process is the difference in internal energy between the final and initial states of a system, minus the work done in the process. This is the formulation of the first law of thermodynamics.
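Written symbolically, and assuming the common convention in which W denotes the work done on the system by its surroundings (the excerpt does not fix a sign convention), the quantitative statement above reads
Q = \Delta U - W, equivalently \Delta U = Q + W,
where \Delta U is the difference in internal energy between the final and initial states of the system.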
The measurement of energy transferred as heat is called calorimetry, performed by measuring its effect on the states of interacting bodies. For example, heat can be measured by the amount of ice melted, or by change in temperature of a body in the surroundings of the system.
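A minimal sketch of the two estimates just mentioned follows; the latent heat of fusion of ice and the specific heat of water are standard constants, while the masses and the temperature rise are hypothetical.
# Calorimetry: infer heat from its measurable effects on interacting bodies.
LATENT_HEAT_FUSION_ICE = 334e3   # J/kg, standard value
SPECIFIC_HEAT_WATER = 4186.0     # J/(kg*K), standard value
# (1) Heat estimated from the amount of ice melted at 0 degrees C
mass_ice_melted = 0.05                                        # kg, hypothetical
q_from_ice = mass_ice_melted * LATENT_HEAT_FUSION_ICE         # ~16.7 kJ
# (2) Heat estimated from the temperature rise of a surrounding water bath
mass_water, delta_T = 1.0, 4.0                                # kg and K, hypothetical
q_from_warming = mass_water * SPECIFIC_HEAT_WATER * delta_T   # ~16.7 kJ
print(q_from_ice, q_from_warming)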
In the International System of Units (SI) the unit of measurement for heat, as a form of energy, is the joule (J).
Notation and units
As a form of energy, heat has the unit joule (J) in the International Sy
Document 4:::
Dielectric heating, also known as electronic heating, radio frequency heating, and high-frequency heating, is the process in which a radio frequency (RF) alternating electric field, or radio wave or microwave electromagnetic radiation heats a dielectric material. At higher frequencies, this heating is caused by molecular dipole rotation within the dielectric.
Mechanism
Molecular rotation occurs in materials containing polar molecules having an electrical dipole moment, with the consequence that they will align themselves in an electromagnetic field. If the field is oscillating, as it is in an electromagnetic wave or in a rapidly oscillating electric field, these molecules rotate continuously by aligning with it. This is called dipole rotation, or dipolar polarisation. As the field alternates, the molecules reverse direction. Rotating molecules push, pull, and collide with other molecules (through electrical forces), distributing the energy to adjacent molecules and atoms in the material. The process of energy transfer from the source to the sample is a form of radiative heating.
Temperature is related to the average kinetic energy (energy of motion) of the atoms or molecules in a material, so agitating the molecules in this way increases the temperature of the material. Thus, dipole rotation is a mechanism by which energy in the form of electromagnetic radiation can raise the temperature of an object. There are also many other mechanisms by which this conversion occurs.
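For an ideal monatomic gas this relation has a standard quantitative form (a kinetic-theory result not quoted in the excerpt, given here only to make the temperature–kinetic-energy link concrete):
\langle E_k \rangle = \tfrac{3}{2} k_B T,
where k_B is the Boltzmann constant; more generally, each quadratic degree of freedom contributes \tfrac{1}{2} k_B T on average.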
Dipole rotation is the mechanism normally referred to as dielectric heating, and is most widely observable in the microwave oven where it operates most effectively on liquid water, and also, but much less so, on fats and sugars. This is because fats and sugar molecules are far less polar than water molecules, and thus less affected by the forces generated by the alternating electromagnetic fields. Outside of cooking, the effect can be used generally to heat solids, liquids, or gases, provided th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Energy transferred solely due to a temperature difference is called?
A. chemical energy
B. humidity
C. magnetic energy
D. heat
Answer:
|
|
sciq-8337
|
multiple_choice
|
Which instruments are used to measure the angle of the slope of a volcano?
|
[
"altimeter",
"compass",
"calipers",
"tiltmeters"
] |
D
|
Relevant Documents:
Document 0:::
A mathematical instrument is a tool or device used in the study or practice of mathematics. In geometry, construction of various proofs was done using only a compass and straightedge; arguments in these proofs relied only on idealized properties of these instruments and literal construction was regarded as only an approximation. In applied mathematics, mathematical instruments were used for measuring angles and distances, in astronomy, navigation, surveying and in the measurement of time.
Overview
Instruments such as the astrolabe, the quadrant, and others were used to measure and accurately record the relative positions and movements of planets and other celestial objects. The sextant and other related instruments were essential for navigation at sea.
Most instruments are used within the field of geometry, including the ruler, dividers, protractor, set square, compass, ellipsograph, T-square and opisometer. Others are used in arithmetic (for example the abacus, slide rule and calculator) or in algebra (the integraph). In astronomy, it has been suggested that the pyramids (along with Stonehenge) were actually instruments used for tracking the stars over long periods or for timing the annual planting seasons.
In schools
The Oxford Set of Mathematical Instruments is a set of instruments used by generations of school children in the United Kingdom and around the world in mathematics and geometry lessons. It includes two set squares, a 180° protractor, a 15 cm ruler, a metal compass, a 9 cm pencil, a pencil sharpener, an eraser and a 10mm stencil.
See also
The Construction and Principal Uses of Mathematical Instruments
Dividing engine
Measuring instrument
Planimeter
Integraph
Document 1:::
The grade (also called slope, incline, gradient, mainfall, pitch or rise) of a physical feature, landform or constructed line refers to the tangent of the angle of that surface to the horizontal. It is a special case of the slope, where zero indicates horizontality. A larger number indicates higher or steeper degree of "tilt". Often slope is calculated as a ratio of "rise" to "run", or as a fraction ("rise over run") in which run is the horizontal distance (not the distance along the slope) and rise is the vertical distance.
Slopes of existing physical features such as canyons and hillsides, stream and river banks and beds are often described as grades, but typically grades are used for human-made surfaces such as roads, landscape grading, roof pitches, railroads, aqueducts, and pedestrian or bicycle routes. The grade may refer to the longitudinal slope or the perpendicular cross slope.
Nomenclature
There are several ways to express slope:
as an angle of inclination to the horizontal. (This is the angle opposite the "rise" side of a triangle with a right angle between vertical rise and horizontal run.)
as a percentage, the formula for which is 100 × (rise / run), which is equivalent to the tangent of the angle of inclination times 100. In Europe and the U.S. percentage "grade" is the most commonly used figure for describing slopes.
as a per mille figure (‰), the formula for which is 1000 × (rise / run), which could also be expressed as the tangent of the angle of inclination times 1000. This is commonly used in Europe to denote the incline of a railway. It is sometimes written as mm/m instead of the ‰ symbol.
as a ratio of one part rise to so many parts run. For example, a slope that has a rise of 5 feet for every 1000 feet of run would have a slope ratio of 1 in 200. (The word "in" is normally used rather than the mathematical ratio notation of "1:200".) This is generally the method used to describe railway grades in Australia and the UK. It is used for roads in Hong Kong, and was used for roa
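The Python sketch below computes one and the same slope in each of the notations listed above; the 5 m rise over a 100 m horizontal run is a made-up example.
import math
rise, run = 5.0, 100.0                              # hypothetical vertical and horizontal distances, m
grade_percent = 100 * rise / run                    # 5.0 %
grade_permille = 1000 * rise / run                  # 50 per mille
angle_deg = math.degrees(math.atan(rise / run))     # ~2.86 degrees of inclination
ratio = run / rise                                  # "1 in 20"
print(f"{grade_percent:.1f} % | {grade_permille:.0f} per mille | {angle_deg:.2f} deg | 1 in {ratio:.0f}")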
Document 2:::
In surveying, a gyrotheodolite (also: surveying gyro) is an instrument composed of a gyrocompass mounted to a theodolite. It is used to determine the orientation of true north. It is the main instrument for orientation in mine surveying and in tunnel engineering, where astronomical star sights are not visible and GPS does not work.
History
In 1852, the French physicist Léon Foucault discovered that a gyro with two degrees of freedom points north. This principle was adapted by Max Schuler in 1921 to build the first surveying gyro. In 1949, the gyro-theodolite – at that time called a "meridian pointer" or "meridian indicator" – was first used by the Clausthal Mining Academy underground. Several years later it was improved with the addition of autocollimation telescopes. In 1960, the Fennel Kassel company produced the first of the KT1 series of gyro-theodolites. Fennel Kassel and others later produced gyro attachments that can be mounted on normal theodolites.
Operation
A gyroscope is mounted in a sphere, lined with Mu-metal to reduce magnetic influence, connected by a spindle to the vertical axis of the theodolite. The battery-powered gyro wheel is rotated at 20,000 rpm or more, until it acts as a north-seeking gyroscope. A separate optical system within the attachment permits the operator to rotate the theodolite and thereby bring a zero mark on the attachment into coincidence with the gyroscope spin axis. By tracking the spin axis as it oscillates about the meridian, a record of the azimuth of a series of the extreme stationary points of that oscillation may be determined by reading the theodolite azimuth circle. A midpoint can later be computed from these records that represents a refined estimate of the meridian. Careful setup and repeated observations can give an estimate that is within about 10 arc seconds of the true meridian. This estimate of the meridian contains errors due to the zero torque of the suspension not being aligned precisely with the true mer
Document 3:::
The Ramsden surveying instruments are those constructed by Jesse Ramsden and used in high precision geodetic surveys carried out in the period 1784 to 1853. This includes the five great theodolites—great in name, great in size and great in accuracy—used in surveys of Britain and other parts of the world. Ramsden also provided the equipment used in the measurement of the many base lines of these surveys and also the zenith telescope used in latitude determinations.
The great theodolites
A total of eight such instruments were manufactured by Ramsden and others for use in Britain, India and Switzerland.
Ramsden himself constructed three theodolites and a further two were completed to his design by Mathew Berge, his son-in-law and business successor, after Ramsden's death in 1805. Of the other instruments one was constructed by William Cary and the other two by the firm of Troughton and Simms.
The Royal Society theodolite
In 1783 the Royal Society of London reacted to (unfounded) French criticism of Greenwich Observatory by seeking Royal assent to undertake a high precision geodetic survey, the Anglo-French Survey (1784–1790), between Greenwich and the established French survey stations on the other side of the English Channel. Approval having been granted, General William Roy agreed to undertake the work and he immediately approached Ramsden to commission new instruments. Three years later the "great" theodolite was delivered after a delay attributable to Ramsden's tardiness, workshop accidents and his predilection for continuous refinement—"this won't do, we must have at it again". The instrument was paid for by the Crown and the King immediately presented it to the Royal Society; for this reason the theodolite is designated as the Royal Society theodolite, or Ramsden RS in short.
There is a complete description of this theodolite in the final report of the Anglo-French Survey (1784–1790). The instrument was large, across and it was normally mounted on a stand whi
Document 4:::
A scale of chords may be used to set or read an angle in the absence of a protractor. To draw an angle, compasses describe an arc from origin with a radius taken from the 60 mark. The required angle is copied from the scale by the compasses, and an arc of this radius drawn from the sixty mark so it intersects the first arc. The line drawn from this point to the origin will be at the target angle.
Mathematics
A chord is a line drawn between two points on the circumference of a circle, and it subtends an angle θ at the centre. For a circle of radius r, each half of the chord will be r sin(θ/2), so the chord will be 2r sin(θ/2). The line of chords scale represents each of these values linearly on a scale running from 0 to 60.
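In particular, setting θ = 60° in the chord formula gives
crd(60°) = 2r sin 30° = r,
which is why the construction described earlier takes the compass radius from the 60 mark: that length equals the radius of the arc being drawn.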
Availability
It appears on Gunter's scale and the Foster Serle dialing scales. The commercial company Stanley marketed a metal version (Stanley 60R Line of Chords Rule) in 2015.
See also
Ptolemy's table of chords
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which instruments are used to measure the angle of the slope of a volcano?
A. altimeter
B. compass
C. calipers
D. tiltmeters
Answer:
|
|
sciq-1928
|
multiple_choice
|
Which side of the heart does blood from the lungs enter into?
|
[
"right atrium",
"right ventricle",
"left ventricle",
"left atrium"
] |
D
|
Relevant Documents:
Document 0:::
The pulmonary circulation is a division of the circulatory system in all vertebrates. The circuit begins with deoxygenated blood returned from the body to the right atrium of the heart where it is pumped out from the right ventricle to the lungs. In the lungs the blood is oxygenated and returned to the left atrium to complete the circuit.
The other division of the circulatory system is the systemic circulation that begins with receiving the oxygenated blood from the pulmonary circulation into the left atrium. From the atrium the oxygenated blood enters the left ventricle where it is pumped out to the rest of the body, returning as deoxygenated blood back to the pulmonary circulation.
The blood vessels of the pulmonary circulation are the pulmonary arteries and the pulmonary veins.
A separate circulatory circuit known as the bronchial circulation supplies oxygenated blood to the tissue of the larger airways of the lung.
Structure
De-oxygenated blood leaves the heart, goes to the lungs, and then enters back into the heart. De-oxygenated blood leaves from the right ventricle through the pulmonary artery. From the right atrium, the blood is pumped through the tricuspid valve (or right atrioventricular valve) into the right ventricle. Blood is then pumped from the right ventricle through the pulmonary valve and into the pulmonary artery.
Lungs
The pulmonary arteries carry deoxygenated blood to the lungs, where carbon dioxide is released and oxygen is picked up during respiration. Arteries are further divided into very fine capillaries which are extremely thin-walled. The pulmonary veins return oxygenated blood to the left atrium of the heart.
Veins
Oxygenated blood leaves the lungs through pulmonary veins, which return it to the left part of the heart, completing the pulmonary cycle. This blood then enters the left atrium, which pumps it through the mitral valve into the left ventricle. From the left ventricle, the blood passes through the aortic valve to the
Document 1:::
The right border of the heart (right margin of heart) is a long border on the surface of the heart, and is formed by the right atrium.
The atrial portion is rounded and almost vertical; it is situated behind the third, fourth, and fifth right costal cartilages about 1.25 cm. from the margin of the sternum.
The ventricular portion, thin and sharp, is named the acute margin; it is nearly horizontal, and extends from the sternal end of the sixth right costal cartilage to the apex of the heart.
Document 2:::
A ventricle is one of two large chambers toward the bottom of the heart that collect and expel blood towards the peripheral beds within the body and lungs. The blood pumped by a ventricle is supplied by an atrium, an adjacent chamber in the upper heart that is smaller than a ventricle. Interventricular means between the ventricles (for example the interventricular septum), while intraventricular means within one ventricle (for example an intraventricular block).
In a four-chambered heart, such as that in humans, there are two ventricles that operate in a double circulatory system: the right ventricle pumps blood into the pulmonary circulation to the lungs, and the left ventricle pumps blood into the systemic circulation through the aorta.
Structure
Ventricles have thicker walls than atria and generate higher blood pressures. The physiological load on the ventricles requiring pumping of blood throughout the body and lungs is much greater than the pressure generated by the atria to fill the ventricles. Further, the left ventricle has thicker walls than the right because it needs to pump blood to most of the body while the right ventricle fills only the lungs.
On the inner walls of the ventricles are irregular muscular columns called trabeculae carneae which cover all of the inner ventricular surfaces except that of the conus arteriosus, in the right ventricle. There are three types of these muscles. The third type, the papillary muscles, give origin at their apices to the chordae tendinae which attach to the cusps of the tricuspid valve and to the mitral valve.
The mass of the left ventricle, as estimated by magnetic resonance imaging, averages 143 g ± 38.4 g, with a range of 87–224 g.
The right ventricle is equal in size to the left ventricle and contains roughly 85 millilitres (3 imp fl oz; 3 US fl oz) in the adult. Its upper front surface is rounded and convex, and forms much of the sternocostal surface of the heart. Its under surface is flattened, forming pa
Document 3:::
The left border of heart (or obtuse margin) is formed from the rounded lateral wall of the left ventricle. It is called the 'obtuse' margin because of the obtuse angle (>90 degrees) created between the anterior part of the heart and the left side, which is formed from the rounded lateral wall of the left ventricle. Within this margin can be found the obtuse marginal artery, which is a branch of the left circumflex artery.
It extends from a point in the second left intercostal space, about 2.5 mm. from the sternal margin, obliquely downward, with a convexity to the left, to the apex of the heart.
This is contrasted with the acute margin of the heart, which is at the border of the anterior and posterior surface, and in which the acute marginal branch of the right coronary artery is found. The angle formed here is <90 degrees, therefore an acute angle.
Document 4:::
A heart valve is a one-way valve that allows blood to flow in one direction through the chambers of the heart. Four valves are usually present in a mammalian heart and together they determine the pathway of blood flow through the heart. A heart valve opens or closes according to differential blood pressure on each side.
The four valves in the mammalian heart are two atrioventricular valves separating the upper atria from the lower ventricles – the mitral valve in the left heart, and the tricuspid valve in the right heart. The other two valves are at the entrance to the arteries leaving the heart; these are the semilunar valves – the aortic valve at the aorta, and the pulmonary valve at the pulmonary artery.
The heart also has a coronary sinus valve and an inferior vena cava valve, not discussed here.
Structure
The heart valves and the chambers are lined with endocardium. Heart valves separate the atria from the ventricles, or the ventricles from a blood vessel. Heart valves are situated around the fibrous rings of the cardiac skeleton. The valves incorporate flaps called leaflets or cusps, similar to a duckbill valve or flutter valve, which are pushed open to allow blood flow and which then close together to seal and prevent backflow. The mitral valve has two cusps, whereas the others have three. There are nodules at the tips of the cusps that make the seal tighter.
The pulmonary valve has left, right, and anterior cusps. The aortic valve has left, right, and posterior cusps. The tricuspid valve has anterior, posterior, and septal cusps; and the mitral valve has just anterior and posterior cusps.
The valves of the human heart can be grouped in two sets:
Two atrioventricular valves to prevent backflow of blood from the ventricles into the atria:
Tricuspid valve or right atrioventricular valve, between the right atrium and right ventricle
Mitral valve or bicuspid valve, between the left atrium and left ventricle
Two semilunar valves to prevent the backflow o
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which side of the heart does blood from the lungs enter into?
A. right atrium
B. right ventricle
C. left ventricle
D. left atrium
Answer:
|
|
ai2_arc-778
|
multiple_choice
|
A scientist investigated the effect of workplace stress on heart disease in humans. Men of various ages were divided into two groups based on whether they described their work as very stressful or not very stressful. During the one year investigation the scientist monitored the heart health of each man. What was the bias in this investigation?
|
[
"The investigation only lasted one year.",
"The only organ studied was the heart.",
"The investigation tested only men.",
"The age of the participants varied."
] |
C
|
Relevant Documents:
Document 0:::
Statistical literacy is the ability to understand and reason with statistics and data. The abilities to understand and reason with data, or arguments that use data, are necessary for citizens to understand material presented in publications such as newspapers, television, and the Internet. However, scientists also need to develop statistical literacy so that they can both produce rigorous and reproducible research and consume it. Numeracy is an element of being statistically literate and in some models of statistical literacy, or for some populations (e.g., students in kindergarten through 12th grade/end of secondary school), it is a prerequisite skill. Being statistically literate is sometimes taken to include having the abilities to both critically evaluate statistical material and appreciate the relevance of statistically-based approaches to all aspects of life in general or to the evaluating, design, and/or production of scientific work.
Promoting statistical literacy
Each day people are inundated with statistical information from advertisements ("4 out of 5 dentists recommend"), news reports ("opinion polls show the incumbent leading by four points"), and even general conversation ("half the time I don't know what you're talking about"). Experts and advocates often use numerical claims to bolster their arguments, and statistical literacy is a necessary skill to help one decide what experts mean and which advocates to believe. This is important because statistics can be made to produce misrepresentations of data that may seem valid. The aim of statistical literacy proponents is to improve the public understanding of numbers and figures.
Health decisions are often manifest as statistical decision problems but few doctors or patients are well equipped to engage with these data.
Results of opinion polling are often cited by news organizations, but the quality of such polls varies considerably. Some understanding of the statistical technique of sampling is nec
Document 1:::
Jury or juror research is an umbrella term for the use of research methods in an attempt to gain some understanding of the juror experience in the courtroom and how jurors individually and collectively come to a determination about the guilt or otherwise of the accused.
Brief history
Historically, juries have played a significant role in the determination of issues that could not be managed via 'general social interactions' or ones which required punitive measures, retribution and/or compensation. The role of jurors and juries, however, has changed over the centuries and has generally been moulded by social and cultural forces embedded in the wider communities in which they have evolved. Although the role of juries and jurors has a somewhat chequered history, the jury, in one form or another, became the formal method of proof of the guilt or otherwise of a person on trial, and juries remain one of the 'cornerstones' of the criminal justice system in many countries.
There are, however, many debates about the efficacy of the jury system and the ability of jurors to adequately determine the guilt or otherwise of the accused. Some argue that lay individuals are incapable of digesting the often complex forensic evidence presented during a trial, while others argue that any misunderstanding of the evidence is a flaw in legal cross-examination and summing up. Many observe that the juror and the accused can seldom be considered 'peers', which is historically considered a fundamental precept of jury makeup. Others consider the jury system to be inherently flawed as a result of the humanity of jurors. They cite incidents in which the judiciary have become aware of juror assumptions made in the absence of supporting evidence, and the unidentified effects on jurors of stereotyping, culture, gender, age, education etc., which can and have influenced their ability to make a decision from an objective stance. These arguments and debates are founded in legal and psychological practice
Document 2:::
Response bias is a general term for a wide range of tendencies for participants to respond inaccurately or falsely to questions. These biases are prevalent in research involving participant self-report, such as structured interviews or surveys. Response biases can have a large impact on the validity of questionnaires or surveys.
Response bias can be induced or caused by numerous factors, all relating to the idea that human subjects do not respond passively to stimuli, but rather actively integrate multiple sources of information to generate a response in a given situation. Because of this, almost any aspect of an experimental condition may potentially bias a respondent. For example, the phrasing of questions in surveys, the demeanor of the researcher, the way the experiment is conducted, or the desire of the participant to be a good experimental subject and to provide socially desirable responses may all affect the response in some way. All of these "artifacts" of survey and self-report research may have the potential to damage the validity of a measure or study. Compounding this issue is that surveys affected by response bias still often have high reliability, which can lure researchers into a false sense of security about the conclusions they draw.
Because of response bias, it is possible that some study results are due to a systematic response bias rather than the hypothesized effect, which can have a profound effect on psychological and other types of research using questionnaires or surveys. It is therefore important for researchers to be aware of response bias and the effect it can have on their research so that they can attempt to prevent it from impacting their findings in a negative manner.
History of research
Awareness of response bias has been present in psychology and sociology literature for some time because self-reporting features significantly in those fields of research. However, researchers were initially unwilling to admit the degree to which
Document 3:::
A wide range of research methods are used in psychology. These methods vary by the sources from which information is obtained, how that information is sampled, and the types of instruments that are used in data collection. Methods also vary by whether they collect qualitative data, quantitative data or both.
Qualitative psychological research findings are not arrived at by statistical or other quantitative procedures. Quantitative psychological research findings result from mathematical modeling and statistical estimation or statistical inference. The two types of research differ in the methods employed, rather than the topics they focus on.
There are three main types of psychological research:
Correlational research
Descriptive research
Experimental research
Common methods
Common research designs and data collection methods include:
Archival research
Case study uses different research methods (e.g. interview, observation, self-report questionnaire) with a single case or small number of cases.
Computer simulation (modeling)
Ethnography
Event sampling methodology, also referred to as experience sampling methodology, diary study, or ecological momentary assessment
Experiment, often with separate treatment and control groups (see scientific control and design of experiments). See Experimental psychology for many details.
Field experiment
Focus group
Interview, can be structured or unstructured.
Meta-analysis
Neuroimaging and other psychophysiological methods
Observational study, can be naturalistic (see natural experiment), participant or controlled.
Program evaluation
Quasi-experiment
Self-report inventory
Survey, often with a random sample (see survey sampling)
Twin study
Research designs vary according to the period(s) of time over which data are collected:
Retrospective cohort study: Participants are chosen, then data are collected about their past experiences.
Prospective cohort study: Participants are recruited prior to the proposed
Document 4:::
Sex as a biological variable (SABV) is a research policy recognizing sex as an important variable to consider when designing studies and assessing results. Research including SABV has strengthened the rigor and reproducibility of findings. Public research institutions including the European Commission, Canadian Institutes of Health Research, and the U.S. National Institutes of Health have instituted SABV policies. Editorial policies were established by various scientific journals recognizing the importance and requiring research to consider SABV.
Background
Public research institutions
In 1999, the Institute Of Medicine established a committee on understanding the biology of sex and gender differences. In 2001, they presented a report that sex is an important variable in designing studies and assessing results. The quality and generalizability of biomedical research depends on the consideration of key biological variables, such as sex. To improve the rigor and reproducibility of research findings, the European Commission, Canadian Institutes of Health Research, and the U.S. National Institutes of Health (NIH) established policies on sex as a biological variable (SABV). Enrolling both men and women in clinical trials can impact the application of results and permit the identification of factors that affect the course of disease and the outcome of treatment.
In 2003, the European Commission (EC) began influencing investigators to include sex and gender in their research methodologies. The Canadian Institutes of Health Research (CIHR) requires four approaches: sex and gender integration in research proposals, sex and gender expertise among research teams, sex and gender platform in large consortiums, and starting in September 2015, the completion of sex and gender online training programs.
In May 2014, the NIH announced the formation of SABV policy. The policy came into effect in 2015 which specified that "SABV is frequently ignored in animal study designs and an
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A scientist investigated the effect of workplace stress on heart disease in humans. Men of various ages were divided into two groups based on whether they described their work as very stressful or not very stressful. During the one year investigation the scientist monitored the heart health of each man. What was the bias in this investigation?
A. The investigation only lasted one year.
B. The only organ studied was the heart.
C. The investigation tested only men.
D. The age of the participants varied.
Answer:
|
|
sciq-7394
|
multiple_choice
|
Mineralocorticoids are hormones synthesized by the adrenal cortex that affect what balance, by regulating sodium and water levels?
|
[
"equilibrium",
"homeostasis",
"blood pressure",
"osmotic"
] |
D
|
Relevant Documents:
Document 0:::
The mineralocorticoid receptor (or MR, MLR, MCR), also known as the aldosterone receptor or nuclear receptor subfamily 3, group C, member 2, (NR3C2) is a protein that in humans is encoded by the NR3C2 gene that is located on chromosome 4q31.1-31.2.
MR is a receptor with equal affinity for mineralocorticoids and glucocorticoids. It belongs to the nuclear receptor family where the ligand diffuses into cells, interacts with the receptor and results in a signal transduction affecting specific gene expression in the nucleus. The selective response of some tissues and organs to mineralocorticoids over glucocorticoids occurs because mineralocorticoid-responsive cells express Corticosteroid 11-beta-dehydrogenase isozyme 2, an enzyme which selectively inactivates glucocorticoids more readily than mineralocorticoids.
Function
MR is expressed in many tissues, such as the kidney, colon, heart, central nervous system (hippocampus), brown adipose tissue and sweat glands. In epithelial tissues, its activation leads to the expression of proteins regulating ionic and water transports (mainly the epithelial sodium channel or ENaC, Na+/K+ pump, serum and glucocorticoid induced kinase or SGK1) resulting in the reabsorption of sodium, and as a consequence an increase in extracellular volume, increase in blood pressure, and an excretion of potassium to maintain a normal salt concentration in the body.
The receptor is activated by mineralocorticoids such as aldosterone and its precursor deoxycorticosterone as well as glucocorticoids like cortisol. In intact animals, the mineralocorticoid receptor is "protected" from glucocorticoids by co-localization of an enzyme, corticosteroid 11-beta-dehydrogenase isozyme 2 (a.k.a. 11β-hydroxysteroid dehydrogenase 2; 11β-HSD2), that converts cortisol to inactive cortisone.
Activation of the mineralocorticoid receptor, upon the binding of its ligand aldosterone, results in its translocation to the cell nucleus, homodimerization and binding to horm
Document 1:::
The following outline is provided as an overview of and topical guide to neuroscience:
Neuroscience is the scientific study of the structure and function of the nervous system. It encompasses the branch of biology that deals with the anatomy, biochemistry, molecular biology, and physiology of neurons and neural circuits. It also encompasses cognition and human behavior. Neuroscience has multiple concepts that each relate to learning abilities and memory functions. Additionally, the brain is able to transmit signals that cause conscious or unconscious behaviors expressed as verbal or non-verbal responses. This allows people to communicate with one another.
Branches of neuroscience
Neurophysiology
Neurophysiology is the study of the function (as opposed to structure) of the nervous system.
Brain mapping
Electrophysiology
Extracellular recording
Intracellular recording
Brain stimulation
Electroencephalography
Intermittent rhythmic delta activity
Neuroendocrinology
Neuroanatomy
Neuroanatomy is the study of the anatomy of nervous tissue and neural structures of the nervous system.
Immunostaining
Neuropharmacology
Neuropharmacology is the study of how drugs affect cellular function in the nervous system.
Drug
Psychoactive drug
Anaesthetic
Narcotic
Behavioral neuroscience
Behavioral neuroscience, also known as biological psychology, biopsychology, or psychobiology, is the application of the principles of biology to the study of mental processes and behavior in human and non-human animals.
Neuroethology
Developmental neuroscience
Developmental neuroscience aims to describe the cellular basis of brain development and to address the underlying mechanisms. The field draws on both neuroscience and developmental biology to provide insight into the cellular and molecular mechanisms by which complex nervous systems develop.
Aging and memory
Cognitive neuroscience
Cognitive ne
Document 2:::
In physiology, aldosterone escape is a term that has been used to refer to two distinct phenomena involving aldosterone that are exactly opposite each other:
Escape from the sodium-retaining effects of excess aldosterone (or other mineralocorticoids) in primary hyperaldosteronism, manifested by volume and/or pressure natriuresis.
The inability of ACE inhibitor therapy to reliably suppress aldosterone release, for example, in patients with heart failure or diabetes, usually manifested by increased salt and water retention. This latter sense may rather be termed refractory hyperaldosteronism.
In patients with hyperaldosteronism, chronic exposure to excess aldosterone does not cause edema as might be expected. Aldosterone initially results in an increase in Na+ reabsorption in these patients through stimulation of ENaC channels in principal cells of the renal collecting tubules. Increased ENaC channels situated in the apical membranes of the principal cells allow for more Na+ reabsorption, which may cause a transient increase in fluid reabsorption as well. However, within a few days, Na+ reabsorption returns to normal as evidenced by normal urinary Na+ levels in these patients.
The proposed mechanism for this phenomenon does not include a reduced sensitivity of mineralocorticoid receptors to aldosterone, because low serum potassium is often seen in these patients, which is the direct result of aldosterone-induced expression of ENaC channels. Furthermore, electrolyte homeostasis is maintained in these patients, which excludes the possibility that other Na+ transporters elsewhere in the kidney are being shut down. If, in fact, other transporters such as the Na+-H+ antiporter in the proximal tubule or the Na+/K+/2Cl− symporter in the thick ascending loop of Henle were being blocked, other electrolyte disturbances would be expected, such as seen during use of diuretics.
Instead, experiments isolating the perfusion pressures seen by glomerular capillaries from heighten
Document 3:::
In the human endocrine system, a spongiocyte is a cell in the zona fasciculata of the adrenal cortex containing lipid droplets that show pronounced vacuolization, due to the way the cells are prepared for microscopic examination.
The lipid droplets contain neutral fats, fatty acids, cholesterol, and phospholipids; all of which are precursors to the steroid hormones secreted by the adrenal glands. The principal hormone secreted from the cells of the zona fasciculata are glucocorticoids, but some androgens are produced as well.
Document 4:::
The Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology is a monthly peer-reviewed scientific journal covering the intersection of ethology, neuroscience, and physiology. It was established in 1984, when it was split off from the Journal of Comparative Physiology. It was originally subtitled the Journal of Comparative Physiology A: Sensory, Neural, and Behavioral Physiology, obtaining its current name in 2001. The editor-in-chief is Friedrich G. Barth (University of Vienna). The journal became electronic-only in 2017.
Abstracting and indexing
The journal is indexed and abstracted in the following bibliographic databases:
According to the Journal Citation Reports, the journal has a 2017 impact factor of 1.970.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Mineralocorticoids are hormones synthesized by the adrenal cortex that affect what balance, by regulating sodium and water levels?
A. equilibrium
B. homeostasis
C. blood pressure
D. osmotic
Answer:
|
|
ai2_arc-880
|
multiple_choice
|
Which behavior of a dog is the best example of a learned behavior?
|
[
"barking",
"tail-wagging",
"digging a hole",
"coming when called"
] |
D
|
Relevant Documents:
Document 0:::
Dog intelligence or dog cognition is the process in dogs of acquiring information and conceptual skills, and storing them in memory, retrieving, combining and comparing them, and using them in new situations.
Studies have shown that dogs display many behaviors associated with intelligence. They have advanced memory skills, and are able to read and react appropriately to human body language such as gesturing and pointing, and to understand human voice commands. Dogs demonstrate a theory of mind by engaging in deception.
Evolutionary perspective
Dogs have often been used in studies of cognition, including research on perception, awareness, memory, and learning, notably research on classical and operant conditioning. In the course of this research, behavioral scientists uncovered a surprising set of social-cognitive abilities in the domestic dog, abilities that are neither possessed by dogs' closest canine relatives nor by other highly intelligent mammals such as great apes. Rather, these skills resemble some of the social-cognitive skills of human children. This may be an example of convergent evolution, which happens when distantly related species independently evolve similar solutions to the same problems. For example, fish, penguins and dolphins have each separately evolved flippers as a solution to the problem of moving through the water. With dogs and humans, we may see psychological convergence; that is, dogs have evolved to be cognitively more similar to humans than we are to our closest genetic relatives.
However, it is questionable whether the cognitive evolution of humans and animals may be called "independent". The cognitive capacities of dogs have inevitably been shaped by millennia of contact with humans. As a result of this physical and social evolution, many dogs readily respond to social cues common to humans, quickly learn the meaning of words, show cognitive bias and exhibit emotions that seem to reflect those of humans.
Research suggests that dom
Document 1:::
Dog behavior is the internally coordinated responses of individuals or groups of domestic dogs to internal and external stimuli. It has been shaped by millennia of contact with humans and their lifestyles. As a result of this physical and social evolution, dogs have acquired the ability to understand and communicate with humans. Behavioral scientists have uncovered a wide range of social-cognitive abilities in domestic dogs.
Co-evolution with humans
The origin of the domestic dog (Canis familiaris) is not clear. Whole-genome sequencing indicates that the dog, the gray wolf and the extinct Taymyr wolf diverged around the same time 27,000–40,000 years ago. How dogs became domesticated is not clear, however the two main hypotheses are self-domestication or human domestication. There exists evidence of human-canine behavioral coevolution.
Intelligence
Dog intelligence is the ability of the dog to perceive information and retain it as knowledge in order to solve problems. Dogs have been shown to learn by inference. A study with Rico showed that he knew the labels of over 200 different items. He inferred the names of novel items by exclusion learning and correctly retrieved those novel items immediately. He also retained this ability four weeks after the initial exposure. Dogs have advanced memory skills. A study documented the learning and memory capabilities of a border collie, "Chaser", who had learned the names of, and could associate by verbal command, over 1,000 words. Dogs are able to read and react appropriately to human body language such as gesturing and pointing, and to understand human voice commands. After undergoing training to solve a simple manipulation task, dogs that are faced with an insolvable version of the same problem look at the human, while socialized wolves do not. Dogs demonstrate a theory of mind by engaging in deception.
Senses
The dog's senses include vision, hearing, sense of smell, taste, touch, proprioception, and sensitivity to the Earth's magnetic field.
Document 2:::
Social learning refers to learning that is facilitated by observation of, or interaction with, another animal or its products. Social learning has been observed in a variety of animal taxa, such as insects, fish, birds, reptiles, amphibians and mammals (including primates).
Social learning is fundamentally different from individual learning, or asocial learning, which involves learning the appropriate responses to an environment through experience and trial and error. Though asocial learning may result in the acquisition of reliable information, it is often costly for the individual to obtain. Therefore, individuals that are able to capitalize on other individuals' self-acquired information may experience a fitness benefit. However, because social learning relies on the actions of others rather than direct contact, it can be unreliable. This is especially true in variable environments, where appropriate behaviors may change frequently. Consequently, social learning is most beneficial in stable environments, in which predators, food, and other stimuli are not likely to change rapidly.
When social learning is actively facilitated by an experienced individual, it is classified as teaching. Mechanisms of inadvertent social learning relate primarily to psychological processes in the observer, whereas teaching processes relate specifically to activities of the demonstrator. Studying the mechanisms of information transmission allows researchers to better understand how animals make decisions by observing others' behaviors and obtaining information.
Social learning mechanisms
Social learning occurs when one individual influences the learning of another through various processes. In local enhancement and opportunity providing, the attention of an individual is drawn to a specific location or situation. In stimulus enhancement, emulation, observational conditioning, the observer learns the relationship between a stimulus and a result but does not directly copy the behavio
Document 3:::
Intelligent disobedience occurs where a service animal trained to help a disabled person goes directly against the owner's instructions in an effort to make a better decision. This behavior is a part of the dog's training and is central to a service animal's success on the job. The concept of intelligent disobedience has been in use and a common part of service animals' training since at least 1936.
Examples
When a blind person wishes to cross a street and issues an instruction to the assistance dog to do so, the dog should refuse to move when such an action would put the person in harm's way. The animal understands that this contradicts the learned behavior to respond to the owner's instructions: instead it makes an alternative decision because the human is not in a position to decide safely. The dog in this case has the capacity to understand that it is performing such an action for the welfare of the person.
In another example, a blind person must communicate with the animal in such a way that the animal can recognize that the person is aware of the surroundings and can safely proceed. If a blind person wishes to descend a staircase, an animal properly trained to exhibit intelligent disobedience will refuse to move unless the person issues a specific code word or command that lets the animal know the person is aware they are about to descend stairs. This command will be specific for staircases, and the animal will not attribute it to stepping off a curb or up onto a sidewalk or stoop. In a similar circumstance, if the person believes they are in front of a step and they wish to go down, but they are in fact standing in front of a dangerous precipice (for example, a loading dock or cliff), the animal will refuse to proceed.
Application to other fields
Ira Chaleff suggests in his 2015 book Intelligent Disobedience: Doing Right When What You're Told to Do is Wrong that intelligent disobedience has a place in other important areas. One notable example is crew re
Document 4:::
A dog behaviourist is a person who works in modifying or changing behaviour in dogs. They can be experienced dog handlers, who have developed their skills over many years of hands-on experience, or have formal training up to degree level. Some have backgrounds in veterinary science, animal science, zoology, sociology, biology, or animal behaviour, and have applied their experience and knowledge to the interaction between humans and dogs. Professional certification may be offered through either industry associations or local educational institutions. There is, however, no compulsion for behaviourists to be a member of a professional body nor to take formal training.
Overview
While any person who works to modify a dog's behaviour might be considered a dog behaviourist in the broadest sense of the term, 'animal behaviourist' is a title given only to individuals who have obtained relevant professional qualifications. The professional fields and course of study for dog behaviourists include, but are not limited to, animal science, zoology, sociology, biology, psychology, ethology, and veterinary science. People with these credentials usually refer to themselves as Clinical Animal Behaviourists, Applied Animal Behaviourists (PhD) or Veterinary behaviourists (veterinary degree). If they limit their practice to a particular species, they might refer to themselves as a dog/cat/bird behaviourist.
While there are many dog trainers who work with behavioural issues, there are relatively few qualified dog behaviourists. For the majority of the general public, the cost of the services of a dog behaviourist usually reflects both the supply/demand inequity, as well as the level of training they have obtained.
Some behaviourists can be identified in the U.S. by the post-nominals "CAAB", indicating that they are a Certified Applied Animal behaviourist (which requires a Ph.D. or veterinary degree), or, "DACVB", indicating that they are a diplomate of the American College of Vet
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which behavior of a dog is the best example of a learned behavior?
A. barking
B. tail-wagging
C. digging a hole
D. coming when called
Answer:
|
|
scienceQA-10073
|
multiple_choice
|
What do these two changes have in common?
cutting an apple
breaking a rock in half
|
[
"Both are caused by heating.",
"Both are only physical changes.",
"Both are chemical changes.",
"Both are caused by cooling."
] |
B
|
Step 1: Think about each change.
Cutting an apple is a physical change. The apple gets a different shape. But it is still made of the same type of matter as the uncut apple.
Breaking a rock in half is a physical change. The rock gets broken into two pieces. But the pieces are still made of the same type of matter as the original rock.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are physical changes. No new matter is created.
Both are chemical changes.
Both changes are physical changes. They are not chemical changes.
Both are caused by heating.
Neither change is caused by heating.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
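A quick way to check the intended answer, assuming a quasi-static adiabatic expansion of an ideal gas that does work on its surroundings: such a process obeys
T V^{\gamma - 1} = constant, with \gamma > 1,
so as the volume V increases the temperature T must decrease; under that assumption the answer is "decreases".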
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferromagnetic materials can become magnetic. The process is reve
Document 2:::
Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria.
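As a rough illustration of the underlying idea (a minimal sketch only, not the production ACJ engine, whose judge allocation, reliability estimation and stopping rules are more sophisticated), the Python code below ranks simulated scripts from pairwise judgements using a simple Bradley-Terry-style update; all names and parameter values are hypothetical.

import math
import random

def judge_prefers_a(quality_a, quality_b):
    # Simulated judge: the better script wins with a probability from a logistic model.
    p_a = 1.0 / (1.0 + math.exp(-(quality_a - quality_b)))
    return random.random() < p_a

def adaptive_comparative_judgement(true_qualities, n_rounds=40, lr=0.1):
    n = len(true_qualities)
    est = [0.0] * n  # current quality estimates (logits), one per script
    for _ in range(n_rounds):
        # Adaptive pairing: compare scripts whose current estimates are closest,
        # since those comparisons are the most informative.
        order = sorted(range(n), key=lambda i: est[i])
        for a, b in zip(order[0::2], order[1::2]):
            a_wins = judge_prefers_a(true_qualities[a], true_qualities[b])
            p_a = 1.0 / (1.0 + math.exp(-(est[a] - est[b])))
            grad = (1.0 if a_wins else 0.0) - p_a  # gradient of the log-likelihood
            est[a] += lr * grad
            est[b] -= lr * grad
    return est

if __name__ == "__main__":
    random.seed(0)
    hidden = [0.5, 2.0, -1.0, 1.0, 0.0]  # hypothetical "true" script qualities
    estimates = adaptive_comparative_judgement(hidden)
    ranking = sorted(range(len(hidden)), key=lambda i: -estimates[i])
    print("Estimated ranking, best first:", ranking)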
Introduction
Traditional exam script marking began in Cambridge in 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.)
Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental.
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel
Document 3:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include :
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering : Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 4:::
Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary educ
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
cutting an apple
breaking a rock in half
A. Both are caused by heating.
B. Both are only physical changes.
C. Both are chemical changes.
D. Both are caused by cooling.
Answer:
|
sciq-4570
|
multiple_choice
|
Where do benthos live in oceans?
|
[
"In deep water",
"In coral reefs",
"on the ocean floor",
"On the ocean surface"
] |
C
|
Relavent Documents:
Document 0:::
Benthos (), also known as benthon, is the community of organisms that live on, in, or near the bottom of a sea, river, lake, or stream, also known as the benthic zone. This community lives in or near marine or freshwater sedimentary environments, from tidal pools along the foreshore, out to the continental shelf, and then down to the abyssal depths.
Many organisms adapted to deep-water pressure cannot survive in the upper parts of the water column. The pressure difference can be very significant (approximately one atmosphere for every 10 metres of water depth).
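As a quick check of that rule of thumb, the short Python sketch below estimates the absolute pressure at depth from the hydrostatic relation P = Patm + rho*g*h; the seawater density is an assumed average value, so the figures are approximate.

RHO_SEAWATER = 1025.0  # kg/m^3, assumed average seawater density
G = 9.81               # m/s^2
ATM = 101325.0         # Pa in one standard atmosphere

def pressure_in_atmospheres(depth_m):
    # Absolute pressure = atmospheric pressure at the surface + weight of the water column.
    return (ATM + RHO_SEAWATER * G * depth_m) / ATM

for depth in (10, 100, 1000, 4000):
    print(f"{depth:5d} m depth -> about {pressure_in_atmospheres(depth):7.0f} atm")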
Because light is absorbed before it can reach deep ocean water, the energy source for deep benthic ecosystems is often organic matter from higher up in the water column that drifts down to the depths. This dead and decaying matter sustains the benthic food chain; most organisms in the benthic zone are scavengers or detritivores.
The term benthos, coined by Haeckel in 1891, comes from the Greek noun βένθος, meaning 'depth of the sea'. Benthos is used in freshwater biology to refer to organisms at the bottom of freshwater bodies of water, such as lakes, rivers, and streams. There is also a redundant synonym, benthon.
Overview
Compared to the relatively featureless pelagic zone, the benthic zone offers physically diverse habitats. There is a huge range in how much light and warmth is available, and in the depth of water or extent of intertidal immersion. The seafloor varies widely in the types of sediment it offers. Burrowing animals can find protection and food in soft, loose sediments such as mud, clay and sand. Sessile species such as oysters and barnacles can attach themselves securely to hard, rocky substrates. As adults they can remain at the same site, shaping depressions and crevices where mobile animals find refuge. This greater diversity in benthic habitats has resulted in a higher diversity of benthic species. The number of benthic animal species exceeds one million. This far exceeds the number of pelagic animal
Document 1:::
The Antarctic Benthic Deep-Sea Biodiversity Project (ANDEEP) is an international project to investigate deep-water biology of the Scotia and Weddell seas. Benthic refers to "bottom-dwelling" organisms that are known to exhibit unusual characteristics not normally seen in shallow-dwelling creatures. ANDEEP has already made many notable discoveries, such as animals with gigantism and extraordinary longevity.
Document 2:::
Aquatic science is the study of the various bodies of water that make up our planet, including oceanic and freshwater environments. Aquatic scientists study the movement of water, the chemistry of water, aquatic organisms, aquatic ecosystems, the movement of materials in and out of aquatic ecosystems, and the use of water by humans, among other things. Aquatic scientists examine current processes as well as historic processes, and the water bodies that they study can range from tiny areas measured in millimeters to full oceans. Moreover, aquatic scientists work in interdisciplinary groups. For example, a physical oceanographer might work with a biological oceanographer to understand how physical processes, such as tropical cyclones or rip currents, affect organisms in the Atlantic Ocean. Chemists and biologists, on the other hand, might work together to see how the chemical makeup of a certain body of water affects the plants and animals that reside there. Aquatic scientists can work to tackle global problems such as global oceanic change and local problems, such as trying to understand why a drinking water supply in a certain area is polluted.
There are two main fields of study that fall within the field of aquatic science. These fields of study include oceanography and limnology.
Oceanography
Oceanography refers to the study of the physical, chemical, and biological characteristics of oceanic environments. Oceanographers study the history, current condition, and future of the planet's oceans. They also study marine life and ecosystems, ocean circulation, plate tectonics, the geology of the seafloor, and the chemical and physical properties of the ocean.
Oceanography is interdisciplinary. For example, there are biological oceanographers and marine biologists. These scientists specialize in marine organisms. They study how these organisms develop, their relationship with one another, and how they interact and adapt to their environment. Biological oceanographers
Document 3:::
The Oregon Institute of Marine Biology (or OIMB) is the marine station of the University of Oregon. This marine station is located in Charleston, Oregon at the mouth of Coos Bay. Currently, OIMB is home to several permanent faculty members and a number of graduate students. OIMB is a member of the National Association of Marine Laboratories (NAML). In addition to graduate research, undergraduate classes are offered year round, including marine birds and mammals, estuarine biology, marine ecology, invertebrate zoology, molecular biology, biology of fishes, biological oceanography, and embryology.
The Loyd and Dorothy Rippey Library, one of eight branches of the UO Libraries, was added to the campus in 1999. The Rippey Library is open to the public by appointment, and the Oregon Card Program allows Oregon residents 16 years old and over to borrow from the collection.
The Charleston Marine Life Center (or CMLC) is a public museum and aquarium on the edge of the harbor in Charleston, OR, across the street from the OIMB campus. Displays aimed at visitors of all ages emphasize the diversity of animal and plant life in local marine ecosystems. Visitors learn where to interact with marine organisms in their natural environments and how local scientists study the life histories, evolution and ecology of underwater plants and animals.
History
The University of Oregon first established OIMB as a summer research and education program in 1924, operating out of tents along the beach of Sunset Bay. OIMB settled into its current location in 1931, when 100 acres of the Coos Head Military Reserve, including several buildings from the Army Corps of Engineers, was deeded to the University of Oregon. In 1937, OIMB was transferred to Oregon State College (now Oregon State University), and remained theirs until the federal government required the property during World War II. Following the war, OIMB was initially returned to Oregon State University, but the University of Oregon r
Document 4:::
The Oyster Question: Scientists, Watermen, and the Maryland Chesapeake Bay since 1880 is a 2009 book by Christine Keiner. It examines the conflict between oystermen and scientists in the Chesapeake Bay from the end of the nineteenth century to the present, which includes the period of the so-called "Oyster Wars" and the precipitous decline of the oyster industry at the end of the twentieth century. The book engages the myth of the "Tragedy of the Commons" by examining the often fraught relationship between local politics and conservation science, arguing that for most of the period Maryland's state political system gave rural oystermen more political clout than politicians and the scientists they appointed and allowing oystermen to effectively manage the oyster bed commons. Only towards the end of the twentieth century did reapportionment bring suburban and urban interests more political power, by which time they had latched on to oystermen as elements of the area's heritage and incorporated them and the oysters into broader conservation efforts. An important theme is the "intersection[] of scientific knowledge with experiential knowledge in the context of use," in that Keiner "treats the knowledge of the Chesapeake Bay’s oystermen alongside that of biologists." "Through her analysis, Keiner effectively reframes how environmental historians have analyzed histories of common resources and provides a working model for integrating historical and ecological information to bridge the histories of science and environmental history."
Awards
The book won the 2010 Forum for the History of Science in America Prize. It shared the 2010 Maryland Historical Trust's Heritage Book Award, and received an Honorable Mention for the Frederick Jackson Turner Award from the Organization of American Historians in 2010.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where do benthos live in oceans?
A. In deep water
B. In coral reefs
C. on the ocean floor
D. On the ocean surface
Answer:
|
|
sciq-7949
|
multiple_choice
|
One of the five fundamental conservation laws in the universe refers to conservation of what, which is the product of mass and velocity?
|
[
"momentum",
"energy",
"fluid",
"light"
] |
A
|
Relavent Documents:
Document 0:::
In physics, a number of noted theories of the motion of objects have been developed. Among the best known are:
Classical mechanics
Newton's laws of motion
Euler's laws of motion
Cauchy's equations of motion
Kepler's laws of planetary motion
General relativity
Special relativity
Quantum mechanics
Motion (physics)
Document 1:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
Document 2:::
In mechanics, a variable-mass system is a collection of matter whose mass varies with time. It can be confusing to try to apply Newton's second law of motion directly to such a system. Instead, the time dependence of the mass m can be calculated by rearranging Newton's second law and adding a term to account for the momentum carried by mass entering or leaving the system. The general equation of variable-mass motion is written as
Fext + vrel dm/dt = m dv/dt
where Fext is the net external force on the body, vrel is the relative velocity of the escaping or incoming mass with respect to the center of mass of the body, and v is the velocity of the body. In astrodynamics, which deals with the mechanics of rockets, the term vrel is often called the effective exhaust velocity and denoted ve.
Derivation
There are different derivations for the variable-mass system motion equation, depending on whether the mass is entering or leaving a body (in other words, whether the moving body's mass is increasing or decreasing, respectively). To simplify calculations, all bodies are considered as particles. It is also assumed that the mass is unable to apply external forces on the body outside of accretion/ablation events.
Mass accretion
The following derivation is for a body that is gaining mass (accretion). A body of time-varying mass m moves at a velocity v at an initial time t. In the same instant, a particle of mass dm moves with velocity u with respect to ground. The initial momentum can be written as
p1 = m v + u dm
Now at a time t + dt, let both the main body and the particle accrete into a body of velocity v + dv. Thus the new momentum of the system can be written as
p2 = (m + dm)(v + dv) = m v + m dv + v dm + dm dv
Since dm dv is the product of two small values, it can be ignored, meaning during dt the momentum of the system varies by
dp = p2 - p1 = m dv - (u - v) dm
Therefore, by Newton's second law
Fext = dp/dt = m dv/dt - (u - v) dm/dt
Noting that u - v is the velocity of dm relative to m, symbolized as vrel, this final equation can be arranged as
m dv/dt = Fext + vrel dm/dt
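A minimal numerical sketch (an illustration under assumed values, not part of the derivation above) can check this equation for a rocket coasting in free space: integrating m dv/dt = vrel dm/dt with Fext = 0 should reproduce the Tsiolkovsky result ve ln(m0/mf).

import math

m0, mf = 1000.0, 400.0  # initial and final mass in kg (assumed illustrative values)
ve = 2500.0             # exhaust speed relative to the rocket, m/s (assumed)
mdot = -5.0             # dm/dt in kg/s; negative because mass is leaving the body
dt = 0.01               # integration time step, s

m, v = m0, 0.0
while m > mf:
    # m dv/dt = Fext + vrel dm/dt, with Fext = 0 and vrel = -ve (exhaust moves backwards)
    v += (-ve) * mdot / m * dt
    m += mdot * dt

print(f"numerically integrated delta-v: {v:8.1f} m/s")
print(f"Tsiolkovsky ve*ln(m0/mf):       {ve * math.log(m0 / mf):8.1f} m/s")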
Mass ablation/ejection
In a system where mass is being ejected or ablated
Document 3:::
A conserved quantity is a property or value that remains constant over time in a system even when changes occur in the system. In mathematics, a conserved quantity of a dynamical system is formally defined as a function of the dependent variables, the value of which remains constant along each trajectory of the system.
Not all systems have conserved quantities, and conserved quantities are not unique, since one can always produce another such quantity by applying a suitable function, such as adding a constant, to a conserved quantity.
Since many laws of physics express some kind of conservation, conserved quantities commonly exist in mathematical models of physical systems. For example, any classical mechanics model will have mechanical energy as a conserved quantity as long as the forces involved are conservative.
Differential equations
For a first order system of differential equations
dr/dt = f(r, t)
where bold indicates vector quantities, a scalar-valued function H(r) is a conserved quantity of the system if, for all time and initial conditions in some specific domain,
dH/dt = 0
Note that by using the multivariate chain rule,
dH/dt = ∇H · dr/dt = ∇H · f(r, t)
so that the definition may be written as
∇H · f(r, t) = 0
which contains information specific to the system and can be helpful in finding conserved quantities, or establishing whether or not a conserved quantity exists.
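As a small worked example of this test (chosen purely for illustration, not taken from the text), the Python/SymPy sketch below verifies that H(x, y) = x^2 + y^2 is conserved along trajectories of dx/dt = -y, dy/dt = x by checking that the gradient of H dotted with f vanishes identically.

import sympy as sp

x, y = sp.symbols("x y")
f = sp.Matrix([-y, x])      # right-hand side of dr/dt = f(r)
H = x**2 + y**2             # candidate conserved quantity

# Chain rule: dH/dt = grad(H) . f, which must vanish for H to be conserved.
dH_dt = sp.Matrix([H.diff(x), H.diff(y)]).dot(f)
print(sp.simplify(dH_dt))   # prints 0, confirming H is conserved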
Hamiltonian mechanics
For a system defined by the Hamiltonian H, a function f of the generalized coordinates q and generalized momenta p has time evolution
df/dt = {f, H} + ∂f/∂t
and hence is conserved if and only if {f, H} + ∂f/∂t = 0. Here {·, ·} denotes the Poisson bracket.
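A minimal sketch of the same criterion in Python/SymPy, using an illustrative free-particle Hamiltonian H = p^2/2 rather than any system discussed above: the momentum p has no explicit time dependence and its Poisson bracket with H vanishes, so it is conserved.

import sympy as sp

q, p = sp.symbols("q p")
H = p**2 / 2   # illustrative free-particle Hamiltonian
f = p          # candidate conserved quantity: the momentum itself

# Poisson bracket {f, H} = (df/dq)(dH/dp) - (df/dp)(dH/dq)
poisson = sp.diff(f, q) * sp.diff(H, p) - sp.diff(f, p) * sp.diff(H, q)
print(sp.simplify(poisson))  # prints 0, so p is conserved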
Lagrangian mechanics
Suppose a system is defined by the Lagrangian L with generalized coordinates q. If L has no explicit time dependence (so ∂L/∂t = 0), then the energy E defined by
E = Σᵢ q̇ᵢ (∂L/∂q̇ᵢ) - L
is conserved.
Furthermore, if ∂L/∂q = 0, then q is said to be a cyclic coordinate and the generalized momentum p defined by
p = ∂L/∂q̇
is conserved. This may be derived by using the Euler–Lagrange equations.
See also
Conservative system
Lyapunov function
Hamiltonian sy
Document 4:::
Physics education research (PER) is a form of discipline-based education research specifically related to the study of the teaching and learning of physics, often with the aim of improving the effectiveness of student learning. PER draws from other disciplines, such as sociology, cognitive science, education and linguistics, and complements them by reflecting the disciplinary knowledge and practices of physics. Approximately eighty-five institutions in the United States conduct research in science and physics education.
Goals
One primary goal of PER is to develop pedagogical techniques and strategies that will help students learn physics more effectively and help instructors to implement these techniques. Even basic ideas in physics can be confusing, and teaching through analogies can itself introduce scientific misconceptions, so lecturing often does not erase the common misconceptions about physics that students acquire before they are taught physics. Research often focuses on learning more about common misconceptions that students bring to the physics classroom so that techniques can be devised to help students overcome these misconceptions.
In most introductory physics courses, mechanics is usually the first area of physics that is taught. Newton's laws of motion, which describe the interactions between forces and objects, are central to the study of mechanics. Many students hold the Aristotelian misconception that a net force is required to keep a body moving; instead, motion is modeled in modern physics with Newton's first law of inertia, stating that a body will keep its state of rest or movement unless a net force acts on the body. Like students who hold this misconception, Newton arrived at his three laws of motion through empirical analysis, although he did it with an extensive study of data that included astronomical observations. Students can erase such a misconception in a nearly frictionless environment, where they find that
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
One of the five fundamental conservation laws in the universe refers to conservation of what, which is the product of mass and velocity?
A. momentum
B. energy
C. fluid
D. light
Answer:
|