id (string, 6–15 chars) | question_type (string, 1 value) | question (string, 15–683 chars) | choices (list of 4 strings) | answer (string, 5 values) | explanation (string, 481 values) | prompt (string, 1.75k–10.9k chars)
---|---|---|---|---|---|---
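As a rough illustration of how the columns above combine into the `prompt` field, the sketch below rebuilds that prompt text from one row's `question` and `choices` values (copied from the first record shown below). The helper name `build_prompt` and the abbreviated document text are illustrative assumptions, not part of the dataset itself.

```python
# Minimal sketch (not the dataset's own tooling): rebuild the `prompt` column from a
# row's `question` and `choices` fields, following the template visible in the rows below.
row = {
    "id": "sciq-915",
    "question_type": "multiple_choice",
    "question": "What type of charge does a neutron have?",
    "choices": ["half charge", "positive charge", "neutral or no charge", "negative charge"],
    "answer": "C",
}

def build_prompt(row, documents):
    """Assemble the retrieval-augmented multiple-choice prompt for one row."""
    lines = ["Relevant Documents:"]
    for i, doc in enumerate(documents):
        lines.append(f"Document {i}:::")
        lines.append(doc)
    lines.append("The following are multiple choice questions (with answers) about "
                 "knowledge and skills in advanced master-level STEM courses.")
    lines.append(row["question"])
    for letter, choice in zip("ABCD", row["choices"]):
        lines.append(f"{letter}. {choice}")
    lines.append("Answer:")
    return "\n".join(lines)

print(build_prompt(row, documents=["In physics, a charged particle is a particle with an electric charge. ..."]))
```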
sciq-915
|
multiple_choice
|
What type of charge does a neutron have?
|
[
"half charge",
"positive charge",
"neutral or no charge",
"negative charge"
] |
C
|
Relevant Documents:
Document 0:::
In physics, a charged particle is a particle with an electric charge. It may be an ion, such as a molecule or atom with a surplus or deficit of electrons relative to protons. It can also be an electron or a proton, or another elementary particle, which are all believed to have the same charge (except antimatter). Another charged particle may be an atomic nucleus devoid of electrons, such as an alpha particle.
A plasma is a collection of charged particles, atomic nuclei and separated electrons, but can also be a gas containing a significant proportion of charged particles.
Charged particles are labeled as either positive (+) or negative (−). Only two "types" of charge are known to exist, and the designations themselves are arbitrary: nothing inherent to a positively charged particle makes it "positive", and the same goes for negatively charged particles.
Examples
Positively charged particles
protons and atomic nuclei
positrons (antielectrons)
alpha particles
positively charged pions
cations
Negatively charged particles
electrons
antiprotons
muons
tauons
negatively charged pions
anions
Particles without an electric charge
neutrons
photons
neutrinos
neutral pions
Z boson
Higgs boson
atoms
Document 1:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 2:::
An ion () is an atom or molecule with a net electrical charge. The charge of an electron is considered to be negative by convention and this charge is equal and opposite to the charge of a proton, which is considered to be positive by convention. The net charge of an ion is not zero because its total number of electrons is unequal to its total number of protons.
A cation is a positively charged ion with fewer electrons than protons while an anion is a negatively charged ion with more electrons than protons. Opposite electric charges are pulled towards one another by electrostatic force, so cations and anions attract each other and readily form ionic compounds.
Ions consisting of only a single atom are termed atomic or monatomic ions, while two or more atoms form molecular ions or polyatomic ions. In the case of physical ionization in a fluid (gas or liquid), "ion pairs" are created by spontaneous molecule collisions, where each generated pair consists of a free electron and a positive ion. Ions are also created by chemical interactions, such as the dissolution of a salt in liquids, or by other means, such as passing a direct current through a conducting solution, dissolving an anode via ionization.
History of discovery
The word ion was coined from the Greek neuter present participle of ienai (ἰέναι), meaning "to go". A cation is something that moves down (kato, κάτω, meaning "down") and an anion is something that moves up (ano, ἄνω, meaning "up"). They are so called because ions move toward the electrode of opposite charge. This term was introduced (after a suggestion by the English polymath William Whewell) by English physicist and chemist Michael Faraday in 1834 for the then-unknown species that goes from one electrode to the other through an aqueous medium. Faraday did not know the nature of these species, but he knew that, since metals dissolved into and entered a solution at one electrode and new metal came forth from a solution at the other electrode, some kind of
Document 3:::
The electric dipole moment is a measure of the separation of positive and negative electrical charges within a system, that is, a measure of the system's overall polarity. The SI unit for electric dipole moment is the coulomb-meter (C⋅m). The debye (D) is another unit of measurement used in atomic physics and chemistry.
Theoretically, an electric dipole is defined by the first-order term of the multipole expansion; it consists of two equal and opposite charges that are infinitesimally close together, although real dipoles have separated charge.
Elementary definition
Often in physics the dimensions of a massive object can be ignored and it can be treated as a pointlike object, i.e. a point particle. Point particles with electric charge are referred to as point charges. Two point charges, one with charge +q and the other with charge −q, separated by a distance d, constitute an electric dipole (a simple case of an electric multipole). For this case, the electric dipole moment has magnitude p = qd and is directed from the negative charge to the positive one. Some authors may split d in half and use s = d/2, since this quantity is the distance between either charge and the center of the dipole, leading to a factor of two in the definition.
A stronger mathematical definition is to use vector algebra, since a quantity with magnitude and direction, like the dipole moment of two point charges, can be expressed in vector form as p = qd, where d is the displacement vector pointing from the negative charge to the positive charge. The electric dipole moment vector p also points from the negative charge to the positive charge. With this definition the dipole direction tends to align itself with an external electric field (and note that the electric flux lines produced by the charges of the dipole itself, which point from the positive charge to the negative charge, then tend to oppose the flux lines of the external field). Note that this sign convention is used in physics, while the opposite sign convention for th
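To make the elementary definition concrete, here is a small numeric sketch of p = qd for two point charges ±q; the charge magnitude and separation are arbitrary illustrative values (roughly one elementary charge separated by one ångström), not figures taken from the text above.

```python
# Numeric illustration of the elementary dipole-moment definition p = q * d,
# where d is the displacement vector from the negative charge to the positive charge.
# The charge magnitude and positions below are arbitrary example values.
import numpy as np

q = 1.6e-19                               # charge magnitude in coulombs
r_plus = np.array([0.0, 0.0, 5.0e-11])    # position of +q (metres)
r_minus = np.array([0.0, 0.0, -5.0e-11])  # position of -q (metres)

d = r_plus - r_minus        # displacement vector from -q to +q
p = q * d                   # dipole moment vector, points from -q toward +q

print("dipole moment vector (C*m):", p)
print("magnitude (C*m):", np.linalg.norm(p))           # equals q * |d|
print("magnitude (debye):", np.linalg.norm(p) / 3.33564e-30)
```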
Document 4:::
The electric form factor is the Fourier transform of the electric charge distribution in a nucleon. Nucleons (protons and neutrons) are made of up and down quarks, which carry charges of +2/3 and −1/3, respectively. The study of form factors falls within the regime of perturbative QCD.
The idea originated from young William Thomson.
See also
Form factor (disambiguation)
Electrodynamics
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of charge does a neutron have?
A. half charge
B. positive charge
C. neutral or no charge
D. negative charge
Answer:
|
|
sciq-7161
|
multiple_choice
|
In humans, the first sites used for energy storage are liver and what else?
|
[
"reproductive organs",
"muscle cells",
"lungs",
"skin cells"
] |
B
|
Relevant Documents:
Document 0:::
The School of Biological Sciences is a School within the Faculty Biology, Medicine and Health at The University of Manchester. Biology at University of Manchester and its precursor institutions has gone through a number of reorganizations (see History below), the latest of which was the change from a Faculty of Life Sciences to the current School.
Academics
Research
The School, though unitary for teaching, is divided into a number of broadly defined sections for research purposes, these sections consist of: Cellular Systems, Disease Systems, Molecular Systems, Neuro Systems and Tissue Systems.
Research in the School is structured into multiple research groups including the following themes:
Cell-Matrix Research (part of the Wellcome Trust Centre for Cell-Matrix Research)
Cell Organisation and Dynamics
Computational and Evolutionary Biology
Developmental Biology
Environmental Research
Eye and Vision Sciences
Gene Regulation and Cellular Biotechnology
History of Science, Technology and Medicine
Immunology and Molecular Microbiology
Molecular Cancer Studies
Neurosciences (part of the University of Manchester Neurosciences Research Institute)
Physiological Systems & Disease
Structural and Functional Systems
The School hosts a number of research centres, including: the Manchester Centre for Biophysics and Catalysis, the Wellcome Trust Centre for Cell-Matrix Research, the Centre of Excellence in Biopharmaceuticals, the Centre for the History of Science, Technology and Medicine, the Centre for Integrative Mammalian Biology, and the Healing Foundation Centre for Tissue Regeneration. The Manchester Collaborative Centre for Inflammation Research is a joint endeavour with the Faculty of Medical and Human Sciences of Manchester University and industrial partners.
Research Assessment Exercise (2008)
The faculty entered research into the units of assessment (UOA) for Biological Sciences and Pre-clinical and Human Biological Sciences. In Biological Sciences 20% of outputs
Document 1:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 2:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
Document 3:::
The New Research Building (or NRB for short) is the largest building ever built by Harvard University, and was dedicated on September 24, 2003 by the then president of Harvard University, Lawrence H. Summers and the dean of the Harvard Medical School, Joseph Martin.
It is an integrated biomedical research facility, located at 77 Avenue Louis Pasteur, Boston, Massachusetts, at the Longwood Medical Area and has of space. It cost $260 million to build, and houses more than 800 researchers, and many more graduate students, lab assistants, and staff workers. The building sits across the street from the Boston Latin School on the former site of Boston English High School.
It constitutes the largest expansion of Harvard Medical School in the last 100 years. Among the many centers and institutes it houses is the Department of Genetics of the Harvard Medical School. It is also home to many meetings and has a 500-seat auditorium.
The architects were Architectural Resources Cambridge, Inc. (ARC) who are active in the Boston/Cambridge area and have built other educational and research facilities.
Document 4:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, a double-stranded macromolecule found in all living cells, carries the hereditary information of the cell; each cell carries chromosome(s) with a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In humans, the first sites used for energy storage are liver and what else?
A. reproductive organs
B. muscle cells
C. lungs
D. skin cells
Answer:
|
|
sciq-9749
|
multiple_choice
|
Does the stream-lined body featured on most fish increase or decrease water resistance?
|
[
"precipitates",
"increases",
"quickens",
"decrease"
] |
D
|
Relevant Documents:
Document 0:::
Fish anatomy is the study of the form or morphology of fish. It can be contrasted with fish physiology, which is the study of how the component parts of fish function together in the living fish. In practice, fish anatomy and fish physiology complement each other, the former dealing with the structure of a fish, its organs or component parts and how they are put together, such as might be observed on the dissecting table or under the microscope, and the latter dealing with how those components function together in living fish.
The anatomy of fish is often shaped by the physical characteristics of water, the medium in which fish live. Water is much denser than air, holds a relatively small amount of dissolved oxygen, and absorbs more light than air does. The body of a fish is divided into a head, trunk and tail, although the divisions between the three are not always externally visible. The skeleton, which forms the support structure inside the fish, is either made of cartilage (cartilaginous fish) or bone (bony fish). The main skeletal element is the vertebral column, composed of articulating vertebrae which are lightweight yet strong. The ribs attach to the spine and there are no limbs or limb girdles. The main external features of the fish, the fins, are composed of either bony or soft spines called rays which, with the exception of the caudal fins, have no direct connection with the spine. They are supported by the muscles which compose the main part of the trunk.
The heart has two chambers and pumps the blood through the respiratory surfaces of the gills and then around the body in a single circulatory loop. The eyes are adapted for seeing underwater and have only local vision. There is an inner ear but no external or middle ear. Low-frequency vibrations are detected by the lateral line system of sense organs that run along the length of the sides of fish, which responds to nearby movements and to changes in water pressure.
Sharks and rays are basal fish with
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
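For the adiabatic-expansion example above, a quick numeric check confirms the expected answer ("decreases"), assuming a reversible adiabatic process for a diatomic ideal gas so that T·V^(γ−1) stays constant; the starting temperature, volume, and γ = 1.4 are arbitrary illustrative values, not taken from the text.

```python
# Check of the conceptual question above: for a reversible adiabatic expansion of an
# ideal gas, T * V**(gamma - 1) is constant, so increasing the volume lowers T.
# gamma = 1.4 (diatomic gas) and the initial state are example assumptions.
gamma = 1.4
T1, V1 = 300.0, 1.0          # initial temperature (K) and volume (arbitrary units)
V2 = 2.0 * V1                # the gas expands to twice its volume

T2 = T1 * (V1 / V2) ** (gamma - 1)
print(f"temperature after expansion: {T2:.1f} K")   # about 227 K, i.e. it decreases
```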
Document 2:::
Large woody debris (LWD) are the logs, sticks, branches, and other wood that falls into streams and rivers. This debris can influence the flow and the shape of the stream channel. Large woody debris, grains, and the shape of the bed of the stream are the three main providers of flow resistance, and are thus, a major influence on the shape of the stream channel. Some stream channels have less LWD than they would naturally because of removal by watershed managers for flood control and aesthetic reasons.
The study of woody debris is important for its forestry management implications. Plantation thinning can reduce the potential for recruitment of LWD into proximal streams. The presence of large woody debris is important in the formation of pools which serve as salmon habitat in the Pacific Northwest. Entrainment of the large woody debris in a stream can also cause erosion and scouring around and under the LWD. The amount of scouring and erosion is determined by the ratio of the diameter of the piece to the depth of the stream, and by the embedding and orientation of the piece.
Influence on stream flow around bends
Large woody debris slow the flow through a bend in the stream, while accelerating flow in the constricted area downstream of the obstruction.
See also
Beaver dam
Coarse woody debris
Driftwood
Log jam
Stream restoration
Document 3:::
Hydraulic roughness is the measure of the amount of frictional resistance water experiences when passing over land and channel features.
One roughness coefficient is Manning's n-value. Manning's n is used extensively around the world to predict the degree of roughness in channels. Flow velocity is strongly dependent on the resistance to flow. An increase in this n value will cause a decrease in the velocity of water flowing across a surface.
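To illustrate how strongly the n value affects velocity, the short sketch below evaluates the standard SI form of Manning's formula, V = (1/n)·R^(2/3)·S^(1/2), for two roughness values; the hydraulic radius and slope are made-up example numbers, not data from the text.

```python
# Manning's equation (SI form): V = (1/n) * R_h**(2/3) * S**0.5,
# where R_h is the hydraulic radius (m) and S is the channel slope (dimensionless).
# The channel geometry below is an arbitrary illustrative example.

def manning_velocity(n, hydraulic_radius_m, slope):
    """Mean flow velocity (m/s) from Manning's formula in SI units."""
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

R_h, S = 1.2, 0.002           # 1.2 m hydraulic radius, 0.2% slope
for n in (0.030, 0.060):      # smoother channel vs. heavily vegetated, debris-laden channel
    print(f"n = {n:.3f}  ->  V = {manning_velocity(n, R_h, S):.2f} m/s")
```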
Manning's n
The value of Manning's n is affected by many variables. Factors like suspended load, sediment grain size, presence of bedrock or boulders in the stream channel, variations in channel width and depth, and overall sinuosity of the stream channel can all affect Manning's n value. Biological factors have the greatest overall effect on Manning's n; bank stabilization by vegetation, height of grass and brush across a floodplain, and stumps and logs creating natural dams are the main observable influences.
Biological Importance
Recent studies have found a relationship between hydraulic roughness and salmon spawning habitat: “bed-surface grain size is responsive to hydraulic roughness caused by bank irregularities, bars, and wood debris… We find that wood debris plays an important role at our study sites, not only providing hydraulic roughness but also influencing pool spacing, frequency of textural patches, and the amplitude and wavelength of bank and bar topography and their consequent roughness. Channels with progressively greater hydraulic roughness have systematically finer bed surfaces, presumably due to reduced bed shear stress, resulting in lower channel competence and diminished bed load transport capacity, both of which promote textural fining”. Textural fining of stream beds can affect more than just salmon spawning habitats: “bar and wood roughness create a greater variety of textural patches, offering a range of aquatic habitats that may promote biologic diversity or be of use to specific animals at differe
Document 4:::
A fish (: fish or fishes) is an aquatic, craniate, gill-bearing animal that lacks limbs with digits. Included in this definition are the living hagfish, lampreys, and cartilaginous and bony fish as well as various extinct related groups. Approximately 95% of living fish species are ray-finned fish, belonging to the class Actinopterygii, with around 99% of those being teleosts.
The earliest organisms that can be classified as fish were soft-bodied chordates that first appeared during the Cambrian period. Although they lacked a true spine, they possessed notochords which allowed them to be more agile than their invertebrate counterparts. Fish would continue to evolve through the Paleozoic era, diversifying into a wide variety of forms. Many fish of the Paleozoic developed external armor that protected them from predators. The first fish with jaws appeared in the Silurian period, after which many (such as sharks) became formidable marine predators rather than just the prey of arthropods.
Most fish are ectothermic ("cold-blooded"), allowing their body temperatures to vary as ambient temperatures change, though some of the large active swimmers like white shark and tuna can hold a higher core temperature. Fish can acoustically communicate with each other, most often in the context of feeding, aggression or courtship.
Fish are abundant in most bodies of water. They can be found in nearly all aquatic environments, from high mountain streams (e.g., char and gudgeon) to the abyssal and even hadal depths of the deepest oceans (e.g., cusk-eels and snailfish), although no species has yet been documented in the deepest 25% of the ocean. With 34,300 described species, fish exhibit greater species diversity than any other group of vertebrates.
Fish are an important resource for humans worldwide, especially as food. Commercial and subsistence fishers hunt fish in wild fisheries or farm them in ponds or in cages in the ocean (in aquaculture). They are also caught by recreational
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Does the stream-lined body featured on most fish increase or decrease water resistance?
A. precipitates
B. increases
C. quickens
D. decrease
Answer:
|
|
ai2_arc-1092
|
multiple_choice
|
In order for students to perform lab experiments safely and accurately, they should
|
[
"copy what the other students are doing.",
"ask the teacher to first demonstrate the entire experiment.",
"perform the experiment after memorizing the instructions.",
"read and understand all directions before starting the experiment."
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about that domain is then a subset of it; the set of
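As a minimal, hypothetical illustration of the "feasible collections of skills" idea described above, the sketch below checks whether a small family of skill sets contains the empty and full states and is closed under union, which is the basic closure property of a knowledge space; the skill names and the family itself are invented examples, not from the text.

```python
# Toy check of the knowledge-space structure described above: the family of feasible
# knowledge states should contain the empty state and the whole domain, and be closed
# under union. The skills and states here are invented illustrative examples.
from itertools import combinations

domain = frozenset({"counting", "addition", "multiplication"})
states = {
    frozenset(),
    frozenset({"counting"}),
    frozenset({"counting", "addition"}),
    domain,
}

def is_knowledge_space(states, domain):
    if frozenset() not in states or domain not in states:
        return False
    # closure under union: the union of any two feasible states must also be feasible
    return all((a | b) in states for a, b in combinations(states, 2))

print(is_knowledge_space(states, domain))   # True for this chain-like example
```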
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college careers at public or private two-year schools, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 3:::
Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria.
Introduction
Traditional exam script marking began in Cambridge in 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.)
Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental.
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel
Document 4:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In order for students to perform lab experiments safely and accurately, they should
A. copy what the other students are doing.
B. ask the teacher to first demonstrate the entire experiment.
C. perform the experiment after memorizing the instructions.
D. read and understand all directions before starting the experiment.
Answer:
|
|
sciq-3694
|
multiple_choice
|
What is a shortage of water that causes the soil to dry from the surface down called?
|
[
"drought",
"flood",
"tidal wave",
"overflowage"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Surface runoff (also known as overland flow or terrestrial runoff) is the unconfined flow of water over the ground surface, in contrast to channel runoff (or stream flow). It occurs when excess rainwater, stormwater, meltwater, or water from other sources can no longer infiltrate the soil sufficiently rapidly. This can occur when the soil is saturated by water to its full capacity, and the rain arrives more quickly than the soil can absorb it. Surface runoff often occurs because impervious areas (such as roofs and pavement) do not allow water to soak into the ground. Furthermore, runoff can occur through either natural or human-made processes.
Surface runoff is a major component of the water cycle. It is the primary agent of soil erosion by water. The land area producing runoff that drains to a common point is called a drainage basin.
Runoff that occurs on the ground surface before reaching a channel can be a nonpoint source of pollution, as it can carry human-made contaminants or natural forms of pollution (such as rotting leaves). Human-made contaminants in runoff include petroleum, pesticides, fertilizers and others. Much agricultural pollution is exacerbated by surface runoff, leading to a number of downstream impacts, including nutrient pollution that causes eutrophication.
In addition to causing water erosion and pollution, surface runoff in urban areas is a primary cause of urban flooding, which can result in property damage, damp and mold in basements, and street flooding.
Generation
Surface runoff is defined as precipitation (rain, snow, sleet, or hail) that reaches a surface stream without ever passing below the soil surface. It is distinct from direct runoff, which is runoff that reaches surface streams immediately after rainfall or melting snowfall and excludes runoff generated by the melting of snowpack or glaciers.
Snow and glacier melt occur only in areas cold enough for these to form permanently. Typically snowmelt will peak in the spring and glacie
Document 2:::
Field capacity is the amount of soil moisture or water content held in the soil after excess water has drained away and the rate of downward movement has decreased. This usually takes place 2–3 days after rain or irrigation in pervious soils of uniform structure and texture. The physical definition of field capacity (expressed symbolically as θfc) is the bulk water content retained in soil at −33 kPa (or −0.33 bar) of hydraulic head or suction pressure. The term originated from Israelsen and West and Frank Veihmeyer and Arthur Hendrickson.
Veihmeyer and Hendrickson realized the limitation in this measurement and commented that it is affected by so many factors that, precisely, it is not a constant (for a particular soil), yet it does serve as a practical measure of soil water-holding capacity. Field capacity improves on the concept of moisture equivalent by Lyman Briggs. Veihmeyer & Hendrickson proposed this concept as an attempt to improve water-use efficiency for farmers in California during 1949.
Field capacity is characterized by measuring the water content after wetting a soil profile, covering it (to prevent evaporation) and monitoring the change in soil moisture in the profile. The water content at which the rate of change becomes relatively small indicates that drainage has effectively ceased; this value is called field capacity and is also termed the drained upper limit (DUL).
Lorenzo A. Richards and Weaver found that the water content held by soil at a potential of −33 kPa (or −0.33 bar) correlates closely with field capacity (−10 kPa for sandy soils).
Criticism
There is also criticism of this concept; field capacity is a static measurement: in a field it depends upon the initial water content and the depth of wetting before the commencement of redistribution and the rate of change in water content over time. These conditions are not unique for a given soil.
See also
Available water capacity
Integral energy
Nonlimiting water range
Pedotransfer function
Permanent wilting point
Water poten
Document 3:::
Evapotranspiration (ET) is the combined processes which move water from the Earth's surface into the atmosphere. It covers both water evaporation (movement of water to the air directly from soil, canopies, and water bodies) and transpiration (evaporation that occurs through the stomata, or openings, in plant leaves). Evapotranspiration is an important part of the local water cycle and climate, and measurement of it plays a key role in agricultural irrigation and water resource management.
Definition of evapotranspiration
Evapotranspiration is a combination of evaporation and transpiration, measured in order to better understand crop water requirements, irrigation scheduling, and watershed management. The two key components of evapotranspiration are:
Evaporation: the movement of water directly to the air from sources such as the soil and water bodies. It can be affected by factors including heat, humidity, solar radiation and wind speed.
Transpiration: the movement of water from root systems, through a plant, and its exit into the air as water vapor. This exit occurs through stomata in the plant. The rate of transpiration can be influenced by factors including plant type, soil type, weather conditions and water content, and also cultivation practices.
Evapotranspiration is typically measured in millimeters of water (i.e. volume of water moved per unit area of the Earth's surface) in a set unit of time. Globally, it is estimated that on average between three-fifths and three-quarters of land precipitation is returned to the atmosphere via evapotranspiration.
Evapotranspiration does not, in general, account for other mechanisms which are involved in returning water to the atmosphere, though some of these, such as snow and ice sublimation in regions of high elevation or high latitude, can make a large contribution to atmospheric moisture even under standard conditions.
Factors that impact evapotranspiration levels
Primary factors
Because evaporation and transpiration
Document 4:::
In hydrology, run-on refers to surface runoff from an external area that flows on to an area of interest. A portion of run-on can infiltrate once it reaches the area of interest. Run-on is common in arid and semi-arid areas with patchy vegetation cover and short but intense thunderstorms. In these environments, surface runoff is usually generated by a failure of rainfall to infiltrate into the ground quickly enough (this runoff is termed infiltration excess overland flow). This is more likely to occur on bare soil, with low infiltration capacity. As runoff flows downslope, it may run-on to ground with higher infiltration capacity (such as beneath vegetation) and then infiltrate.
Run-on is an important process in the hydrological and ecohydrological behaviour of semi-arid ecosystems. Tiger bush is an example of a vegetation community that develops a patterned structure in response to, in part, the generation of runoff and run-on.
See also
Stormwater
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a shortage of water that causes the soil to dry from the surface down called?
A. drought
B. flood
C. tidal wave
D. overflowage
Answer:
|
|
sciq-8215
|
multiple_choice
|
Another measure of the effectiveness of a machine is its what?
|
[
"mechanical advantage",
"cost benefit",
"aesthetic effect",
"chemical advantage"
] |
A
|
Relevant Documents:
Document 0:::
In mechanical engineering, mechanical efficiency is a dimensionless ratio that measures the efficiency of a mechanism or machine in transforming the power input to the device into power output. A machine is a mechanical linkage in which force is applied at one point, and the force does work moving a load at another point. At any instant the power input to a machine is equal to the input force multiplied by the velocity of the input point; similarly, the power output is equal to the force exerted on the load multiplied by the velocity of the load. The mechanical efficiency of a machine (often represented by the Greek letter eta, η) is a dimensionless number between 0 and 1 that is the ratio between the power output of the machine and the power input: η = P_out / P_in.
Since a machine does not contain a source of energy, nor can it store energy, from conservation of energy the power output of a machine can never be greater than its input, so the efficiency can never be greater than 1.
All real machines lose energy to friction; the energy is dissipated as heat. Therefore, their power output is less than their power input (P_out < P_in).
Therefore, the efficiency of all real machines is less than 1. A hypothetical machine without friction is called an ideal machine; such a machine would not have any energy losses, so its output power would equal its input power, and its efficiency would be 1 (100%).
For hydropower turbines the efficiency is referred to as hydraulic efficiency.
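As a concrete illustration of the ratio just defined, the short sketch below computes η = P_out / P_in from input and output force and velocity pairs; the numbers are arbitrary examples, not measurements of any real machine.

```python
# Mechanical efficiency eta = P_out / P_in: a dimensionless number between 0 and 1.
# The force and velocity figures below are arbitrary example values.

def mechanical_efficiency(input_force_N, input_velocity_ms, output_force_N, output_velocity_ms):
    power_in = input_force_N * input_velocity_ms      # power delivered to the machine
    power_out = output_force_N * output_velocity_ms   # power delivered to the load
    return power_out / power_in

eta = mechanical_efficiency(input_force_N=200.0, input_velocity_ms=1.5,
                            output_force_N=550.0, output_velocity_ms=0.5)
print(f"efficiency = {eta:.2f}")   # 0.92 here; an ideal frictionless machine would give 1.00
```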
See also
Mechanical advantage
Thermal efficiency
Electrical efficiency
Internal combustion engine
Electric motor
Velocity ratio
Document 1:::
The term ideal machine refers to a hypothetical mechanical system in which energy and power are not lost or dissipated through friction, deformation, wear, or other inefficiencies. Ideal machines have the theoretical maximum performance, and therefore are used as a baseline for evaluating the performance of real machine systems.
A simple machine, such as a lever, pulley, or gear train, is "ideal" if the power input is equal to the power output of the device, which means there are no losses. In this case, the mechanical efficiency is 100%.
Mechanical efficiency is the performance of the machine compared to its theoretical maximum as performed by an ideal machine. The mechanical efficiency of a simple machine is calculated by dividing the actual power output by the ideal power output. This is usually expressed as a percentage.
Power loss in a real system can occur in many ways, such as through friction, deformation, wear, heat losses, incomplete chemical conversion, magnetic and electrical losses.
Criteria
A machine consists of a power source and a mechanism for the controlled use of this power. The power source often relies on chemical conversion to generate heat which is then used to generate power. Each stage of the process of power generation has a maximum performance limit which is identified as ideal.
Once the power is generated the mechanism components of the machine direct it toward useful forces and movement. The ideal mechanism does not absorb any power, which means the power input is equal to the power output.
An example is the automobile engine (internal combustion engine) which burns fuel (an exothermic chemical reaction) inside a cylinder and uses the expanding gases to drive a piston. The movement of the piston rotates the crank shaft. The remaining mechanical components such as the transmission, drive shaft, differential, axles and wheels form the power transmission mechanism that directs the power from the engine into friction forces o
Document 2:::
The machine industry or machinery industry is a subsector of the industry that produces and maintains machines for consumers, the industry, and most other companies in the economy.
This machine industry traditionally belongs to the heavy industry. Nowadays, many smaller companies in this branch are considered part of the light industry. Most manufacturers in the machinery industry are called machine factories.
Overview
The machine industry is a subsector of the industry that produces a range of products, from power tools, different types of machines, and domestic technology to factory equipment. On the one hand, the machine industry provides:
The means of production for businesses in agriculture, mining, industry and construction.
The means of production for public utilities, such as equipment for the production and distribution of gas, electricity and water.
A range of supporting equipment for all sectors of the economy, such as equipment for heating, ventilation, and air conditioning of buildings.
These means of production are called capital goods, because a certain amount of capital is invested. Many of these production machines require regular maintenance, which is supplied by specialized companies in the machine industry.
On the other hand, the machinery industry supplies consumer goods, including kitchen appliances, refrigerators, washers, dryers and the like. Production of radios and televisions, however, is generally considered to belong to the electrical equipment industry. The machinery industry itself is a major customer of the steel industry.
The production of the machinery industry varies widely, from single-unit production and series production to mass production. Single-unit production involves constructing unique products built to specific customer requirements. Due to modular design, such devices and machines can often be manufactured in small series, which significantly reduces the costs. From a certain stage in the production
Document 3:::
A machine is a physical system using power to apply forces and control movement to perform an action. The term is commonly applied to artificial devices, such as those employing engines or motors, but also to natural biological macromolecules, such as molecular machines. Machines can be driven by animals and people, by natural forces such as wind and water, and by chemical, thermal, or electrical power, and include a system of mechanisms that shape the actuator input to achieve a specific application of output forces and movement. They can also include computers and sensors that monitor performance and plan movement, often called mechanical systems.
Renaissance natural philosophers identified six simple machines which were the elementary devices that put a load into motion, and calculated the ratio of output force to input force, known today as mechanical advantage.
Modern machines are complex systems that consist of structural elements, mechanisms and control components and include interfaces for convenient use. Examples include: a wide range of vehicles, such as trains, automobiles, boats and airplanes; appliances in the home and office, including computers, building air handling and water handling systems; as well as farm machinery, machine tools and factory automation systems and robots.
Etymology
The English word machine comes through Middle French from Latin machina, which in turn derives from the Greek mēchanē (Doric machana, Ionic mēchanē, 'contrivance, machine, engine', a derivation from mēchos 'means, expedient, remedy'). The word mechanical (Greek: mēchanikos) comes from the same Greek roots. A wider meaning of 'fabric, structure' is found in classical Latin, but not in Greek usage. This meaning is found in late medieval French, and is adopted from the French into English in the mid-16th century.
In the 17th century, the word machine could also mean a scheme or plot, a meaning now expressed by the derived machination. The modern meaning develops out of specialized application of the term to st
Document 4:::
This is an alphabetical list of articles pertaining specifically to mechanical engineering. For a broad overview of engineering, please see List of engineering topics. For biographies please see List of engineers.
A
Acceleration –
Accuracy and precision –
Actual mechanical advantage –
Aerodynamics –
Agitator (device) –
Air handler –
Air conditioner –
Air preheater –
Allowance –
American Machinists' Handbook –
American Society of Mechanical Engineers –
Ampere –
Applied mechanics –
Antifriction –
Archimedes' screw –
Artificial intelligence –
Automaton clock –
Automobile –
Automotive engineering –
Axle –
Air Compressor
B
Backlash –
Balancing –
Beale Number –
Bearing –
Belt (mechanical) –
Bending –
Biomechatronics –
Bogie –
Brittle –
Buckling –
Bus--
Bushing –
Boilers & boiler systems
BIW--
C
CAD –
CAM –
CAID –
Calculator –
Calculus –
Car handling –
Carbon fiber –
Classical mechanics –
Clean room design –
Clock –
Clutch –
CNC –
Coefficient of thermal expansion –
Coil spring –
Combustion –
Composite material –
Compression ratio –
Compressive strength –
Computational fluid dynamics –
Computer –
Computer-aided design –
Computer-aided industrial design –
Computer-numerically controlled –
Conservation of mass –
Constant-velocity joint –
Constraint –
Continuum mechanics –
Control theory –
Corrosion –
Cotter pin –
Crankshaft –
Cybernetics –
D
Damping ratio –
Deformation (engineering) –
Delamination –
Design –
Diesel Engine –
Differential –
Dimensionless number –
Diode –
Diode laser –
Drafting –
Drifting –
Driveshaft –
Dynamics –
Design for Manufacturability for CNC machining –
E
Elasticity –
Elasticity tensor -
Electric motor –
Electrical engineering –
Electrical circuit –
Electrical network –
Electromagnetism –
Electronic circuit –
Electronics –
Energy –
Engine –
Engineering –
Engineering cybernetics –
Engineering drawing –
Engineering economics –
Engineering ethics –
Engineering management –
Engineering society –
Exploratory engineering –
F
( Fits and tolerances)---
Fa
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Another measure of the effectiveness of a machine is its what?
A. mechanical advantage
B. cost benefit
C. aesthetic effect
D. chemical advantage
Answer:
|
|
sciq-10156
|
multiple_choice
|
What type of reaction is a transfer of a proton from one molecule or ion to another?
|
[
"acid-base",
"ionization",
"thermal reaction",
"ionic bonding"
] |
A
|
Relevant Documents:
Document 0:::
Gas phase ion chemistry is a field of science encompassed within both chemistry and physics. It is the science that studies ions and molecules in the gas phase, most often enabled by some form of mass spectrometry. By far the most important application of this science is the study of the thermodynamics and kinetics of reactions. For example, one application is in studying the thermodynamics of the solvation of ions. Ions with small solvation spheres of 1, 2, 3... solvent molecules can be studied in the gas phase and then extrapolated to bulk solution.
Theory
Transition state theory
Transition state theory is the theory of the rates of elementary reactions which assumes a special type of chemical equilibrium (quasi-equilibrium) between reactants and activated complexes.
RRKM theory
RRKM theory is used to compute simple estimates of the unimolecular ion decomposition reaction rates from a few characteristics of the potential energy surface.
Gas phase ion formation
The process of converting an atom or molecule into an ion by adding or removing charged particles such as electrons or other ions can occur in the gas phase. These processes are an important component of gas phase ion chemistry.
Associative ionization
Associative ionization is a gas phase reaction in which two atoms or molecules interact to form a single product ion:
A* + B → AB+ + e−
where species A with excess internal energy (indicated by the asterisk) interacts with B to form the ion AB+.
One or both of the interacting species may have excess internal energy.
Charge-exchange ionization
Charge-exchange ionization (also called charge-transfer ionization) is a gas phase reaction between an ion and a neutral species
in which the charge of the ion is transferred to the neutral.
Chemical ionization
In chemical ionization, ions are produced through the reaction of ions of a reagent gas with other species. Some common reagent gases include: methane, ammonia, and isobutane.
Chemi-ionization
Chemi-ionization can
Document 1:::
Hydrogen–deuterium exchange (also called H–D or H/D exchange) is a chemical reaction in which a covalently bonded hydrogen atom is replaced by a deuterium atom, or vice versa. It can be applied most easily to exchangeable protons and deuterons, where such a transformation occurs in the presence of a suitable deuterium source, without any catalyst. The use of acid, base or metal catalysts, coupled with conditions of increased temperature and pressure, can facilitate the exchange of non-exchangeable hydrogen atoms, so long as the substrate is robust to the conditions and reagents employed. This often results in perdeuteration: hydrogen-deuterium exchange of all non-exchangeable hydrogen atoms in a molecule.
An example of exchangeable protons which are commonly examined in this way are the protons of the amides in the backbone of a protein. The method gives information about the solvent accessibility of various parts of the molecule, and thus the tertiary structure of the protein. The theoretical framework for understanding hydrogen exchange in proteins was first described by Kaj Ulrik Linderstrøm-Lang and he was the first to apply H/D exchange to study proteins.
Exchange reaction
In protic solution exchangeable protons such as those in hydroxyl or amine group exchange protons with the solvent. If D2O is solvent, deuterons will be incorporated at these positions. The exchange reaction can be followed using a variety of methods (see Detection). Since this exchange is an equilibrium reaction, the molar amount of deuterium should be high compared to the exchangeable protons of the substrate. For instance, deuterium is added to a protein in H2O by diluting the H2O solution with D2O (e.g. tenfold). Usually exchange is performed at physiological pH (7.0–8.0) where proteins are in their most native ensemble of conformational states.
The H/D exchange reaction can also be catalysed, by acid, base or metal catalysts such as platinum. For the backbone amide hydrogen atoms of p
Document 2:::
Electron transfer (ET) occurs when an electron relocates from an atom or molecule to another such chemical entity. ET is a mechanistic description of certain kinds of redox reactions involving transfer of electrons.
Electrochemical processes are ET reactions. ET reactions are relevant to photosynthesis and respiration and commonly involve transition metal complexes. In organic chemistry ET is a step in some commercial polymerization reactions. It is foundational to photoredox catalysis.
Classes of electron transfer
Inner-sphere electron transfer
In inner-sphere ET, the two redox centers are covalently linked during the ET. This bridge can be permanent, in which case the electron transfer event is termed intramolecular electron transfer. More commonly, however, the covalent linkage is transitory, forming just prior to the ET and then disconnecting following the ET event. In such cases, the electron transfer is termed intermolecular electron transfer. A famous example of an inner sphere ET process that proceeds via a transitory bridged intermediate is the reduction of [CoCl(NH3)5]2+ by [Cr(H2O)6]2+. In this case, the chloride ligand is the bridging ligand that covalently connects the redox partners.
Outer-sphere electron transfer
In outer-sphere ET reactions, the participating redox centers are not linked via any bridge during the ET event. Instead, the electron "hops" through space from the reducing center to the acceptor. Outer sphere electron transfer can occur between different chemical species or between identical chemical species that differ only in their oxidation state. The latter process is termed self-exchange. As an example, self-exchange describes the degenerate reaction between permanganate and its one-electron reduced relative manganate:
[MnO4]− + [Mn*O4]2− → [MnO4]2− + [Mn*O4]−
In general, if electron transfer is faster than ligand substitution, the reaction will follow the outer-sphere electron transfer.
Often occurs when one/both re
Document 3:::
Outer sphere refers to an electron transfer (ET) event that occurs between chemical species that remain separate and intact before, during, and after the ET event. In contrast, for inner sphere electron transfer the participating redox sites undergoing ET become connected by a chemical bridge. Because the ET in outer sphere electron transfer occurs between two non-connected species, the electron is forced to move through space from one redox center to the other.
Marcus theory
The main theory that describes the rates of outer sphere electron transfer was developed by Rudolph A. Marcus in the 1950s. A major aspect of Marcus theory is the dependence of the electron transfer rate on the thermodynamic driving force (difference in the redox potentials of the electron-exchanging sites). For most reactions, the rates increase with increased driving force. A second aspect is that the rate of outer sphere electron-transfer depends inversely on the "reorganizational energy." Reorganization energy describes the changes in bond lengths and angles that are required for the oxidant and reductant to switch their oxidation states. This energy is assessed by measurements of the self-exchange rates (see below).
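These two dependences are captured, as a sketch not spelled out in the excerpt, by the classical Marcus rate expression

k_{ET} \propto \exp\!\left(-\frac{(\lambda + \Delta G^{\circ})^{2}}{4 \lambda k_{B} T}\right)

where \Delta G^{\circ} is the thermodynamic driving force and \lambda the reorganization energy: the rate increases with driving force until -\Delta G^{\circ} = \lambda and decreases beyond that point (the Marcus inverted region).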
Outer sphere electron transfer is the most common type of electron transfer, especially in biochemistry, where redox centers are separated by several (up to about 11) angstroms by intervening protein. In biochemistry, there are two main types of outer sphere ET: ET between two biological molecules or fixed distance electron transfer, in which the electron transfers within a single biomolecule (e.g., intraprotein).
Examples
Self-exchange
Outer sphere electron transfer can occur between chemical species that are identical except for their oxidation state. This process is termed self-exchange. An example is the degenerate reaction between the tetrahedral ions permanganate and manganate:
[MnO4]− + [Mn*O4]2− → [MnO4]2− + [Mn*O4]−
For octahedral metal complexes, the rate co
Document 4:::
Radiation chemistry is a subdivision of nuclear chemistry which studies the chemical effects of ionizing radiation on matter. This is quite different from radiochemistry, as no radioactivity needs to be present in the material which is being chemically changed by the radiation. An example is the conversion of water into hydrogen gas and hydrogen peroxide.
Radiation interactions with matter
As ionizing radiation moves through matter its energy is deposited through interactions with the electrons of the absorber. The result of an interaction between the radiation and the absorbing species is removal of an electron from an atom or molecular bond to form radicals and excited species. The radical species then proceed to react with each other or with other molecules in their vicinity. It is the reactions of the radical species that are responsible for the changes observed following irradiation of a chemical system.
Charged radiation species (α and β particles) interact through Coulombic forces between the charges of the electrons in the absorbing medium and the charged radiation particle. These interactions occur continuously along the path of the incident particle until the kinetic energy of the particle is sufficiently depleted. Uncharged species (γ photons, x-rays) undergo a single event per photon, totally consuming the energy of the photon and leading to the ejection of an electron from a single atom. Electrons with sufficient energy proceed to interact with the absorbing medium identically to β radiation.
An important factor that distinguishes different radiation types from one another is the linear energy transfer (LET), which is the rate at which the radiation loses energy with distance traveled through the absorber. Low LET species are usually low mass, either photons or electron mass species (β particles, positrons) and interact sparsely along their path through the absorber, leading to isolated regions of reactive radical species. High LET species are usuall
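For reference (not stated as a formula in the excerpt), the linear energy transfer can be written as the energy dE deposited by the radiation per unit path length dx in the absorber:

\mathrm{LET} = \frac{dE}{dx}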
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of reaction is a transfer of a proton from one molecule or ion to another?
A. acid-base
B. ionization
C. thermal reaction
D. ionic bonding
Answer:
|
|
sciq-7906
|
multiple_choice
|
When describing motion, what factor is just as important as distance?
|
[
"pressure",
"momentum",
"velocity",
"direction"
] |
D
|
Relevant Documents:
Document 0:::
Linear motion, also called rectilinear motion, is one-dimensional motion along a straight line, and can therefore be described mathematically using only one spatial dimension. The linear motion can be of two types: uniform linear motion, with constant velocity (zero acceleration); and non-uniform linear motion, with variable velocity (non-zero acceleration). The motion of a particle (a point-like object) along a line can be described by its position , which varies with (time). An example of linear motion is an athlete running a 100-meter dash along a straight track.
Linear motion is the most basic of all motion. According to Newton's first law of motion, objects that do not experience any net force will continue to move in a straight line with a constant velocity until they are subjected to a net force. Under everyday circumstances, external forces such as gravity and friction can cause an object to change the direction of its motion, so that its motion cannot be described as linear.
One may compare linear motion to general motion. In general motion, a particle's position and velocity are described by vectors, which have a magnitude and direction. In linear motion, the directions of all the vectors describing the system are equal and constant which means the objects move along the same axis and do not change direction. The analysis of such systems may therefore be simplified by neglecting the direction components of the vectors involved and dealing only with the magnitude.
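As a short worked illustration (not part of the excerpt), the two types of linear motion named earlier can be written for motion along a single axis, with the second restricted to the special case of constant acceleration:

x(t) = x_{0} + v t \qquad \text{(uniform linear motion, } a = 0\text{)}

x(t) = x_{0} + v_{0} t + \tfrac{1}{2} a t^{2}, \qquad v(t) = v_{0} + a t \qquad \text{(constant acceleration } a \neq 0\text{)}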
Background
Displacement
The motion in which all the particles of a body move through the same distance in the same time is called translatory motion. There are two types of translatory motion: rectilinear motion and curvilinear motion. Since linear motion is a motion in a single dimension, the distance traveled by an object in a particular direction is the same as displacement. The SI unit of displacement is the metre. If is the initial position of an object and is the final position, then mat
Document 1:::
Physics education research (PER) is a form of discipline-based education research specifically related to the study of the teaching and learning of physics, often with the aim of improving the effectiveness of student learning. PER draws from other disciplines, such as sociology, cognitive science, education and linguistics, and complements them by reflecting the disciplinary knowledge and practices of physics. Approximately eighty-five institutions in the United States conduct research in science and physics education.
Goals
One primary goal of PER is to develop pedagogical techniques and strategies that will help students learn physics more effectively and help instructors to implement these techniques. Because even basic ideas in physics can be confusing, together with the possibility of scientific misconceptions formed from teaching through analogies, lecturing often does not erase common misconceptions about physics that students acquire before they are taught physics. Research often focuses on learning more about common misconceptions that students bring to the physics classroom so that techniques can be devised to help students overcome these misconceptions.
In most introductory physics courses, mechanics is usually the first area of physics that is taught. Newton's laws of motion about interactions between forces and objects are central to the study of mechanics. Many students hold the Aristotelian misconception that a net force is required to keep a body moving; instead, motion is modeled in modern physics with Newton's first law of inertia, stating that a body will keep its state of rest or movement unless a net force acts on the body. Like students who hold this misconception, Newton arrived at his three laws of motion through empirical analysis, although he did it with an extensive study of data that included astronomical observations. Students can erase such a misconception in a nearly frictionless environment, where they find that
Document 2:::
Velocity is the speed in combination with the direction of motion of an object. Velocity is a fundamental concept in kinematics, the branch of classical mechanics that describes the motion of bodies.
Velocity is a physical vector quantity: both magnitude and direction are needed to define it. The scalar absolute value (magnitude) of velocity is called speed, being a coherent derived unit whose quantity is measured in the SI (metric system) as metres per second (m/s or m⋅s−1). For example, "5 metres per second" is a scalar, whereas "5 metres per second east" is a vector. If there is a change in speed, direction or both, then the object is said to be undergoing an acceleration.
Constant velocity vs acceleration
To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion in a straight path thus, a constant velocity means motion in a straight line at a constant speed.
For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed, but does not have a constant velocity because its direction changes. Hence, the car is considered to be undergoing an acceleration.
Difference between speed and velocity
While the terms speed and velocity are often colloquially used interchangeably to connote how fast an object is moving, in scientific terms they are different. Speed, the scalar magnitude of a velocity vector, denotes only how fast an object is moving, while velocity indicates both an object's speed and direction.
Equation of motion
Average velocity
Velocity is defined as the rate of change of position with respect to time, which may also be referred to as the instantaneous velocity to emphasize the distinction from the average velocity. In some applications the average velocity of an object might be needed, that is to say, the constant velocity that would provide the same resultant displacement as a variable velocity in the same time interval, , over some
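To make the distinction concrete (a sketch, not part of the excerpt): the instantaneous velocity is the time derivative of the position vector, while the average velocity over an interval is the displacement divided by the elapsed time:

\mathbf{v} = \frac{d\mathbf{x}}{dt}, \qquad \bar{\mathbf{v}} = \frac{\Delta \mathbf{x}}{\Delta t}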
Document 3:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
Document 4:::
In physics, a number of noted theories of the motion of objects have developed. Among the best known are:
Classical mechanics
Newton's laws of motion
Euler's laws of motion
Cauchy's equations of motion
Kepler's laws of planetary motion
General relativity
Special relativity
Quantum mechanics
Motion (physics)
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When describing motion, what factor is just as important as distance?
A. pressure
B. momentum
C. velocity
D. direction
Answer:
|
|
ai2_arc-832
|
multiple_choice
|
Which factors can have the greatest effect on the health of a river system?
|
[
"type of soil and salinity",
"nitrate levels and turbidity",
"human consumption and pH",
"natural disasters and tidal changes"
] |
B
|
Relevant Documents:
Document 0:::
Nutrient cycling in the Columbia River Basin involves the transport of nutrients through the system, as well as transformations from among dissolved, solid, and gaseous phases, depending on the element. The elements that constitute important nutrient cycles include macronutrients such as nitrogen (as ammonium, nitrite, and nitrate), silicate, phosphorus, and micronutrients, which are found in trace amounts, such as iron. Their cycling within a system is controlled by many biological, chemical, and physical processes.
The Columbia River Basin is the largest freshwater system of the Pacific Northwest, and due to its complexity, size, and modification by humans, nutrient cycling within the system is affected by many different components. Both natural and anthropogenic processes are involved in the cycling of nutrients. Natural processes in the system include estuarine mixing of fresh and ocean waters, and climate variability patterns such as the Pacific Decadal Oscillation and the El Nino Southern Oscillation (both climatic cycles that affect the amount of regional snowpack and river discharge). Natural sources of nutrients in the Columbia River include weathering, leaf litter, salmon carcasses, runoff from its tributaries, and ocean estuary exchange. Major anthropogenic impacts to nutrients in the basin are due to fertilizers from agriculture, sewage systems, logging, and the construction of dams.
Nutrients dynamics vary in the river basin from the headwaters to the main river and dams, to finally reaching the Columbia River estuary and ocean. Upstream in the headwaters, salmon runs are the main source of nutrients. Dams along the river impact nutrient cycling by increasing residence time of nutrients, and reducing the transport of silicate to the estuary, which directly impacts diatoms, a type of phytoplankton. The dams are also a barrier to salmon migration, and can increase the amount of methane locally produced. The Columbia River estuary exports high rates of n
Document 1:::
River ecosystems are flowing waters that drain the landscape, and include the biotic (living) interactions amongst plants, animals and micro-organisms, as well as abiotic (nonliving) physical and chemical interactions of its many parts. River ecosystems are part of larger watershed networks or catchments, where smaller headwater streams drain into mid-size streams, which progressively drain into larger river networks. The major zones in river ecosystems are determined by the river bed's gradient or by the velocity of the current. Faster moving turbulent water typically contains greater concentrations of dissolved oxygen, which supports greater biodiversity than the slow-moving water of pools. These distinctions form the basis for the division of rivers into upland and lowland rivers.
The food base of streams within riparian forests is mostly derived from the trees, but wider streams and those that lack a canopy derive the majority of their food base from algae. Anadromous fish are also an important source of nutrients. Environmental threats to rivers include loss of water, dams, chemical pollution and introduced species. A dam produces negative effects that continue down the watershed. The most important negative effects are the reduction of spring flooding, which damages wetlands, and the retention of sediment, which leads to the loss of deltaic wetlands.
River ecosystems are prime examples of lotic ecosystems. Lotic refers to flowing water, from the Latin , meaning washed. Lotic waters range from springs only a few centimeters wide to major rivers kilometers in width. Much of this article applies to lotic ecosystems in general, including related lotic systems such as streams and springs. Lotic ecosystems can be contrasted with lentic ecosystems, which involve relatively still terrestrial waters such as lakes, ponds, and wetlands. Together, these two ecosystems form the more general study area of freshwater or aquatic ecology.
The following unifying characterist
Document 2:::
Definition
Ecohydraulics is an interdisciplinary science studying (1) the hydrodynamic factors that affect the survival and reproduction of aquatic organisms and (2) the activities of aquatic organisms that affect hydraulics and water quality. Considerations include habitat maintenance or development, habitat-flow interactions, and organism responses. Ecohydraulics assesses the magnitude and timing of flows necessary to maintain a river ecosystem and provides tools to characterize the relation between flow discharge, flow field, and the availability of habitat within a river ecosystem. Based on this relation and insights into the hydraulic conditions optimal for different species or communities, ecohydraulic modeling predicts how changes in hydraulic conditions in a river, under different development scenarios, alter the aquatic habitat of species or ecological communities. Similar considerations also apply to coastal, lake, and marine ecosystems.
In the past century, hydraulic engineers have been challenged by habitat modeling, complicated by lack of knowledge regarding ecohydraulics. Since the 1990s, especially after the first International Symposium on Ecohydraulics in 1994, ecohydraulics has developed rapidly, mainly to assess the impacts of human-induced changes of water flow and sediment conditions in river ecosystems...
Ecohydraulics analyzes, models, and seeks to mitigate the adverse impacts of changes in hydraulic characteristics caused by dam construction and other human activities on the suitability of habitat for organisms, such as fish and invertebrates, and to predict changes in biological communities and biodiversity. Many articles report research findings about fluvial ecohydraulics. For example, the International Association for Hydro-Environment Engineering and Research (IAHR) and Taylor & Francis have been publishing the Journal of Ecohydraulics since 2016. The journal spans all topics in natural and applied ecohydraulics in all environmen
Document 3:::
The River Continuum Concept (RCC) is a model for classifying and describing flowing water, in addition to the classification of individual sections of water according to the occurrence of indicator organisms. The theory is based on the concept of dynamic equilibrium in which stream forms balance between physical parameters, such as width, depth, velocity, and sediment load, while also taking into account biological factors. It offers a framework for mapping out biological communities and an explanation for their sequence in individual sections of water. This allows the structure of the river to be more predictable as to the biological properties of the water. The concept was first developed in 1980 by Robin L. Vannote, with fellow researchers at Stroud Water Research Center.
Background of RCC
The River Continuum Concept is based on the idea that a watercourse is an open ecosystem that is in constant interaction with the bank and, moving from source to mouth, is constantly changing. The basis for this change in the overall system is the gradual change of physical environmental conditions such as the width, depth, water, flow characteristics, temperature, and the complexity of the water. According to Vannote's hypothesis, which is based on physical geomorphological theory, the structural and functional characteristics of stream communities are selected to conform to the most probable position or mean state of the physical system. As a river changes from headwaters to the lower reaches, there will be a change in the relationship between the production and consumption (respiration) of the material (P/R ratio).
The four scientists who collaborated with Dr. Vannote were Drs. G.Wayne Minshall (Idaho State University), Kenneth W. Cummins (Michigan State University), James R. Sedell (Oregon State University), and Colbert E. Cushing (Battelle-Pacific Northwest Laboratory). The group studied stream and river ecosystems in their respective geographical areas to support or disp
Document 4:::
SPEARpesticides (Species At Risk) is a trait-based biological indicator system for streams which quantitatively links pesticide contamination to the composition of invertebrate communities. The approach uses species traits that characterize the ecological requirements posed by pesticide contamination in running waters. Therefore, it is highly specific and only slightly influenced by other environmental factors. SPEARpesticides is linked to the quality classes of the EU Water Framework Directive (WFD).
History
SPEARpesticides was first developed for Central Germany and later updated. It has been adapted and validated for streams and mesocosms worldwide (including studies in Denmark, Finland, France, Germany, Switzerland, Australia, and Russia, as well as mesocosm experiments) and provides the first ecotoxicological approach to specifically determine the ecological effects of pesticides on aquatic invertebrate communities.
Calculation
SPEARpesticides estimates pesticide effects and contamination. The calculation is based on monitoring data of invertebrate communities as ascertained for the EU Water Framework Directive (WFD). A simplified version of SPEARpesticides is included in the ASTERICS software for assessing the ecological quality of rivers. A detailed analysis is enabled by the free SPEAR Calculator. The SPEAR Calculator provides most recent information on species traits and allows specific user settings.
The SPEARpesticides index is computed as the relative abundance of vulnerable 'SPecies At Risk' (SPEAR) likely to be affected by pesticides. Relevant species traits comprise the physiological sensitivity towards pesticides, generation time, migration ability and exposure probability. The indicator value of SPEARpesticides at a sampling site is calculated as follows:
with n = number of taxa; xi = abundance of taxon i; y = 1 if taxon i is classified as SPEAR-sensitive; y = 0 if taxon i is classified as SPEAR-insensitive.
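The formula itself is not reproduced in this excerpt. The sketch below is a minimal Python illustration of an abundance-weighted relative abundance of species at risk; the log10(4x + 1) weighting is the form commonly published for SPEARpesticides and should be treated as an assumption here, and the function name is purely illustrative.

import math

def spear_index(abundances, is_spear):
    # abundances: list of taxon abundances x_i at the sampling site
    # is_spear:   list of flags y_i (1 = taxon classified as SPEAR-sensitive, 0 otherwise)
    # Weight each taxon by log10(4*x + 1); this weighting is an assumption,
    # since the excerpt omits the published formula.
    weights = [math.log10(4 * x + 1) for x in abundances]
    total = sum(weights)
    if total == 0:
        return 0.0
    at_risk = sum(w for w, y in zip(weights, is_spear) if y == 1)
    return 100.0 * at_risk / total  # percentage of weighted abundance at risk

# Example: three taxa, the first two classified as SPEAR-sensitive.
print(spear_index([12, 5, 40], [1, 1, 0]))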
An application is available as a download for PC. Web address to download the SPEAR calculat
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which factors can have the greatest effect on the health of a river system?
A. type of soil and salinity
B. nitrate levels and turbidity
C. human consumption and pH
D. natural disasters and tidal changes
Answer:
|
|
sciq-9165
|
multiple_choice
|
What is continually released during childbirth through a positive feedback mechanism and prompts uterine contractions to push the fetal head toward the cervix?
|
[
"vasopressin",
"insulin",
"hemoglobin",
"oxytocin"
] |
D
|
Relevant Documents:
Document 0:::
Uterine glands or endometrial glands are tubular glands, lined by a simple columnar epithelium, found in the functional layer of the endometrium that lines the uterus. Their appearance varies during the menstrual cycle. During the proliferative phase, uterine glands appear long due to estrogen secretion by the ovaries. During the secretory phase, the uterine glands become very coiled with wide lumens and produce a glycogen-rich secretion known as histotroph or uterine milk. This change corresponds with an increase in blood flow to spiral arteries due to increased progesterone secretion from the corpus luteum. During the pre-menstrual phase, progesterone secretion decreases as the corpus luteum degenerates, which results in decreased blood flow to the spiral arteries. The functional layer of the uterus containing the glands becomes necrotic, and eventually sloughs off during the menstrual phase of the cycle.
They are of small size in the unimpregnated uterus, but shortly after impregnation become enlarged and elongated, presenting a contorted or waved appearance.
Function
Hormones produced in early pregnancy stimulate the uterine glands to secrete a number of substances to give nutrition and protection to the embryo and fetus, and the fetal membranes. These secretions are known as histiotroph, alternatively histotroph, and also as uterine milk. Important uterine milk proteins are glycodelin-A, and osteopontin.
Some secretory components from the uterine glands are taken up by the secondary yolk sac lining the exocoelomic cavity during pregnancy, and may thereby assist in providing fetal nutrition.
Document 1:::
Foetal cerebral redistribution or 'brain-sparing' is a diagnosis in foetal medicine. It is characterised by preferential flow of blood towards the brain at the expense of the other vital organs, and it occurs as a haemodynamic adaptation in foetuses which have placental insufficiency. The underlying mechanism is thought to be vasodilation of the cerebral arteries. Cerebral redistribution is defined by the presence of a low middle cerebral artery pulsatility index (MCA-PI). Ultrasound of the middle cerebral artery to examine the Doppler waveform is used to establish this. Although cerebral redistribution represents an effort to preserve brain development in the face of hypoxic stress, it is nonetheless associated with adverse neurodevelopmental outcome. The presence of cerebral redistribution will be one factor taken into consideration when deciding whether to artificially deliver a baby with placental insufficiency via induction of labour or caesarian section.
Document 2:::
The human reproductive system includes the male reproductive system which functions to produce and deposit sperm; and the female reproductive system which functions to produce egg cells, and to protect and nourish the fetus until birth. Humans have a high level of sexual differentiation. In addition to differences in nearly every reproductive organ, there are numerous differences in typical secondary sex characteristics.
Human reproduction usually involves internal fertilization by sexual intercourse. In this process, the male inserts his penis into the female's vagina and ejaculates semen, which contains sperm. A small proportion of the sperm pass through the cervix into the uterus, and then into the fallopian tubes for fertilization of the ovum. Only one sperm is required to fertilize the ovum. Upon successful fertilization, the fertilized ovum, or zygote, travels out of the fallopian tube and into the uterus, where it implants in the uterine wall. This marks the beginning of gestation, better known as pregnancy, which continues for around nine months as the fetus develops. When the fetus has developed to a certain point, pregnancy is concluded with childbirth, involving labor. During labor, the muscles of the uterus contract and the cervix dilates over the course of hours, and the baby passes out of the vagina. Human infants are completely dependent on their caregivers, and require high levels of parental care. Infants rely on their caregivers for comfort, cleanliness, and food. Food may be provided by breastfeeding or formula feeding.
Structure
Female
The human female reproductive system is a series of organs primarily located inside the body and around the pelvic region of a female that contribute towards the reproductive process. The human female reproductive system contains three main parts: the vulva, which leads to the vagina, the vaginal opening, to the uterus; the uterus, which holds the developing fetus; and the ovaries, which produce the female's o
Document 3:::
The Quilligan Scholars award, named after one of the founding fathers of Maternal-Fetal Medicine, Dr. Edward J. Quilligan, is a prestigious title in the field of Maternal-Fetal Medicine granted by the Society for Maternal-Fetal Medicine and The Pregnancy Foundation to a select group of promising residents in obstetrics and gynaecology who exhibit unparalleled potential to become future leaders in the field of perinatology.
Purpose
The purpose of the Quilligan Scholars Program is to identify future leaders in Perinatology early in their training and to offer them recognition, guidance, and educational opportunities to foster their careers. These individuals traditionally exhibit leadership, commitment, and interest in teaching, research, or public policy. Some of the activities provided by the program include paid attendance to the SMFM annual meeting, the provision of special courses and experiences, and the granting of personal mentorship from current leaders in the field of Maternal-fetal Medicine.
History
The year 2013 marked the 40th anniversary of the formal establishment of Maternal-fetal medicine (MFM) as a specialty, as 16 pioneers took the MFM boards for the first time in 1973. Amongst that group of pioneers was Dr. Edward J. Quilligan, who has gone on to dedicate decades of service to the advancement of women's health, through teaching, research, and leadership. To honour his legacy and his exemplary service to modern Obstetrics, the Society for Maternal-Fetal Medicine and The Pregnancy Foundation created the Quilligan Scholars program, and the first class of five recipients was inaugurated in 2014 at the Society for Maternal-Fetal Medicine annual meeting in New Orleans, Louisiana. Though the Quilligan Scholar title confers no monetary reward, the sponsored activities are covered by gracious donations from members of the medical community. The Society for Maternal-Fetal Medicine has agreed to give matching funds to the amount raised by The Pregnancy Fou
Document 4:::
Prenatal perception is the study of the extent of somatosensory and other types of perception during pregnancy. In practical terms, this means the study of fetuses; none of the accepted indicators of perception are present in embryos. Studies in the field inform the abortion debate, along with certain related pieces of legislation in countries affected by that debate. As of 2022, there is no scientific consensus on whether a fetus can feel pain.
Prenatal hearing
Numerous studies have found evidence indicating a fetus's ability to respond to auditory stimuli. The earliest fetal response to a sound stimulus has been observed at 16 weeks' gestational age, while the auditory system is fully functional at 25–29 weeks' gestation. At 33–41 weeks' gestation, the fetus is able to distinguish its mother's voice from others.
Prenatal pain
The hypothesis that human fetuses are capable of perceiving pain in the first trimester has little support, although fetuses at 14 weeks may respond to touch. A multidisciplinary systematic review from 2005 found limited evidence that thalamocortical pathways begin to function "around 29 to 30 weeks' gestational age", only after which a fetus is capable of feeling pain.
In March 2010, the Royal College of Obstetricians and Gynecologists submitted a report, concluding that "Current research shows that the sensory structures are not developed or specialized enough to respond to pain in a fetus of less than 24 weeks".
The report specifically identified the anterior cingulate as the area of the cerebral cortex responsible for pain processing. The anterior cingulate is part of the cerebral cortex, which begins to develop in the fetus at week 26. A co-author of that report revisited the evidence in 2020, specifically the functionality of the thalamic projections into the cortical subplate, and posited "an immediate and unreflective pain experience...from as early as 12 weeks."
There is a consensus among developmental neurobiologists that the
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is continually released during childbirth through a positive feedback mechanism and prompts uterine contractions to push the fetal head toward the cervix?
A. vasopressin
B. insulin
C. hemoglobin
D. oxytocin
Answer:
|
|
sciq-3541
|
multiple_choice
|
What chemical substances are secreted by animals that communicate by odor or taste?
|
[
"alaki",
"hormones",
"pheromones",
"acids"
] |
C
|
Relevant Documents:
Document 0:::
Olfactory glands, also known as Bowman's glands, are a type of nasal gland situated in the part of the olfactory mucosa beneath the olfactory epithelium, that is the lamina propria, a connective tissue also containing fibroblasts, blood vessels and bundles of fine axons from the olfactory neurons.
An olfactory gland consists of an acinus in the lamina propria and a secretory duct going out through the olfactory epithelium.
Electron microscopy studies show that olfactory glands contain cells with large secretory vesicles. Olfactory glands secrete the gel-forming mucin protein MUC5B. They might secrete proteins such as lactoferrin, lysozyme, amylase and IgA, similarly to serous glands. The exact composition of the secretions from olfactory glands is unclear, but there is evidence that they produce odorant-binding protein.
Function
The olfactory glands are tubuloalveolar glands surrounded by olfactory receptors and sustentacular cells in the olfactory epithelium. These glands produce mucus to lubricate the olfactory epithelium and dissolve odorant-containing gases. Several olfactory binding proteins are produced by the olfactory glands that help facilitate the transportation of odorants to the olfactory receptors. These cells express the mRNA for transforming growth factor α, which stimulates the production of new olfactory receptor cells.
See also
William Bowman
List of distinct cell types in the adult human body
Document 1:::
The biochemistry of body odor pertains to the chemical compounds in the body responsible for body odor and their kinetics.
Causes
Body odor encompasses axillary (underarm) odor and foot odor. It is caused by a combination of sweat gland secretions and normal skin microflora. In addition, androstane steroids and the ABCC11 transporter are essential for most axillary odor. Body odor is a complex phenomenon, with numerous compounds and catalysts involved in its genesis. Secretions from sweat glands are initially odorless, but preodoriferous compounds or malodor precursors in the secretions are transformed by skin surface bacteria into volatile odorous compounds that are responsible for body malodor. Water and nutrients secreted by sweat glands also contribute to body odor by creating an ideal environment for supporting the growth of skin surface bacteria.
Types
There are three types of sweat glands: eccrine, apocrine, and apoeccrine. Apocrine glands are primarily responsible for body malodor and, along with apoeccrine glands, are mostly expressed in the axillary (underarm) regions, whereas eccrine glands are distributed throughout virtually all of the rest of the skin in the body, although they are also particularly expressed in the axillary regions, and contribute to malodor to a relatively minor extent. Sebaceous glands, another type of secretory gland, are not sweat glands but instead secrete sebum (an oily substance), and may also contribute to body odor to some degree.
The main odorous compounds that contribute to axillary odor include:
Unsaturated or hydroxylated branched fatty acids, with the key ones being (E)-3-methyl-2-hexenoic acid (3M2H) and 3-hydroxy-3-methylhexanoic acid (HMHA)
Sulfanylalkanols, particularly 3-methyl-3-sulfanylhexan-1-ol (3M3SH)
Odoriferous androstane steroids, namely the pheromones androstenone (5α-androst-16-en-3-one) and androstenol (5α-androst-16-en-3α-ol)
These malodorous compounds are formed from non-odoriferous precursors
Document 2:::
Poison shyness, also called conditioned food aversion, is the avoidance of a toxic substance by an animal that has previously ingested that substance. Animals learn an association between stimulus characteristics, usually the taste or odor, of a toxic substance and the illness it produces; this allows them to detect and avoid the substance. Poison shyness occurs as an evolutionary adaptation in many animals, most prominently in generalists that feed on many different materials. It is often called bait shyness when it occurs during attempts at pest control of insects and animals. If the pest ingests the poison bait at sublethal doses, it typically detects and avoids the bait, rendering the bait ineffective.
In nature
For any organism to survive, it must have adaptive mechanisms to avoid toxicosis. In mammals, a variety of behavioral and physiological mechanisms have been identified that allow them to avoid being poisoned. First, there are innate rejection mechanisms such as the rejection of toxic materials that taste bitter. Second, there are other physiologically adaptive responses such as vomiting or alterations in the digestion and processing of toxic materials. Third, there are learned aversions to distinctive foods if ingestion is followed by illness.
A typical experiment tested food aversion learning in squirrel monkeys (Saimiri sciureus) and common marmosets (Callithrix jacchus), using several kinds of cues. Both species showed one-trial learning with the visual cues of color and shape, whereas only the marmosets did so with an olfactory cue. Both species showed a tendency for quicker acquisition of the association with visual cues than with the olfactory cue. All individuals from both species were able to remember the significance of the visual cues, color and shape, even after 4 months. However, illness was not necessarily prerequisite for food avoidance learning in these species, for highly concentrated but non-toxic bitter and sour tastes also induced r
Document 3:::
Vomeronasal receptors are a class of olfactory receptors that putatively function as receptors for pheromones. Pheromones have evolved in all animal phyla, to signal sex and dominance status, and are responsible for stereotypical social and sexual behaviour among members of the same species. In mammals, these chemical signals are believed to be detected primarily by the vomeronasal organ (VNO), a chemosensory organ located at the base of the nasal septum.
The VNO is present in most amphibia, reptiles and non-primate mammals but is absent in birds, adult catarrhine monkeys and apes. An active role for the human VNO in the detection of pheromones is disputed; the VNO is clearly present in the fetus but appears to be atrophied or absent in adults. Two distinct families of vomeronasal receptors – which putatively function as pheromone receptors – have been identified in the vomeronasal organ (V1Rs and V2Rs). While all are G protein-coupled receptors (GPCRs), they are distantly related to the receptors of the main olfactory system, highlighting their different role.
The V1 receptors share between 50 and 90% sequence identity but have little similarity to other families of G protein-coupled receptors. They appear to be distantly related to the mammalian T2R bitter taste receptors and the rhodopsin-like GPCRs. In rat, the family comprises 30–40 genes. These are expressed in the apical regions of the VNO, in neurons expressing Gi2. Coupling of the receptors to this protein mediates inositol trisphosphate signaling. A number of human V1 receptor homologues have also been found. The majority of these human sequences are pseudogenes, but an apparently functional receptor has been identified that is expressed in the human olfactory system.
The V2 receptors are members of GPCR family 3 and have close similarity to the extracellular calcium-sensing receptors. Rodents appear to have around 100 functional V2 receptors and many pseudogenes. These receptors are expressed in the ba
Document 4:::
Olfactory receptor, family 6, subfamily C, member 75 is a protein in humans that is encoded by the OR6C75 gene.
Olfactory receptors interact with odorant molecules in the nose to initiate a neuronal response that triggers the perception of a smell. The olfactory receptor proteins are members of a large family of G-protein-coupled receptors (GPCR) arising from single coding-exon genes. Olfactory receptors share a 7-transmembrane domain structure with many neurotransmitter receptors and hormone receptors and are responsible for the recognition and G protein-mediated transduction of odorant signals. The olfactory receptor gene family is the largest in the genome. The nomenclature assigned to the olfactory receptor genes and proteins for this organism is independent of other organisms.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What chemical substances are secreted by animals that communicate by odor or taste?
A. alaki
B. hormones
C. pheromones
D. acids
Answer:
|
|
sciq-7859
|
multiple_choice
|
Which human body system moves nutrients and other substances throughout the body?
|
[
"Muscular system",
"Integumentary system",
"Digestive system",
"cardiovascular system"
] |
D
|
Relevant Documents:
Document 0:::
A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and is determined by different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism.
Organ and tissue systems
These specific systems are widely studied in human anatomy and are also present in many other animals.
Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm.
Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus.
Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels.
Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine.
Integumentary system: skin, hair, fat, and nails.
Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons.
Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands.
Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. The lymphatic system includes functions including immune responses and development of antibodies.
Immune system: protects the organism from
Document 1:::
The human body is the structure of a human being. It is composed of many different types of cells that together create tissues and subsequently organs and then organ systems. They ensure homeostasis and the viability of the human body.
It comprises a head, hair, neck, torso (which includes the thorax and abdomen), arms and hands, legs and feet.
The study of the human body includes anatomy, physiology, histology and embryology. The body varies anatomically in known ways. Physiology focuses on the systems and organs of the human body and their functions. Many systems and mechanisms interact in order to maintain homeostasis, with safe levels of substances such as sugar and oxygen in the blood.
The body is studied by health professionals, physiologists, anatomists, and artists to assist them in their work.
Composition
The human body is composed of elements including hydrogen, oxygen, carbon, calcium and phosphorus. These elements reside in trillions of cells and non-cellular components of the body.
The adult male body is about 60% water for a total water content of some . This is made up of about of extracellular fluid including about of blood plasma and about of interstitial fluid, and about of fluid inside cells. The content, acidity and composition of the water inside and outside cells is carefully maintained. The main electrolytes in body water outside cells are sodium and chloride, whereas within cells it is potassium and other phosphates.
Cells
The body contains trillions of cells, the fundamental unit of life. At maturity, there are roughly 30–37 trillion cells in the body, an estimate arrived at by totaling the cell numbers of all the organs of the body and cell types. The body is also host to about the same number of non-human cells as well as multicellular organisms which reside in the gastrointestinal tract and on the skin. Not all parts of the body are made from cells. Cells sit in an extracellular matrix that consists of proteins such as collagen,
Document 2:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 3:::
Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use.
In the human digestive system, food enters the mouth and mechanical digestion of the food starts by the action of mastication (chewing), a form of mechanical digestion, and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food; the saliva also contains mucus, which lubricates the food, and hydrogen carbonate, which provides the ideal conditions of pH (alkaline) for amylase to work, and electrolytes (Na+, K+, Cl−, HCO−3). About 30% of starch is hydrolyzed into disaccharide in the oral cavity (mouth). After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. They provide a slimy layer that acts as a shield against the damag
Document 4:::
The School of Biological Sciences is a School within the Faculty Biology, Medicine and Health at The University of Manchester. Biology at University of Manchester and its precursor institutions has gone through a number of reorganizations (see History below), the latest of which was the change from a Faculty of Life Sciences to the current School.
Academics
Research
The School, though unitary for teaching, is divided into a number of broadly defined sections for research purposes, these sections consist of: Cellular Systems, Disease Systems, Molecular Systems, Neuro Systems and Tissue Systems.
Research in the School is structured into multiple research groups including the following themes:
Cell-Matrix Research (part of the Wellcome Trust Centre for Cell-Matrix Research)
Cell Organisation and Dynamics
Computational and Evolutionary Biology
Developmental Biology
Environmental Research
Eye and Vision Sciences
Gene Regulation and Cellular Biotechnology
History of Science, Technology and Medicine
Immunology and Molecular Microbiology
Molecular Cancer Studies
Neurosciences (part of the University of Manchester Neurosciences Research Institute)
Physiological Systems & Disease
Structural and Functional Systems
The School hosts a number of research centres, including: the Manchester Centre for Biophysics and Catalysis, the Wellcome Trust Centre for Cell-Matrix Research, the Centre of Excellence in Biopharmaceuticals, the Centre for the History of Science, Technology and Medicine, the Centre for Integrative Mammalian Biology, and the Healing Foundation Centre for Tissue Regeneration. The Manchester Collaborative Centre for Inflammation Research is a joint endeavour with the Faculty of Medical and Human Sciences of Manchester University and industrial partners.
Research Assessment Exercise (2008)
The faculty entered research into the units of assessment (UOA) for Biological Sciences and Pre-clinical and Human Biological Sciences. In Biological Sciences 20% of outputs
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which human body system moves nutrients and other substances throughout the body?
A. Muscular system
B. Integumentary system
C. Digestive system
D. cardiovascular system
Answer:
|
|
sciq-3355
|
multiple_choice
|
B cells and T cells are examples of what type of cells?
|
[
"white blood cells",
"cancer cells",
"skin cells",
"heart cells"
] |
A
|
Relevant Documents:
Document 0:::
White blood cells, also called leukocytes, immune cells, or immunocytes, are cells of the immune system that are involved in protecting the body against both infectious disease and foreign invaders. White blood cells include three main subtypes: granulocytes, lymphocytes and monocytes.
All white blood cells are produced and derived from multipotent cells in the bone marrow known as hematopoietic stem cells. Leukocytes are found throughout the body, including the blood and lymphatic system. All white blood cells have nuclei, which distinguishes them from the other blood cells, the anucleated red blood cells (RBCs) and platelets. The different white blood cells are usually classified by cell lineage (myeloid cells or lymphoid cells). White blood cells are part of the body's immune system. They help the body fight infection and other diseases. Types of white blood cells are granulocytes (neutrophils, eosinophils, and basophils), and agranulocytes (monocytes, and lymphocytes (T cells and B cells)). Myeloid cells (myelocytes) include neutrophils, eosinophils, mast cells, basophils, and monocytes. Monocytes are further subdivided into dendritic cells and macrophages. Monocytes, macrophages, and neutrophils are phagocytic. Lymphoid cells (lymphocytes) include T cells (subdivided into helper T cells, memory T cells, cytotoxic T cells), B cells (subdivided into plasma cells and memory B cells), and natural killer cells. Historically, white blood cells were classified by their physical characteristics (granulocytes and agranulocytes), but this classification system is less frequently used now. Produced in the bone marrow, white blood cells defend the body against infections and disease. An excess of white blood cells is usually due to infection or inflammation. Less commonly, a high white blood cell count could indicate certain blood cancers or bone marrow disorders.
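The lineage-based classification described in the paragraph above (myeloid versus lymphoid cells, each with their subtypes) can be captured as a small lookup structure. The following Python sketch is illustrative only and is not part of the source text; the dictionary layout and the lineage_of helper are assumptions made for the example.

# Minimal sketch (assumption, not from the source): the white-blood-cell
# classification from the paragraph above, keyed by cell lineage.
from typing import Optional

WHITE_BLOOD_CELLS = {
    "myeloid": {
        "neutrophil": [],
        "eosinophil": [],
        "mast cell": [],
        "basophil": [],
        "monocyte": ["dendritic cell", "macrophage"],  # monocytes subdivide further
    },
    "lymphoid": {
        "T cell": ["helper T cell", "memory T cell", "cytotoxic T cell"],
        "B cell": ["plasma cell", "memory B cell"],
        "natural killer cell": [],
    },
}

def lineage_of(cell_type: str) -> Optional[str]:
    """Return 'myeloid' or 'lymphoid' for a listed cell type, else None."""
    for lineage, groups in WHITE_BLOOD_CELLS.items():
        for group, subtypes in groups.items():
            if cell_type == group or cell_type in subtypes:
                return lineage
    return None

if __name__ == "__main__":
    print(lineage_of("macrophage"))  # myeloid
    print(lineage_of("B cell"))      # lymphoid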
The number of leukocytes in the blood is often an indicator of disease, and thus the white blood
Document 1:::
A splenocyte can be any one of the different white blood cell types as long as it is situated in the spleen or purified from splenic tissue.
Splenocytes consist of a variety of cell populations such as T and B lymphocytes, dendritic cells and macrophages, which have different immune functions.
Document 2:::
Lymph node stromal cells are essential to the structure and function of the lymph node whose functions include: creating an internal tissue scaffold for the support of hematopoietic cells; the release of small molecule chemical messengers that facilitate interactions between hematopoietic cells; the facilitation of the migration of hematopoietic cells; the presentation of antigens to immune cells at the initiation of the adaptive immune system; and the homeostasis of lymphocyte numbers. Stromal cells originate from multipotent mesenchymal stem cells.
Structure
Lymph nodes are enclosed in an external fibrous capsule, from which thin walls of sinew called trabeculae penetrate into the lymph node, partially dividing it. Beneath the external capsule and along the courses of the trabeculae, are peritrabecular and subcapsular sinuses. These sinuses are cavities containing macrophages (specialised cells which help to keep the extracellular matrix in order).
The interior of the lymph node has two regions: the cortex and the medulla. In the cortex, lymphoid tissue is organized into nodules. In the nodules, T lymphocytes are located in the T cell zone. B lymphocytes are located in the B cell follicle. The primary B cell follicle matures in germinal centers. In the medulla are hematopoietic cells (which contribute to the formation of the blood) and stromal cells.
Near the medulla is the hilum of lymph node. This is the place where blood vessels enter and leave the lymph node and lymphatic vessels leave the lymph node. Lymph vessels entering the node do so along the perimeter (outer surface).
Function
The lymph nodes, the spleen and Peyer's patches, together are known as secondary lymphoid organs. Lymph nodes are found between lymphatic ducts and blood vessels. Afferent lymphatic vessels bring lymph fluid from the peripheral tissues to the lymph nodes. The lymph tissue in the lymph nodes consists of immune cells (95%), for example lymphocytes, and stromal cells (1% to
Document 3:::
B cells, also known as B lymphocytes, are a type of white blood cell of the lymphocyte subtype. They function in the humoral immunity component of the adaptive immune system. B cells produce antibody molecules which may be either secreted or inserted into the plasma membrane where they serve as a part of B-cell receptors. When a naïve or memory B cell is activated by an antigen, it proliferates and differentiates into an antibody-secreting effector cell, known as a plasmablast or plasma cell. Additionally, B cells present antigens (they are also classified as professional antigen-presenting cells (APCs)) and secrete cytokines. In mammals, B cells mature in the bone marrow, which is at the core of most bones. In birds, B cells mature in the bursa of Fabricius, a lymphoid organ where they were first discovered by Chang and Glick, which is why the 'B' stands for bursa and not bone marrow as commonly believed.
B cells, unlike the other two classes of lymphocytes, T cells and natural killer cells, express B cell receptors (BCRs) on their cell membrane. BCRs allow the B cell to bind to a foreign antigen, against which it will initiate an antibody response. B cell receptors are extremely specific, with all BCRs on a B cell recognizing the same epitope.
Development
B cells develop from hematopoietic stem cells (HSCs) that originate from bone marrow. HSCs first differentiate into multipotent progenitor (MPP) cells, then common lymphoid progenitor (CLP) cells. From here, their development into B cells occurs in several stages, each marked by various gene expression patterns and immunoglobulin H chain and L chain gene loci arrangements, the latter due to B cells undergoing V(D)J recombination as they develop.
B cells undergo two types of selection while developing in the bone marrow to ensure proper development, both involving B cell receptors (BCR) on the surface of the cell. Positive selection occurs through antigen-independent signalling inv
Document 4:::
Null cells are large granular lymphocytes that develop inside the bone marrow and attack pathogens and abnormal cells. These cells do not have receptors like one would typically find on either mature B cells or T cells; that is, null cells lack the surface markers that characterize mature B cells and T cells. Null cells are, in fact, T cells that fail to express CD2. Even though they are large granular lymphocytes, they are still relatively small, chromophobic cells, meaning that when viewed under a light microscope these cells appear small. Null cells are present in small numbers in lymphoid organs but are often found in nonlymphoid tissues. While they do not contain known anterior pituitary hormones in their cytoplasm, they do contain secretory granules that may hold hormone fragments, precursors, or biologically inactive substances. These cells are seen as a representation of resting cells, precursors of various cell types, or an unknown cell type.
Null cells account for a small proportion of the lymphocytes found in an organism. They are quick to act in the presence of pathogens like viruses and attack viral-infected or tumor cells in a non-MHC-restricted manner. The number of null cells has increased over time in subpopulations of mononuclear cells. Mononuclear cells are blood cells that have a round and single nucleus like lymphocytes and monocytes. They are called peripheral blood mononuclear cells (PBMC) when isolated from circulating blood. However, they are found elsewhere, like the umbilical cord, spleen, and bone marrow. With null cells increasing during an immune response, the changes are believed to be due to defects involved with an aging immune system and can be used as a representation of a healthy immune system in the healthy aged group, which is linked to survival.
Null cells are in small numbers in lymphoid organs but are often found in nonlymph
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
B cells and T cells are examples of what type of cells?
A. white blood cells
B. cancer cells
C. skin cells
D. heart cells
Answer:
|
|
sciq-5146
|
multiple_choice
|
What vessels supply blood to the myocardium and other components of the heart?
|
[
"surface arteries",
"rapid arteries",
"coronary arteries",
"specialized arteries"
] |
C
|
Relavent Documents:
Document 0:::
The coronary arteries are the arterial blood vessels of coronary circulation, which transport oxygenated blood to the heart muscle. The heart requires a continuous supply of oxygen to function and survive, much like any other tissue or organ of the body.
The coronary arteries wrap around the entire heart. The two main branches are the left coronary artery and right coronary artery. The arteries can additionally be categorized based on the area of the heart for which they provide circulation. These categories are called epicardial (above the epicardium, or the outermost tissue of the heart) and microvascular (close to the endocardium, or the innermost tissue of the heart).
Reduced function of the coronary arteries can lead to decreased flow of oxygen and nutrients to the heart. Not only does this affect supply to the heart muscle itself, but it also can affect the ability of the heart to pump blood throughout the body. Therefore, any disorder or disease of the coronary arteries can have a serious impact on health, possibly leading to angina, a heart attack, and even death.
Structure
The coronary arteries are mainly composed of the left and right coronary arteries, both of which give off several branches, as shown in the 'coronary artery flow' figure.
Aorta
  Left coronary artery
    Left anterior descending artery
    Left circumflex artery
      Posterior descending artery
    Ramus or intermediate artery
  Right coronary artery
    Right marginal artery
    Posterior descending artery
The left coronary artery arises from the aorta within the left cusp of the aortic valve and feeds blood to the left side of the heart. It branches into two arteries, the left anterior descending and the left circumflex. The left anterior descending artery perfuses the interventricular septum and anterior wall of the left ventricle. The left circumflex artery perfuses the left ventricular free wall. In approximately 33% of individuals, the left coronary artery gives rise to the posterior descending artery wh
Document 1:::
Great vessels are the large vessels that bring blood to and from the heart. These are:
Superior vena cava
Inferior vena cava
Pulmonary arteries
Pulmonary veins
Aorta
Transposition of the great vessels is a group of congenital heart defects involving an abnormal spatial arrangement of any of the great vessels.
Document 2:::
The uterine artery supplies branches to the cervix uteri and others which descend on the vagina; the latter anastomose with branches of the vaginal arteries and form with them two median longitudinal vessels—the vaginal branches of uterine artery (or azygos arteries of the vagina)—one of which runs down in front of and the other behind the vagina.
Document 3:::
Veins () are blood vessels in the circulatory system of humans and most other animals that carry blood toward the heart. Most veins carry deoxygenated blood from the tissues back to the heart; exceptions are those of the pulmonary and fetal circulations which carry oxygenated blood to the heart. In the systemic circulation arteries carry oxygenated blood away from the heart, and veins return deoxygenated blood to the heart, in the deep veins.
There are three sizes of veins, large, medium, and small. Smaller veins are called venules, and the smallest the post-capillary venules are microscopic that make up the veins of the microcirculation. Veins are often closer to the skin than arteries.
Veins have less smooth muscle and connective tissue and wider internal diameters than arteries. Because of their thinner walls and wider lumens they are able to expand and hold more blood. This greater capacity gives them the term of capacitance vessels. At any time, nearly 70% of the total volume of blood in the human body is in the veins. In medium and large sized veins the flow of blood is maintained by one-way (unidirectional) venous valves to prevent backflow. In the lower limbs this is also aided by muscle pumps, also known as venous pumps that exert pressure on intramuscular veins when they contract and drive blood back to the heart.
Structure
There are three sizes of vein, large, medium, and small. Smaller veins are called venules. The smallest veins are the post-capillary venules. Veins have a similar three-layered structure to arteries. The layers known as tunicae have a concentric arrangement that forms the wall of the vessel. The outer layer, is a thick layer of connective tissue called the tunica externa or adventitia; this layer is absent in the post-capillary venules. The middle layer, consists of bands of smooth muscle and is known as the tunica media. The inner layer, is a thin lining of endothelium known as the tunica intima. The tunica media in the veins is mu
Document 4:::
The pulmonary circulation is a division of the circulatory system in all vertebrates. The circuit begins with deoxygenated blood returned from the body to the right atrium of the heart where it is pumped out from the right ventricle to the lungs. In the lungs the blood is oxygenated and returned to the left atrium to complete the circuit.
The other division of the circulatory system is the systemic circulation that begins with receiving the oxygenated blood from the pulmonary circulation into the left atrium. From the atrium the oxygenated blood enters the left ventricle where it is pumped out to the rest of the body, returning as deoxygenated blood back to the pulmonary circulation.
The blood vessels of the pulmonary circulation are the pulmonary arteries and the pulmonary veins.
A separate circulatory circuit known as the bronchial circulation supplies oxygenated blood to the tissue of the larger airways of the lung.
Structure
De-oxygenated blood leaves the heart, goes to the lungs, and then enters back into the heart. De-oxygenated blood leaves through the right ventricle through the pulmonary artery. From the right atrium, the blood is pumped through the tricuspid valve (or right atrioventricular valve) into the right ventricle. Blood is then pumped from the right ventricle through the pulmonary valve and into the pulmonary artery.
Lungs
The pulmonary arteries carry deoxygenated blood to the lungs, where carbon dioxide is released and oxygen is picked up during respiration. Arteries are further divided into very fine capillaries which are extremely thin-walled. The pulmonary veins return oxygenated blood to the left atrium of the heart.
Veins
Oxygenated blood leaves the lungs through pulmonary veins, which return it to the left part of the heart, completing the pulmonary cycle. This blood then enters the left atrium, which pumps it through the mitral valve into the left ventricle. From the left ventricle, the blood passes through the aortic valve to the
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What vessels supply blood to the myocardium and other components of the heart?
A. surface arteries
B. rapid arteries
C. coronary arteries
D. specialized arteries
Answer:
|
|
sciq-2972
|
multiple_choice
|
The DNA is wound around proteins called what?
|
[
"leptons",
"pepsins",
"histones",
"nucleotides"
] |
C
|
Relavent Documents:
Document 0:::
This is a list of topics in molecular biology. See also index of biochemistry articles.
Document 1:::
The School of Biological Sciences is a School within the Faculty of Biology, Medicine and Health at The University of Manchester. Biology at the University of Manchester and its precursor institutions has gone through a number of reorganizations (see History below), the latest of which was the change from a Faculty of Life Sciences to the current School.
Academics
Research
The School, though unitary for teaching, is divided into a number of broadly defined sections for research purposes; these sections consist of: Cellular Systems, Disease Systems, Molecular Systems, Neuro Systems and Tissue Systems.
Research in the School is structured into multiple research groups including the following themes:
Cell-Matrix Research (part of the Wellcome Trust Centre for Cell-Matrix Research)
Cell Organisation and Dynamics
Computational and Evolutionary Biology
Developmental Biology
Environmental Research
Eye and Vision Sciences
Gene Regulation and Cellular Biotechnology
History of Science, Technology and Medicine
Immunology and Molecular Microbiology
Molecular Cancer Studies
Neurosciences (part of the University of Manchester Neurosciences Research Institute)
Physiological Systems & Disease
Structural and Functional Systems
The School hosts a number of research centres, including: the Manchester Centre for Biophysics and Catalysis, the Wellcome Trust Centre for Cell-Matrix Research, the Centre of Excellence in Biopharmaceuticals, the Centre for the History of Science, Technology and Medicine, the Centre for Integrative Mammalian Biology, and the Healing Foundation Centre for Tissue Regeneration. The Manchester Collaborative Centre for Inflammation Research is a joint endeavour with the Faculty of Medical and Human Sciences of Manchester University and industrial partners.
Research Assessment Exercise (2008)
The faculty entered research into the units of assessment (UOA) for Biological Sciences and Pre-clinical and Human Biological Sciences. In Biological Sciences 20% of outputs
Document 2:::
Biomolecular structure is the intricate folded, three-dimensional shape that is formed by a molecule of protein, DNA, or RNA, and that is important to its function. The structure of these molecules may be considered at any of several length scales ranging from the level of individual atoms to the relationships among entire protein subunits. This useful distinction among scales is often expressed as a decomposition of molecular structure into four levels: primary, secondary, tertiary, and quaternary. The scaffold for this multiscale organization of the molecule arises at the secondary level, where the fundamental structural elements are the molecule's various hydrogen bonds. This leads to several recognizable domains of protein structure and nucleic acid structure, including such secondary-structure features as alpha helixes and beta sheets for proteins, and hairpin loops, bulges, and internal loops for nucleic acids.
The terms primary, secondary, tertiary, and quaternary structure were introduced by Kaj Ulrik Linderstrøm-Lang in his 1951 Lane Medical Lectures at Stanford University.
Primary structure
The primary structure of a biopolymer is the exact specification of its atomic composition and the chemical bonds connecting those atoms (including stereochemistry). For a typical unbranched, un-crosslinked biopolymer (such as a molecule of a typical intracellular protein, or of DNA or RNA), the primary structure is equivalent to specifying the sequence of its monomeric subunits, such as amino acids or nucleotides.
The primary structure of a protein is reported starting from the amino N-terminus to the carboxyl C-terminus, while the primary structure of DNA or RNA molecule is known as the nucleic acid sequence reported from the 5' end to the 3' end.
The nucleic acid sequence refers to the exact sequence of nucleotides that comprise the whole molecule. Often, the primary structure encodes sequence motifs that are of functional importance. Some examples of such motif
Document 3:::
What Is Life? The Physical Aspect of the Living Cell is a 1944 science book written for the lay reader by physicist Erwin Schrödinger. The book was based on a course of public lectures delivered by Schrödinger in February 1943, under the auspices of the Dublin Institute for Advanced Studies, where he was Director of Theoretical Physics, at Trinity College, Dublin. The lectures attracted an audience of about 400, who were warned "that the subject-matter was a difficult one and that the lectures could not be termed popular, even though the physicist’s most dreaded weapon, mathematical deduction, would hardly be utilized." Schrödinger's lecture focused on one important question: "how can the events in space and time which take place within the spatial boundary of a living organism be accounted for by physics and chemistry?"
In the book, Schrödinger introduced the idea of an "aperiodic crystal" that contained genetic information in its configuration of covalent chemical bonds. In the 1950s, this idea stimulated enthusiasm for discovering the chemical basis of genetic inheritance. Although the existence of some form of hereditary information had been hypothesized since 1869, its role in reproduction and its helical shape were still unknown at the time of Schrödinger's lecture. In retrospect, Schrödinger's aperiodic crystal can be viewed as a well-reasoned theoretical prediction of what biologists should have been looking for during their search for genetic material. In 1953, James D. Watson and Francis Crick jointly proposed the double helix structure of deoxyribonucleic acid (DNA) on the basis of, amongst other theoretical insights, X-ray diffraction experiments conducted by Rosalind Franklin. They both credited Schrödinger's book with presenting an early theoretical description of how the storage of genetic information would work, and each independently acknowledged the book as a source of inspiration for their initial researches.
Background
The book, published i
Document 4:::
A DNA machine is a molecular machine constructed from DNA. Research into DNA machines was pioneered in the late 1980s by Nadrian Seeman and co-workers from New York University. DNA is used because of the numerous biological tools already found in nature that can affect DNA, and the immense knowledge of how DNA works previously researched by biochemists.
DNA machines can be logically designed since DNA assembly of the double helix is based on strict rules of base pairing that allow portions of the strand to be predictably connected based on their sequence. This "selective stickiness" is a key advantage in the construction of DNA machines.
An example of a DNA machine was reported by Bernard Yurke and co-workers at Lucent Technologies in the year 2000, who constructed molecular tweezers out of DNA.
The DNA tweezers contain three strands: A, B and C. Strand A latches onto half of strand B and half of strand C, and so it joins them all together. Strand A acts as a hinge so that the two "arms" — AB and AC — can move. The structure floats with its arms open wide. They can be pulled shut by adding a fourth strand of DNA (D) "programmed" to stick to both of the dangling, unpaired sections of strands B and C. The closing of the tweezers was proven by tagging strand A at either end with light-emitting molecules that do not emit light when they are close together. To re-open the tweezers add a further strand (E) with the right sequence to pair up with strand D. Once paired up, they have no connection to the machine BAC, so float away. The DNA machine can be opened and closed repeatedly by cycling between strands D and E. These tweezers can be used for removing drugs from inside fullerenes as well as from a self assembled DNA tetrahedron. The state of the device can be determined by measuring the separation between donor and acceptor fluorophores using FRET.
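The open/close cycle described above can be summarised as a simple two-state machine driven by the "fuel" strands D and E. The Python sketch below is illustrative only and is not from the source; the class name, method, and waste-tracking list are assumptions, and the actual hybridisation chemistry and FRET readout are not modelled.

# Minimal sketch (assumption, not from the source): modelling the DNA tweezers
# described above as a two-state machine. Strand names A-E follow the text.
class DNATweezers:
    def __init__(self):
        # Strands A, B, C are assembled into the tweezers; they start open.
        self.state = "open"
        self.waste = []  # DE duplexes released on each re-opening

    def add_strand(self, strand: str) -> str:
        if strand == "D" and self.state == "open":
            # Strand D hybridises with the unpaired ends of B and C,
            # pulling the two arms shut.
            self.state = "closed"
        elif strand == "E" and self.state == "closed":
            # Strand E strips D off as a DE duplex, which floats away,
            # letting the arms relax open again.
            self.state = "open"
            self.waste.append("DE")
        return self.state

if __name__ == "__main__":
    tweezers = DNATweezers()
    for strand in ["D", "E", "D", "E"]:  # repeated cycling, as in the text
        print(strand, "->", tweezers.add_strand(strand))
    print("waste duplexes:", tweezers.waste)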
DNA walkers are another type of DNA machine.
See also
DNA nanotechnology
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The DNA is wound around proteins called what?
A. leptons
B. pepsins
C. histones
D. nucleotides
Answer:
|
|
sciq-9791
|
multiple_choice
|
What is the name for a way of learning that involves reward or punishment?
|
[
"objective",
"conditioning",
"instinct",
"subjective"
] |
B
|
Relavent Documents:
Document 0:::
Learning is the process of acquiring new understanding, knowledge, behaviors, skills, values, attitudes, and preferences. The ability to learn is possessed by humans, animals, and some machines; there is also evidence for some kind of learning in certain plants. Some learning is immediate, induced by a single event (e.g. being burned by a hot stove), but much skill and knowledge accumulate from repeated experiences. The changes induced by learning often last a lifetime, and it is hard to distinguish learned material that seems to be "lost" from that which cannot be retrieved.
Human learning starts at birth (it may even begin before birth, given an embryo's need for both interaction with, and freedom within, its environment in the womb) and continues until death as a consequence of ongoing interactions between people and their environment. The nature and processes involved in learning are studied in many established fields (including educational psychology, neuropsychology, experimental psychology, cognitive sciences, and pedagogy), as well as emerging fields of knowledge (e.g. with a shared interest in the topic of learning from safety events such as incidents/accidents, or in collaborative learning health systems). Research in such fields has led to the identification of various sorts of learning. For example, learning may occur as a result of habituation, classical conditioning, or operant conditioning, or as a result of more complex activities such as play, seen only in relatively intelligent animals. Learning may occur consciously or without conscious awareness. Learning that an aversive event cannot be avoided or escaped may result in a condition called learned helplessness. There is evidence for human behavioral learning prenatally, in which habituation has been observed as early as 32 weeks into gestation, indicating that the central nervous system is sufficiently developed and primed for learning and memory to occur very early on in development.
Play h
Document 1:::
In operant conditioning, punishment is any change in a human or animal's surroundings which, occurring after a given behavior or response, reduces the likelihood of that behavior occurring again in the future. As with reinforcement, it is the behavior, not the human/animal, that is punished. Whether a change is or is not punishing is determined by its effect on the rate that the behavior occurs. This is called motivating operations (MO), because they alter the effectiveness of a stimulus. MO can be categorized into abolishing operations, which decrease the effectiveness of a stimulus, and establishing operations, which increase the effectiveness of a stimulus. For example, a painful stimulus which would act as a punisher for most people may actually reinforce some behaviors of masochistic individuals.
There are two types of punishment, positive and negative. Positive punishment involves the introduction of a stimulus to decrease behavior while negative punishment involves the removal of a stimulus to decrease behavior. While similar to reinforcement, punishment's goal is to decrease behaviors while reinforcement's goal is to increase behaviors. Different kinds of stimuli exist as well. There are rewarding stimuli which are considered pleasant and aversive stimuli, which are considered unpleasant. There are also two types of punishers. There are primary punishers which directly affect the individual such as pain and are a natural response and then there are secondary punishers which are things that are learned to be negative like a buzzing sound when getting an answer wrong on a game show.
Conflicting findings have been found on the effectiveness of the use of punishment. Some have found that punishment can be a useful tool in suppressing behavior while some have found it to have a weak effect on suppressing behavior. Punishment can also lead to lasting negative unintended side effects as well. Punishment has been found to be effective in countries that are wealthy, high in trust, cooper
Document 2:::
Behavior management, similar to behavior modification, is a less-intensive form of behavior therapy. Unlike behavior modification, which focuses on changing behavior, behavior management focuses on maintaining positive habits and behaviors and reducing negative ones. Behavior management skills are especially useful for teachers and educators, healthcare workers, and those working in supported living communities. This form of management aims to help professionals oversee and guide behavior management in individuals and groups toward fulfilling, productive, and socially acceptable behaviors. Behavior management can be accomplished through modeling, rewards, or punishment.
Research
Influential behavior management researchers B.F. Skinner and Carl Rogers both take different approaches to managing behavior.
Skinner claimed that anyone can manipulate behavior by identifying what a person finds rewarding. Once the rewards are known, they can be given in exchange for good behavior. Skinner called this "Positive Reinforcement Psychology."
Rogers proposed that the desire to behave appropriately must come before addressing behavioral problems. This is accomplished by teaching the individual about morality, including why one should do what is right. Rogers held that a person must have an internal awareness of right and wrong.
Many principles and techniques are the same as in behavior modification. However, they are considerably different and administered less often.
In the classroom
Behavior management is often applied by a classroom teacher as a form of behavioral engineering, in order to raise students' retention of material and produce higher yields of student work completion. This also helps to reduce classroom disruption and places more focus on building self-control and self-regulating a calm emotional state.
American education psychologist Brophy (1986) writes:
In general, behavior management strategies are effective at reducing classroom disruption. Recent
Document 3:::
The reward system (the mesocorticolimbic circuit) is a group of neural structures responsible for incentive salience (i.e., "wanting"; desire or craving for a reward and motivation), associative learning (primarily positive reinforcement and classical conditioning), and positively-valenced emotions, particularly ones involving pleasure as a core component (e.g., joy, euphoria and ecstasy). Reward is the attractive and motivational property of a stimulus that induces appetitive behavior, also known as approach behavior, and consummatory behavior. A rewarding stimulus has been described as "any stimulus, object, event, activity, or situation that has the potential to make us approach and consume it is by definition a reward". In operant conditioning, rewarding stimuli function as positive reinforcers; however, the converse statement also holds true: positive reinforcers are rewarding.
The reward system motivates animals to approach stimuli or engage in behaviour that increases fitness (sex, energy-dense foods, etc.). Survival for most animal species depends upon maximizing contact with beneficial stimuli and minimizing contact with harmful stimuli. Reward cognition serves to increase the likelihood of survival and reproduction by causing associative learning, eliciting approach and consummatory behavior, and triggering positively-valenced emotions. Thus, reward is a mechanism that evolved to help increase the adaptive fitness of animals. In drug addiction, certain substances over-activate the reward circuit, leading to compulsive substance-seeking behavior resulting from synaptic plasticity in the circuit.
Primary rewards are a class of rewarding stimuli which facilitate the survival of one's self and offspring, and they include homeostatic (e.g., palatable food) and reproductive (e.g., sexual contact and parental investment) rewards. Intrinsic rewards are unconditioned rewards that are attractive and motivate behavior because they are inherently pleasurable. Extrin
Document 4:::
Active student response (ASR) techniques are strategies to elicit observable responses from students in a classroom. They are grounded in the field of behaviorism and operate by increasing opportunities for reinforcement during class time, typically in the form of instructor praise. Active student response techniques are designed so that student behavior, such as responding aloud to a question, is quickly followed by reinforcement if correct. Common forms of active student response techniques are choral responding, response cards, guided notes, and clickers. While they are commonly used for disabled populations, these strategies can be applied at many different levels of education. Implementing active student response techniques has been shown to increase learning, but may require extra supplies or preparation by the instructor.
History
Active student response techniques are grounded in the field of behaviorism, a movement in psychology that believes behaviors are responses to stimuli and motivated by past reinforcement. The field has its origins in experiments of Edward Thorndike, who pioneered the Law of effect, which is now known as reinforcement and punishment. Thorndike explained that behaviors that produce a positive effect become more likely to reoccur, given the same scenario. Conversely, behaviors that produce a negative effect become less likely to reoccur.
Psychologist B.F. Skinner applied the principles of behaviorism to influence education. Skinner believed that students must be active in the classroom and that effective instruction is based on positive reinforcement. According to Skinner, teachers should avoid punishment, as it only teaches students to avoid punishment. Instead, lessons should be broken into small tasks with clear instruction and positive reinforcement. His beliefs led him to invent the teaching machine. Active student response techniques use Skinner's model to provide rapid reinforcement for desired responses. This increases the likel
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the name for a way of learning that involves reward or punishment?
A. objective
B. conditioning
C. instinct
D. subjective
Answer:
|
|
sciq-3025
|
multiple_choice
|
What is usually the prey of a protist?
|
[
"algae",
"pathogens",
"proteins",
"bacteria"
] |
D
|
Relavent Documents:
Document 0:::
A bacterivore is an organism which obtains energy and nutrients primarily or entirely from the consumption of bacteria. The term is most commonly used to describe free-living, heterotrophic, microscopic organisms such as nematodes as well as many species of amoeba and numerous other types of protozoans, but some macroscopic invertebrates are also bacterivores, including sponges, polychaetes, and certain molluscs and arthropods. Many bacterivorous organisms are adapted for generalist predation on any species of bacteria, but not all bacteria are easily digested; the spores of some species, such as Clostridium perfringens, will never be prey because of their cellular attributes.
In microbiology
Bacterivores can sometimes be a problem in microbiology studies. For instance, when scientists seek to assess microorganisms in samples from the environment (such as freshwater), the samples are often contaminated with microscopic bacterivores, which interfere with the growing of bacteria for study. Adding cycloheximide can inhibit the growth of bacterivores without affecting some bacterial species, but it has also been shown to inhibit the growth of some anaerobic prokaryotes.
Examples of bacterivores
Caenorhabditis elegans
Ceriodaphnia quadrangula
Diaphanosoma brachyura
Vorticella
Paramecium
Many species of protozoa
Many benthic meiofauna, e.g. gastrotrichs
Springtails
Many sponges, e.g. Aplysina aerophoba
Many crustaceans
Many polychaetes, e.g. feather duster worms
Some marine molluscs
See also
Microbivory
Document 1:::
A protist ( ) or protoctist is any eukaryotic organism that is not an animal, plant, or fungus. Protists do not form a natural group, or clade, but an artificial grouping of several independent clades that evolved from the last eukaryotic common ancestor.
Protists were historically regarded as a separate taxonomic kingdom known as Protista or Protoctista. With the advent of phylogenetic analysis and electron microscopy studies, the use of Protista as a formal taxon was gradually abandoned. In modern classifications, protists are spread across several eukaryotic clades called supergroups, such as Archaeplastida (which includes plants), SAR, Obazoa (which includes fungi and animals), Amoebozoa and Excavata.
Protists represent an extremely large genetic and ecological diversity in all environments, including extreme habitats. Their diversity, larger than for all other eukaryotes, has only been discovered in recent decades through the study of environmental DNA, and is still in the process of being fully described. They are present in all ecosystems as important components of the biogeochemical cycles and trophic webs. They exist abundantly and ubiquitously in a variety of forms that evolved multiple times independently, such as free-living algae, amoebae and slime moulds, or as important parasites. Together, they compose roughly twice the biomass of animals. They exhibit varied types of nutrition (such as phototrophy, phagotrophy or osmotrophy), sometimes combining them (in mixotrophy). They present unique adaptations not found in multicellular animals, fungi or land plants. The study of protists is termed protistology.
Definition
There is not a single accepted definition of what protists are. As a paraphyletic assemblage of diverse biological groups, they have historically been regarded as a catch-all taxon that includes any eukaryotic organism (i.e., living beings whose cells possess a nucleus) that is not an animal, a land plant or a dikaryon fung
Document 2:::
Anti-protist or antiprotistal refers to an anti-parasitic and anti-infective agent which is active against protists. Unfortunately due to the long ingrained usage of the term antiprotozoal, the two terms are confused, when in fact protists are a supercategory. Therefore, there are protists that are not protozoans. Beyond "animal-like" (heterotrophic, including parasitic) protozoans, protists also include the "plant-like" (autotrophic) protophyta and the "fungi-like" saprophytic molds. In current biology, the concept of a "protist" and its three subdivisions has been replaced.
See also
Amebicide
Document 3:::
Many protists have protective shells or tests, usually made from silica (glass) or calcium carbonate (chalk). Protists are a diverse group of eukaryote organisms that are not plants, animals, or fungi. They are typically microscopic unicellular organisms that live in water or moist environments.
Protist shells are often tough, mineralised forms that resist degradation, and can survive the death of the protist as a microfossil. Although protists are typically very small, they are ubiquitous. Their numbers are such that their shells play a huge part in the formation of ocean sediments and in the global cycling of elements and nutrients.
The role of protist shells depends on the type of protist. Protists such as diatoms and radiolaria have intricate, glass-like shells made of silica that are hard and protective, and serve as a barrier to prevent water loss. The shells have small pores that allow for gas exchange and nutrient uptake. Coccolithophores and foraminifera also have hard protective shells, but the shells are made of calcium carbonate. These shells can help with buoyancy, allowing the organisms to float in the water column and move around more easily.
In addition to protection and support, protist shells also serve scientists as a means of identification. By examining the characteristics of the shells, different species of protists can be identified and their ecology and evolution can be studied.
Protists
Cellular life likely originated as single-celled prokaryotes (including modern bacteria and archaea) and later evolved into more complex eukaryotes. Eukaryotes include organisms such as plants, animals, fungi and "protists". Protists are usually single-celled and microscopic. They can be heterotrophic, meaning they obtain nutrients by consuming other organisms, or autotrophic, meaning they produce their own food through photosynthesis or chemosynthesis, or mixotrophic, meaning they produce their own food through a mixture of those methods.
The term prot
Document 4:::
Marine protists are defined by their habitat as protists that live in marine environments, that is, in the saltwater of seas or oceans or the brackish water of coastal estuaries. Life originated as marine single-celled prokaryotes (bacteria and archaea) and later evolved into more complex eukaryotes. Eukaryotes are the more developed life forms known as plants, animals, fungi and protists. Protists are the eukaryotes that cannot be classified as plants, fungi or animals. They are mostly single-celled and microscopic. The term protist came into use historically as a term of convenience for eukaryotes that cannot be strictly classified as plants, animals or fungi. They are not a part of modern cladistics because they are paraphyletic (lacking a common ancestor for all descendants).
Most protists are too small to be seen with the naked eye. They are highly diverse organisms currently organised into 18 phyla, but not easy to classify. Studies have shown high protist diversity exists in oceans, deep sea-vents and river sediments, suggesting large numbers of eukaryotic microbial communities have yet to be discovered. There has been little research on mixotrophic protists, but recent studies in marine environments found mixotrophic protists contribute a significant part of the protist biomass. Since protists are eukaryotes (and not prokaryotes) they possess within their cell at least one nucleus, as well as organelles such as mitochondria and Golgi bodies. Many protist species can switch between asexual reproduction and sexual reproduction involving meiosis and fertilization.
In contrast to the cells of prokaryotes, the cells of eukaryotes are highly organised. Plants, animals and fungi are usually multi-celled and are typically macroscopic. Most protists are single-celled and microscopic. But there are exceptions. Some single-celled marine protists are macroscopic. Some marine slime molds have unique life cycles that involve switching between unicellular, colonial, and
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is usually the prey of a protist?
A. algae
B. pathogens
C. proteins
D. bacteria
Answer:
|
|
scienceQA-12040
|
multiple_choice
|
Select the invertebrate.
|
[
"robin",
"echidna",
"western rattlesnake",
"dung beetle"
] |
D
|
A western rattlesnake is a reptile. Like other reptiles, a western rattlesnake is a vertebrate. It has a backbone.
A dung beetle is an insect. Like other insects, a dung beetle is an invertebrate. It does not have a backbone. It has an exoskeleton.
An echidna is a mammal. Like other mammals, an echidna is a vertebrate. It has a backbone.
A robin is a bird. Like other birds, a robin is a vertebrate. It has a backbone.
|
Relavent Documents:
Document 0:::
Invertebrate zoology is the subdiscipline of zoology that consists of the study of invertebrates, animals without a backbone (a structure which is found only in fish, amphibians, reptiles, birds and mammals).
Invertebrates are a vast and very diverse group of animals that includes sponges, echinoderms, tunicates, numerous different phyla of worms, molluscs, arthropods and many additional phyla. Single-celled organisms or protists are usually not included within the same group as invertebrates.
Subdivisions
Invertebrates represent 97% of all named animal species, and because of that fact, this subdivision of zoology has many further subdivisions, including but not limited to:
Arthropodology - the study of arthropods, which includes
  Arachnology - the study of spiders and other arachnids
  Entomology - the study of insects
  Carcinology - the study of crustaceans
  Myriapodology - the study of centipedes, millipedes, and other myriapods
Cnidariology - the study of Cnidaria
Helminthology - the study of parasitic worms.
Malacology - the study of mollusks, which includes
  Conchology - the study of Mollusk shells.
  Limacology - the study of slugs.
  Teuthology - the study of cephalopods.
Invertebrate paleontology - the study of fossil invertebrates
These divisions are sometimes further divided into more specific specialties. For example, within arachnology, acarology is the study of mites and ticks; within entomology, lepidoptery is the study of butterflies and moths, myrmecology is the study of ants and so on. Marine invertebrates are all those invertebrates that exist in marine habitats.
History
Early Modern Era
In the early modern period starting in the late 16th century, invertebrate zoology saw growth in the number of publications made and improvement in the experimental practices associated with the field. (Insects are one of the most diverse groups of organisms on Earth. They play important roles in ecosystems, including pollination, natural enemies, saprophytes, and
Document 1:::
International Society for Invertebrate Morphology (ISIM) was founded during the 1st International Congress on Invertebrate Morphology, in Copenhagen, August 2008. The objectives of the society are to promote international collaboration and provide educational opportunities and training on invertebrate morphology, and to organize and promote the international congresses of invertebrate morphology, international meetings and other forms of scientific exchange.
The ISIM has its own Constitution
ISIM board 2014-2017
Gerhard Scholtz (President) Institute of Biology, Humboldt-Universität zu Berlin, Germany. https://www.biologie.hu-berlin.de/de/gruppenseiten/compzool/people/gerhard_scholtz_page
Natalia Biserova (President-Elect) Moscow State University, Moscow, Russia.
Gonzalo Giribet (Past-President) Museum of Comparative Zoology, Harvard University, Cambridge, MA, USA.
Julia Sigwart (Secretary)
Katrina Worsaae (Treasurer)
Greg Edgecombe (2nd term)
Andreas Hejnol (2nd term)
Sally Leys (2nd term)
Fernando Pardos (2nd term)
Katharina Jörger (1st term)
Marymegan Daly (1st term)
Georg Mayer (1st term)
ISIM board 2017-2020
Natalia Biserova (President), Lomonosov Moscow State University, Moscow, Russian Federation http://invert.bio.msu.ru/en/staff-en/33-biserova-en .
Andreas Wanninger (President-elect), Department of Integrative Zoology, University of Vienna, Vienna, Austria.
Gerhard Scholtz (Past-president), Department of Biology, Humboldt-Universität zu Berlin, Germany.
Julia Sigwart (Secretary), School of Biological Sciences, Queen's University Belfast, UK.
Katrine Worsaae (Treasurer), Department of Biology, University of Copenhagen, Copenhagen, Denmark.
Advisory Council:
Ariel Chipman (Israel)
D. Bruce Conn (USA)
Conrad Helm (Germany)
Xiaoya Ma (UK)
Pedro Martinez (Spain)
Ana Riesgo (Spain)
Nadezhda Rimskaya-Korsakova (Russia)
Elected 23-08-2017, Moscow
Former meetings
ICIM 1 (2008) University of Copenhagen, Denmark
ICIM 2 (2011) H
Document 2:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad
Document 3:::
This is a list of scientific journals which cover the field of zoology.
A
Acta Entomologica Musei Nationalis Pragae
Acta Zoologica Academiae Scientiarum Hungaricae
Acta Zoologica Bulgarica
Acta Zoológica Mexicana
Acta Zoologica: Morphology and Evolution
African Entomology
African Invertebrates
African Journal of Herpetology
African Zoology
Alces
American Journal of Primatology
Animal Biology, formerly Netherlands Journal of Zoology
Animal Cognition
Arctic
Australian Journal of Zoology
Australian Mammalogy
B
Bulgarian Journal of Agricultural Science
Bulletin of the American Museum of Natural History
C
Canadian Journal of Zoology
Caribbean Herpetology
Central European Journal of Biology
Contributions to Zoology
Copeia
Crustaceana
E
Environmental Biology of Fishes
F
Frontiers in Zoology
H
Herpetological Monographs
I
Integrative and Comparative Biology, formerly American Zoologist
International Journal of Acarology
International Journal of Primatology
J
M
Malacologia
N
North-Western Journal of Zoology
P
Physiological and Biochemical Zoology
R
Raffles Bulletin of Zoology
Rangifer
Russian Journal of Nematology
V
The Veliger
W
Worm Runner's Digest
Z
See also
List of biology journals
List of ornithology journals
List of entomology journals
Lists of academic journals
Zoology-related lists
Document 4:::
Animals are multicellular eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million in total. Animals range in size from 8.5 millionths of a metre to long and have complex interactions with each other and their environments, forming intricate food webs. The study of animals is called zoology.
Animals may be listed or indexed by many criteria, including taxonomy, status as endangered species, their geographical location, and their portrayal and/or naming in human culture.
By common name
List of animal names (male, female, young, and group)
By aspect
List of common household pests
List of animal sounds
List of animals by number of neurons
By domestication
List of domesticated animals
By eating behaviour
List of herbivorous animals
List of omnivores
List of carnivores
By endangered status
IUCN Red List endangered species (Animalia)
United States Fish and Wildlife Service list of endangered species
By extinction
List of extinct animals
List of extinct birds
List of extinct mammals
List of extinct cetaceans
List of extinct butterflies
By region
Lists of amphibians by region
Lists of birds by region
Lists of mammals by region
Lists of reptiles by region
By individual (real or fictional)
Real
Lists of snakes
List of individual cats
List of oldest cats
List of giant squids
List of individual elephants
List of historical horses
List of leading Thoroughbred racehorses
List of individual apes
List of individual bears
List of giant pandas
List of individual birds
List of individual bovines
List of individual cetaceans
List of individual dogs
List of oldest dogs
List of individual monkeys
List of individual pigs
List of w
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Select the invertebrate.
A. robin
B. echidna
C. western rattlesnake
D. dung beetle
Answer:
|
sciq-8213
|
multiple_choice
|
What controls what moves inside and outside the cell?
|
[
"the plasma membrane",
"golgi apparatus",
"mitochondria",
"nucleus"
] |
A
|
Relavent Documents:
Document 0:::
Cell biomechanics is a branch of biomechanics that involves single molecules, molecular interactions, or cells as the system of interest. Cells generate and maintain mechanical forces within their environment as a part of their physiology. Cell biomechanics deals with how mRNA, protein production, and gene expression are affected by said environment, and with the mechanical properties of isolated molecules or the interaction of proteins that make up molecular motors.
It is known that minor alterations in mechanical properties of cells can be an indicator of an infected cell. By studying these mechanical properties, greater insight will be gained in regards to disease. Thus, the goal of understanding cell biomechanics is to combine theoretical, experimental, and computational approaches to construct a realistic description of cell mechanical behaviors to provide new insights on the role of mechanics in disease.
History
In the late seventeenth century, the English polymath Robert Hooke and the Dutch scientist Antonie van Leeuwenhoek observed the ciliate Vorticella, with its extreme fluid and cellular motion, using simple optical microscopes. On Christmas day 1702, van Leeuwenhoek described his observations in a letter: “In structure these little animals were fashioned like a bell, and at the round opening they made such a stir, that the particles in the water thereabout were set in motion thereby…which sight I found mightily diverting”. Prior to this, Brownian motion of particles and organelles within living cells had been discovered as well as theories to measure viscosity. However, there were not enough accessible technical tools to perform these accurate experiments at the time. Thus, mechanical properties within cells were only supported qualitatively by observation.
With these new discoveries, the role of mechanical forces within biology was not always naturally accepted. In 1850, English physician William Benjamin Carpenter wrote “many of the actions taking place in the living bod
Document 1:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA is a double-stranded macromolecule that carries the hereditary information of the cell and is found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 2:::
This is a list of articles on biophysics.
0–9
5-HT3 receptor
A
ACCN1
ANO1
AP2 adaptor complex
Aaron Klug
Acid-sensing ion channel
Activating function
Active transport
Adolf Eugen Fick
Afterdepolarization
Aggregate modulus
Aharon Katzir
Alan Lloyd Hodgkin
Alexander Rich
Alexander van Oudenaarden
Allan McLeod Cormack
Alpha-3 beta-4 nicotinic receptor
Alpha-4 beta-2 nicotinic receptor
Alpha-7 nicotinic receptor
Alpha helix
Alwyn Jones (biophysicist)
Amoeboid movement
Andreas Mershin
Andrew Huxley
Animal locomotion
Animal locomotion on the water surface
Anita Goel
Antiporter
Aquaporin 2
Aquaporin 3
Aquaporin 4
Archibald Hill
Ariel Fernandez
Arthropod exoskeleton
Arthropod leg
Avery Gilbert
B
BEST2
BK channel
Bacterial outer membrane
Balance (ability)
Bat
Bat wing development
Bert Sakmann
Bestrophin 1
Biased random walk (biochemistry)
Bioelectrochemical reactor
Bioelectrochemistry
Biofilm
Biological material
Biological membrane
Biomechanics
Biomechanics of sprint running
Biophysical Society
Biophysics
Bird flight
Bird migration
Bisindolylmaleimide
Bleb (cell biology)
Boris Pavlovich Belousov
Brian Matthews (biochemist)
Britton Chance
Brush border
Bulk movement
Document 3:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than those of the later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA as well as some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
Document 4:::
Cell theory has its origins in seventeenth century microscopy observations, but it was nearly two hundred years before a complete cell membrane theory was developed to explain what separates cells from the outside world. By the 19th century it was accepted that some form of semi-permeable barrier must exist around a cell. Studies of the action of anesthetic molecules led to the theory that this barrier might be made of some sort of fat (lipid), but the structure was still unknown. A series of pioneering experiments in 1925 indicated that this barrier membrane consisted of two molecular layers of lipids, a lipid bilayer. New tools over the next few decades confirmed this theory, but controversy remained regarding the role of proteins in the cell membrane. Eventually the fluid mosaic model was proposed, in which proteins “float” in a fluid lipid bilayer "sea". Although simplistic and incomplete, this model is still widely referenced today.
Early barrier theories
Since the invention of the microscope in the seventeenth century it has been known that plant and animal tissue is composed of cells: the cell was discovered by Robert Hooke. The plant cell wall was easily visible even with these early microscopes but no similar barrier was visible on animal cells, though it stood to reason that one must exist. By the mid 19th century, this question was being actively investigated and Moritz Traube noted that this outer layer must be semipermeable to allow transport of ions. Traube had no direct evidence for the composition of this film, though, and incorrectly asserted that it was formed by an interfacial reaction of the cell protoplasm with the extracellular fluid.
The lipid nature of the cell membrane was first correctly intuited by Georg Hermann Quincke in 1888, who noted that a cell generally forms a spherical shape in water and, when broken in half, forms two smaller spheres. The only other known material to exhibit this behavior was oil. He al
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What controls what moves inside and outside the cell?
A. the plasma membrane
B. golgi apparatus
C. mitochondria
D. nucleus
Answer:
|
|
ai2_arc-519
|
multiple_choice
|
Which best describes how ice cores are important to the study of geologic history?
|
[
"They show unconformities, which signal changes in deposition.",
"They hold index fossils, which are used to date the different ice cores.",
"They contain evidence showing changes in the atmospheric composition over time.",
"They follow the Law of Superposition, which gives reasons for extinctions of species."
] |
C
|
Relavent Documents:
Document 0:::
The Glaciogenic Reservoir Analogue Studies Project (GRASP) is a research group studying the subglacial to proglacial record of Pleistocene glacial events. It is based in the Delft University of Technology.
Introduction to glaciogenic reservoirs
Glaciogenic reservoirs are sedimentary rocks that were deposited under the influence of an ice sheet and that now form part of a gas or oil reservoir. The glacial earth system is complex to study. Many past and ongoing scientific programs have worked on our cryosphere and generated much debate about its dynamics, sustainability, and behavior under climate change. Past glaciations, or ice ages, occurred several times along the geological time scale (see Timeline of glaciation). Being hundreds of millions of years old, these ancient glaciations are even harder to analyse and study: Earth at that time had a different atmospheric composition, the chemistry of the oceans was different, the evolution of life on Earth had a great impact on the dynamics of these ice sheets, the continents were in a particular configuration, and so on. Geologists have a broad idea of all these parameters, but glaciologists know that it is the combination of these settings that led to our current ice age.
A glacial system is able to produce a very large amount of sediment because of the tremendous erosive forces of ice at its base. These sediments are particularly coarse-grained (principally sandstones and conglomerates) and are produced in substantial volumes. Because of their good reservoir properties, ancient glacially related sediments have been targeted by the oil industry. They are currently heavily exploited in North Africa, the Arabian peninsula, and South Africa, and a few small fields are present in Asia, Australia and Northern Europe. The main ice ages concerned are the Late Ordovician glaciation (Hirnantian) and the Permo-Carboniferous glaciations.
Project objectives
Analogy is a standard method in geology, taking present-day observations and projecting/adapting them
Document 1:::
Glacio-geological databases compile data on glacially associated sedimentary deposits and erosional activity from former and current ice-sheets, usually from published peer-reviewed sources. Their purposes are generally directed towards two ends: (Mode 1) compiling information about glacial landforms, which often inform about former ice-flow directions; and (Mode 2) compiling information which dates the absence or presence of ice.
These databases are used for a variety of purposes: (i) as bibliographic tools for researchers; (ii) as the quantitative basis of mapping of landforms or dates of ice presence/absence; and (iii) as quantitative databases which are used to constrain physically based mathematical models of ice-sheets.
Antarctic Ice Sheet: The AGGDB is a Mode 2 glacio-geological database for the Antarctic ice-sheet using information from around 150 published sources, covering glacial activity mainly from the past 30,000 years. It is available online, and aims to be comprehensive to the end of 2007.
British Ice Sheet: BRITICE is a Mode 1 database which aims to map all glacial landforms of Great Britain.
Eurasian Ice Sheet: DATED-1 is a Mode 2 database for the Eurasian ice-sheet. Its sister-project DATED-2 uses the information in DATED-1 to map the retreat of the Eurasian ice-sheet since the Last Glacial Maximum.
See also
Glacial landforms
Sediment
Geology
Ice sheet
Exposure Age Dating
Radio-carbon dating
Document 2:::
A pollen core is a core sample of a medium containing a stratigraphic sequence of pollen. Analysis of the type and frequency of the pollen in each layer is used to study changes in climate or land use using regional vegetation as a proxy. This analysis is conceptually comparable to the study of ice cores.
Methods
Cores are obtained from deposits where pollen is likely to have been trapped. Cores are generally obtained from lacustrine sediments and peat bogs although soil sediments may also be obtained. Degradation of the pollen exine and bioturbation may reduce the quality of the pollen grains and stratigraphy of the core so researchers frequently select locations where the sediments are under anaerobic conditions.
The cores are then subjected to pollen analysis by palynologists who are able to infer the proportions of major plant types from the concentrations of different pollen types found in the cores.
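As a minimal illustration of that inference step, the short sketch below converts raw pollen counts from a single hypothetical core layer into proportions; the taxa and counts are invented for the example and do not come from the text.

```python
# Toy example: convert pollen counts for one core layer into proportions.
# The taxa and counts below are hypothetical illustration values.
layer_counts = {"Pinus": 120, "Quercus": 45, "Poaceae": 30, "Betula": 5}

total = sum(layer_counts.values())
proportions = {taxon: count / total for taxon, count in layer_counts.items()}

for taxon, share in sorted(proportions.items(), key=lambda kv: -kv[1]):
    print(f"{taxon:8s} {share:6.1%}")
# Palynologists build diagrams of such proportions layer by layer to track
# vegetation (and hence climate or land-use) change through time.
```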
Coring equipment
There are a number of tools used for coring, often with specialized uses:
Core samplers
Glew corer: A gravity corer used for lake surface sediments to capture the water-sediment interface. Similar to the Kajak-Brinkhurst sampler.
Brown corer:
Frozen finger: A tube is placed into the sediment and then filled with liquid nitrogen causing the sediment around the tube to freeze solid, preserving fine scale structure.
Livingstone piston corer: A long metal tube with a piston at the lower end. Once the core tube is at the desired depth the piston is released and the barrel can be pushed downwards into the sediment. Generally used for lake sediments.
Russian: A chamber corer used to sample peats.
Grab samplers
Ekman grab sampler:
Petersen grab:
Ponar grab:
Pollination
Palynology
Document 3:::
The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata. That is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events.
Correlating the rock record
At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods, for the particular geographic region or regions. The geologic record is in no one place entirely complete, for where geologic forces in one age provide a low-lying region that accumulates deposits much like a layer cake, in the next age they may have uplifted the region, and the same area is instead one that is weathering and being torn down by chemistry, wind, temperature, and water. This is to say that in a given location, the geologic record can be, and quite often is, interrupted as the ancient local environment is converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go deep, thoroughly support the law of superposition.
However using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale
Document 4:::
Blood Falls is an outflow of an iron oxide–tainted plume of saltwater, flowing from the tongue of Taylor Glacier onto the ice-covered surface of West Lake Bonney in the Taylor Valley of the McMurdo Dry Valleys in Victoria Land, East Antarctica.
Iron-rich hypersaline water sporadically emerges from small fissures in the ice cascades. The saltwater source is a subglacial pool of unknown size overlain by about of ice several kilometers from its tiny outlet at Blood Falls.
The reddish deposit was found in 1911 by the Australian geologist Thomas Griffith Taylor, who first explored the valley that bears his name. The Antarctica pioneers first attributed the red color to red algae, but later it was proven to be due to iron oxides.
Geochemistry
Poorly soluble hydrous ferric oxides are deposited at the surface of ice after the ferrous ions present in the unfrozen saltwater are oxidized in contact with atmospheric oxygen. The more soluble ferrous ions initially are dissolved in old seawater trapped in an ancient pocket remaining from the Antarctic Ocean when a fjord was isolated by the glacier in its progression during the Miocene period, some 5 million years ago, when the sea level was higher than today.
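A schematic way to write the oxidation step described in this paragraph (my own summary equation, not taken from the source, with the hydrous ferric oxide idealized as Fe(OH)3) is:

```latex
% Oxidation of dissolved ferrous iron by atmospheric oxygen, precipitating
% hydrous ferric oxide (idealized here as Fe(OH)3):
\[
  4\,\mathrm{Fe^{2+}} + \mathrm{O_2} + 10\,\mathrm{H_2O}
  \;\longrightarrow\; 4\,\mathrm{Fe(OH)_3}\downarrow + 8\,\mathrm{H^+}
\]
```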
Unlike most Antarctic glaciers, the Taylor Glacier is not frozen to the bedrock, probably because of the presence of salts concentrated by the crystallization of the ancient seawater imprisoned below it. Salt cryo-concentration occurred in the deep relict seawater when pure ice crystallized and expelled its dissolved salts as it cooled down because of the heat exchange of the captive liquid seawater with the enormous ice mass of the glacier. As a consequence, the trapped seawater was concentrated in brines with a salinity two to three times that of the mean ocean water. A second mechanism sometimes also explaining the formation of hypersaline brines is the water evaporation of surface lakes directly exposed to the very dry polar atmosphere in the McMurdo Dry Valleys. Th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which best describes how ice cores are important to the study of geologic history?
A. They show unconformities, which signal changes in deposition.
B. They hold index fossils, which are used to date the different ice cores.
C. They contain evidence showing changes in the atmospheric composition over time.
D. They follow the Law of Superposition, which gives reasons for extinctions of species.
Answer:
|
|
sciq-10925
|
multiple_choice
|
What does water treatment do to water?
|
[
"increases volume",
"restores bacteria",
"removes unwanted substances",
"adds flavor"
] |
C
|
Relavent Documents:
Document 0:::
Purified water is water that has been mechanically filtered or processed to remove impurities and make it suitable for use. Distilled water was, formerly, the most common form of purified water, but, in recent years, water is more frequently purified by other processes including capacitive deionization, reverse osmosis, carbon filtering, microfiltration, ultrafiltration, ultraviolet oxidation, or electrodeionization. Combinations of a number of these processes have come into use to produce ultrapure water of such high purity that its trace contaminants are measured in parts per billion (ppb) or parts per trillion (ppt).
Purified water has many uses, largely in the production of medications, in science and engineering laboratories and industries, and is produced in a range of purities. It is also used in the commercial beverage industry as the primary ingredient of any given trademarked bottling formula, in order to maintain product consistency. It can be produced on-site for immediate use or purchased in containers. Purified water in colloquial English can also refer to water that has been treated ("rendered potable") to neutralize, but not necessarily remove contaminants considered harmful to humans or animals.
Parameters of water purity
Purified water is usually produced by the purification of drinking water or ground water. The impurities that may need to be removed are:
inorganic ions (typically monitored as electrical conductivity or resistivity or specific tests)
organic compounds (typically monitored as TOC or by specific tests)
bacteria (monitored by total viable counts or epifluorescence)
endotoxins and nucleases (monitored by LAL or specific enzyme tests)
particulates (typically controlled by filtration)
gases (typically managed by degassing when required)
Purification methods
Distillation
Distilled water is produced by a process of distillation. Distillation involves boiling the water and then condensing the vapor into a clean container, leaving sol
Document 1:::
Wet Processing Engineering is one of the major streams in Textile Engineering or Textile manufacturing which refers to the engineering of textile chemical processes and associated applied science. The other three streams in textile engineering are yarn engineering, fabric engineering, and apparel engineering. The processes of this stream are involved or carried out in an aqueous stage. Hence, it is called a wet process which usually covers pre-treatment, dyeing, printing, and finishing.
The wet process is usually done in the manufactured assembly of interlacing fibers, filaments and yarns, having a substantial surface (planar) area in relation to its thickness, and adequate mechanical strength to give it a cohesive structure. In other words, the wet process is done on manufactured fiber, yarn and fabric.
All of these stages require an aqueous medium, which is provided by water. A massive amount of water is required in these processes every day. It is estimated that, on average, roughly 50–100 liters of water are used to process just 1 kilogram of textile goods, depending on the process engineering and application. Water can be of various qualities and attributes, and not all water can be used in textile processes; it must have certain properties, quality, and color to be usable. This is the reason why water is a prime concern in wet processing engineering.
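To give a feel for the scale implied by the 50–100 litres-per-kilogram figure above, the short sketch below estimates daily process-water demand; the mill throughput used (20 tonnes of goods per day) is a hypothetical value chosen only for illustration.

```python
# Back-of-the-envelope estimate based on the 50-100 L/kg figure quoted above.
# The throughput (20 tonnes of textile goods per day) is hypothetical.
WATER_PER_KG_RANGE = (50, 100)       # litres of water per kg of goods (from the text)
throughput_kg_per_day = 20_000       # hypothetical mill: 20 tonnes per day

low, high = (rate * throughput_kg_per_day for rate in WATER_PER_KG_RANGE)
print(f"Estimated daily water demand: {low / 1000:.0f}-{high / 1000:.0f} m^3")
# -> roughly 1000-2000 m^3 of process water per day for this hypothetical mill
```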
Water
Water consumption and discharge of wastewater are the two major concerns. The textile industry uses a large amount of water in its varied processes especially in wet operations such as pre-treatment, dyeing, and printing. Water is required as a solvent of various dyes and chemicals and it is used in washing or rinsing baths in different steps. Water consumption depends upon the application methods, processes, dyestuffs, equipment/machines and technology which may vary mill to mill and material composition. Longer processing sequences, processing of extra dark colors and reprocessing lead
Document 2:::
Ultrapure water (UPW), high-purity water or highly purified water (HPW) is water that has been purified to uncommonly stringent specifications. Ultrapure water is a term commonly used in manufacturing to emphasize the fact that the water is treated to the highest levels of purity for all contaminant types, including: organic and inorganic compounds; dissolved and particulate matter; volatile and non-volatile; reactive, and inert; hydrophilic and hydrophobic; and dissolved gases.
UPW and the commonly used term deionized (DI) water are not the same. In addition to the fact that UPW has organic particles and dissolved gases removed, a typical UPW system has three stages: a pretreatment stage to produce purified water, a primary stage to further purify the water, and a polishing stage, the most expensive part of the treatment process.
A number of organizations and groups develop and publish standards associated with the production of UPW. For microelectronics and power, they include Semiconductor Equipment and Materials International (SEMI) (microelectronics and photovoltaic), American Society for Testing and Materials International (ASTM International) (semiconductor, power), Electric Power Research Institute (EPRI) (power), American Society of Mechanical Engineers (ASME) (power), and International Association for the Properties of Water and Steam (IAPWS) (power). Pharmaceutical plants follow water quality standards as developed by pharmacopeias, of which three examples are the United States Pharmacopeia, European Pharmacopeia, and Japanese Pharmacopeia.
The most widely used requirements for UPW quality are documented by ASTM D5127 "Standard Guide for Ultra-Pure Water Used in the Electronics and Semiconductor Industries" and SEMI F63 "Guide for ultrapure water used in semiconductor processing".
Ultra pure water is also used as boiler feedwater in the UK AGR fleet.
Sources and control
Bacteria, particles, organic, and inorganic sources of contamination vary depend
Document 3:::
Groundwater remediation is the process that is used to treat polluted groundwater by removing the pollutants or converting them into harmless products. Groundwater is water present below the ground surface that saturates the pore space in the subsurface. Globally, between 25 per cent and 40 per cent of the world's drinking water is drawn from boreholes and dug wells. Groundwater is also used by farmers to irrigate crops and by industries to produce everyday goods. Most groundwater is clean, but groundwater can become polluted, or contaminated as a result of human activities or as a result of natural conditions.
The many and diverse activities of humans produce innumerable waste materials and by-products. Historically, the disposal of such waste has not been subject to many regulatory controls. Consequently, waste materials have often been disposed of or stored on land surfaces, where they percolate into the underlying groundwater. As a result, the contaminated groundwater is unsuitable for use.
Current practices can still impact groundwater, such as the over application of fertilizer or pesticides, spills from industrial operations, infiltration from urban runoff, and leaking from landfills. Using contaminated groundwater causes hazards to public health through poisoning or the spread of disease, and the practice of groundwater remediation has been developed to address these issues. Contaminants found in groundwater cover a broad range of physical, inorganic chemical, organic chemical, bacteriological, and radioactive parameters. Pollutants and contaminants can be removed from groundwater by applying various techniques, thereby bringing the water to a standard that is commensurate with various intended uses.
Techniques
Ground water remediation techniques span biological, chemical, and physical treatment technologies. Most ground water treatment techniques utilize a combination of technologies. Some of the biological treatment techniques include bioaugmentation,
Document 4:::
Membrane bioreactors are combinations of some membrane processes like microfiltration or ultrafiltration with a biological wastewater treatment process, the activated sludge process. These technologies are now widely used for municipal and industrial wastewater treatment. The two basic membrane bioreactor configurations are the submerged membrane bioreactor and the side stream membrane bioreactor. In the submerged configuration, the membrane is located inside the biological reactor and submerged in the wastewater, while in a side stream membrane bioreactor, the membrane is located outside the reactor as an additional step after biological treatment.
Overview
Water scarcity has prompted efforts to reuse waste water once it has been properly treated, known as "water reclamation" (also called wastewater reuse, water reuse, or water recycling). Among the treatment technologies available to reclaim wastewater, membrane processes stand out for their capacity to retain solids and salts and even to disinfect water, producing water suitable for reuse in irrigation and other applications.
A semipermeable membrane is a material that allows the selective flow of certain substances.
In the case of water purification or regeneration, the aim is to allow the water to flow through the membrane whilst retaining undesirable particles on the originating side. By varying the type of membrane, it is possible to get better pollutant retention of different kinds. Some of the required characteristics in a membrane for wastewater treatment are chemical and mechanical resistance for five years of operation and capacity to operate stably over a wide pH range.
There are two main types of membrane materials available on the market: organic-based polymeric membranes and ceramic membranes. Polymeric membranes are the most commonly used materials in water and wastewater treatment. In particular, polyvinylidene difluoride (PVDF) is the most prevalent material due to its long lifetime and chemica
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What does water treatment do to water?
A. increases volume
B. restores bacteria
C. removes unwanted substances
D. adds flavor
Answer:
|
|
sciq-5928
|
multiple_choice
|
In what form is most of the earth's freshwater?
|
[
"gas",
"steam",
"liqued",
"frozen"
] |
D
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
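A minimal sketch of the reasoning behind this example, assuming the standard textbook idealization of a reversible, quasi-static process with a constant heat-capacity ratio gamma, is:

```latex
% Reversible adiabatic process of an ideal gas with constant gamma:
\[
  pV^{\gamma} = \text{const}, \qquad pV = nRT
  \quad\Longrightarrow\quad TV^{\gamma-1} = \text{const}.
\]
% Because gamma > 1, an increase in V forces T to fall,
% so the temperature decreases during adiabatic expansion (answer: "decreases").
```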
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 3:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 4:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered the exam three times per year: once in April, once in October, and once in November. Some graduate programs in the United States recommended taking this exam, while others required the score as part of the application. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions on the biochemistry subject test.
Scores were scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores were 760 (corresponding to the 99th percentile) and 320 (1st percentile) respectively. The mean score for all test takers from July 2009 to July 2012 was 526, with a standard deviation of 95.
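As a quick sanity check on these statistics, the sketch below treats the scaled scores as roughly normally distributed; the mean (526) and standard deviation (95) come from the text, while the normality assumption is only an illustrative approximation.

```python
# Rough percentile check assuming (for illustration only) that scaled scores
# are approximately normally distributed with the reported mean and SD.
from statistics import NormalDist

score_dist = NormalDist(mu=526, sigma=95)

for scaled_score in (320, 526, 760):
    percentile = score_dist.cdf(scaled_score) * 100
    print(f"scaled score {scaled_score}: percentile ~ {percentile:.1f}")

# Under this crude model, 760 falls near the 99th percentile and 320 near the
# 1st-2nd percentile, broadly consistent with the reported extremes.
```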
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In what form is most of the earth's freshwater?
A. gas
B. steam
C. liquid
D. frozen
Answer:
|
|
sciq-2372
|
multiple_choice
|
What is made up of bands of cells that contract for movement?
|
[
"vascular tissue",
"cartilage",
"muscle tissue",
"collagen"
] |
C
|
Relavent Documents:
Document 0:::
Vertebrates
Tendon cells, or tenocytes, are elongated fibroblast type cells. The cytoplasm is stretched between the collagen fibres of the tendon. They have a central cell nucleus with a prominent nucleolus. Tendon cells have a well-developed rough endoplasmic reticulum and they are responsible for synthesis and turnover of tendon fibres and ground substance.
Invertebrates
Tendon cells form a connecting epithelial layer between the muscle and shell in molluscs. In gastropods, for example, the retractor muscles connect to the shell via tendon cells. Muscle cells are attached to the collagenous myo-tendon space via hemidesmosomes. The myo-tendon space is then attached to the base of the tendon cells via basal hemidesmosomes, while apical hemidesmosomes, which sit atop microvilli, attach the tendon cells to a thin layer of collagen. This is in turn attached to the shell via organic fibres which insert into the shell. Molluscan tendon cells appear columnar and contain a large basal cell nucleus. The cytoplasm is filled with granular endoplasmic reticulum and sparse golgi. Dense bundles of microfilaments run the length of the cell connecting the basal to the apical hemidesmosomes.
See also
List of human cell types derived from the germ layers
List of distinct cell types in the adult human body
Document 1:::
Stroma () is the part of a tissue or organ with a structural or connective role. It is made up of all the parts without specific functions of the organ - for example, connective tissue, blood vessels, ducts, etc. The other part, the parenchyma, consists of the cells that perform the function of the tissue or organ.
There are multiple ways of classifying tissues: one classification scheme is based on tissue functions and another analyzes their cellular components. Stromal tissue falls into the "functional" class that contributes to the body's support and movement. The cells which make up stroma tissues serve as a matrix in which the other cells are embedded. Stroma is made of various types of stromal cells.
Examples of stroma include:
stroma of iris
stroma of cornea
stroma of ovary
stroma of thyroid gland
stroma of thymus
stroma of bone marrow
lymph node stromal cell
multipotent stromal cell (mesenchymal stem cell)
Structure
Stromal connective tissues are found in the stroma; this tissue belongs to the group connective tissue proper. The function of connective tissue proper is to secure the parenchymal tissue, including blood vessels and nerves of the stroma, and to construct organs and spread mechanical tension to reduce localised stress. Stromal tissue is primarily made of extracellular matrix containing connective tissue cells. Extracellular matrix is primarily composed of ground substance - a porous, hydrated gel, made mainly from proteoglycan aggregates - and connective tissue fibers. There are three types of fibers commonly found within the stroma: collagen type I, elastic, and reticular (collagen type III) fibres.
Cells
Wandering cells - cells that migrate into the tissue from blood stream in response to a variety of stimuli; for example, immune system blood cells causing inflammatory response.
Fixed cells - cells that are permanent inhabitants of the tissue.
Fibroblast - produce and secrete the organic parts of the ground substance and extrace
Document 2:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
Document 3:::
Outline
h1.00: Cytology
h2.00: General histology
H2.00.01.0.00001: Stem cells
H2.00.02.0.00001: Epithelial tissue
H2.00.02.0.01001: Epithelial cell
H2.00.02.0.02001: Surface epithelium
H2.00.02.0.03001: Glandular epithelium
H2.00.03.0.00001: Connective and supportive tissues
H2.00.03.0.01001: Connective tissue cells
H2.00.03.0.02001: Extracellular matrix
H2.00.03.0.03001: Fibres of connective tissues
H2.00.03.1.00001: Connective tissue proper
H2.00.03.1.01001: Ligaments
H2.00.03.2.00001: Mucoid connective tissue; Gelatinous connective tissue
H2.00.03.3.00001: Reticular tissue
H2.00.03.4.00001: Adipose tissue
H2.00.03.5.00001: Cartilage tissue
H2.00.03.6.00001: Chondroid tissue
H2.00.03.7.00001: Bone tissue; Osseous tissue
H2.00.04.0.00001: Haemotolymphoid complex
H2.00.04.1.00001: Blood cells
H2.00.04.1.01001: Erythrocyte; Red blood cell
H2.00.04.1.02001: Leucocyte; White blood cell
H2.00.04.1.03001: Platelet; Thrombocyte
H2.00.04.2.00001: Plasma
H2.00.04.3.00001: Blood cell production
H2.00.04.4.00001: Postnatal sites of haematopoiesis
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
Document 4:::
The territorial matrix is the tissue surrounding chondrocytes (cells which produce cartilage) in cartilage. Chondrocytes are inactive cartilage cells, so they don't make cartilage components. The territorial matrix is basophilic (attracts basic compounds and dyes due to its anionic/acidic nature), because there is a higher concentration of proteoglycans, so it will color darker when it's colored and viewed under a microscope. In other words, it stains metachromatically (dyes change color upon binding) due to the presence of proteoglycans (compound molecules composed of proteins and sugars).
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is made up of bands of cells that contract for movement?
A. vascular tissue
B. cartilage
C. muscle tissue
D. collagen
Answer:
|
|
sciq-10474
|
multiple_choice
|
The primary output of the basal nuclei is to the thalamus, which relays that output to where?
|
[
"cerebral cortex",
"suffering cortex",
"effect cortex",
"Back cortex"
] |
A
|
Relavent Documents:
Document 0:::
In the human brain, the nucleus basalis, also known as the nucleus basalis of Meynert or nucleus basalis magnocellularis, is a group of neurons located mainly in the substantia innominata of the basal forebrain. Most neurons of the nucleus basalis are rich in the neurotransmitter acetylcholine, and they have widespread projections to the neocortex and other brain structures.
Structure
The nucleus basalis in humans is a somewhat diffuse collection of large cholinergic neurons in the basal forebrain. The main body of the nucleus basalis lies inferior to the anterior commissure and the globus pallidus, and lateral to the anterior hypothalamus in an area known as the substantia innominata. Rostrally, the nucleus basalis is continuous with the cholinergic neurons of the nucleus of the diagonal band of Broca. The nucleus basalis is thought to consist of several subdivisions based on the location of the cells and their projections to other brain regions. Occasional neurons belonging to the nucleus basalis can be found in nearby locations such as the internal laminae of the globus pallidus and the genu of the internal capsule.
Function
The widespread connections of the nucleus basalis with other parts of the brain indicate that it is likely to have an important modulatory influence on brain function. Studies of the firing patterns of nucleus basalis neurons in nonhuman primates indicate that the cells are associated with arousing stimuli, both positive (appetitive) and negative (aversive). There is also evidence that the nucleus basalis promotes sustained attention, and learning and recall in long term memory
Cholinergic neurons of the nucleus basalis have been hypothesized to modulate the ratio of reality and virtual reality components of visual perception. Experimental evidence has shown that normal visual perception has two components. The first (A) is a bottom-up component in which the input to the higher visual cortex (where conscious perception takes place) comes
Document 1:::
The following diagram is provided as an overview of and topical guide to the human nervous system:
Human nervous system – the part of the human body that coordinates a person's voluntary and involuntary actions and transmits signals between different parts of the body. The human nervous system consists of two main parts: the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS contains the brain and spinal cord. The PNS consists mainly of nerves, which are long fibers that connect the CNS to every other part of the body. The PNS includes motor neurons, mediating voluntary movement; the autonomic nervous system, comprising the sympathetic nervous system and the parasympathetic nervous system and regulating involuntary functions; and the enteric nervous system, a semi-independent part of the nervous system whose function is to control the gastrointestinal system.
Evolution of the human nervous system
Evolution of nervous systems
Evolution of human intelligence
Evolution of the human brain
Paleoneurology
Some branches of science that study the human nervous system
Neuroscience
Neurology
Paleoneurology
Central nervous system
The central nervous system (CNS) is the largest part of the nervous system and includes the brain and spinal cord.
Spinal cord
Brain
Brain – center of the nervous system.
Outline of the human brain
List of regions of the human brain
Principal regions of the vertebrate brain:
Peripheral nervous system
Peripheral nervous system (PNS) – nervous system structures that do not lie within the CNS.
Sensory system
A sensory system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory receptors, neural pathways, and parts of the brain involved in sensory perception.
List of sensory systems
Sensory neuron
Perception
Visual system
Auditory system
Somatosensory system
Vestibular system
Olfactory system
Taste
Pain
Components of the nervous system
Neuron
I
Document 2:::
The human brain anatomical regions are ordered following standard neuroanatomy hierarchies. Functional, connective, and developmental regions are listed in parentheses where appropriate.
Hindbrain (rhombencephalon)
Myelencephalon
Medulla oblongata
Medullary pyramids
Arcuate nucleus
Olivary body
Inferior olivary nucleus
Rostral ventrolateral medulla
Caudal ventrolateral medulla
Solitary nucleus (Nucleus of the solitary tract)
Respiratory center-Respiratory groups
Dorsal respiratory group
Ventral respiratory group or Apneustic centre
Pre-Bötzinger complex
Botzinger complex
Retrotrapezoid nucleus
Nucleus retrofacialis
Nucleus retroambiguus
Nucleus para-ambiguus
Paramedian reticular nucleus
Gigantocellular reticular nucleus
Parafacial zone
Cuneate nucleus
Gracile nucleus
Perihypoglossal nuclei
Intercalated nucleus
Prepositus nucleus
Sublingual nucleus
Area postrema
Medullary cranial nerve nuclei
Inferior salivatory nucleus
Nucleus ambiguus
Dorsal nucleus of vagus nerve
Hypoglossal nucleus
Chemoreceptor trigger zone
Metencephalon
Pons
Pontine nuclei
Pontine cranial nerve nuclei
Chief or pontine nucleus of the trigeminal nerve sensory nucleus (V)
Motor nucleus for the trigeminal nerve (V)
Abducens nucleus (VI)
Facial nerve nucleus (VII)
Vestibulocochlear nuclei (vestibular nuclei and cochlear nuclei) (VIII)
Superior salivatory nucleus
Pontine tegmentum
Pontine micturition center (Barrington's nucleus)
Locus coeruleus
Pedunculopontine nucleus
Laterodorsal tegmental nucleus
Tegmental pontine reticular nucleus
Nucleus incertus
Parabrachial area
Medial parabrachial nucleus
Lateral parabrachial nucleus
Subparabrachial nucleus (Kölliker-Fuse nucleus)
Pontine respiratory group
Superior olivary complex
Medial superior olive
Lateral superior olive
Medial nucleus of the trapezoid body
Paramedian pontine reticular formation
Parvocellular reticular nucleus
Caudal pontine reticular nucleus
Cerebellar peduncles
Superior cerebellar peduncle
Middle cerebellar peduncle
Inferior
Document 3:::
The ovarian cortex is the outer portion of the ovary. The ovarian follicles are located within the ovarian cortex. The ovarian cortex is made up of connective tissue. Ovarian cortex tissue transplant has been performed to treat infertility.
Document 4:::
In anatomy and zoology, the cortex (: cortices) is the outermost (or superficial) layer of an organ. Organs with well-defined cortical layers include kidneys, adrenal glands, ovaries, the thymus, and portions of the brain, including the cerebral cortex, the best-known of all cortices.
Etymology
The word is of Latin origin and means bark, rind, shell or husk.
Notable examples
The renal cortex, between the renal capsule and the renal medulla; assists in ultrafiltration
The adrenal cortex, situated along the perimeter of the adrenal gland; mediates the stress response through the production of various hormones
The thymic cortex, mainly composed of lymphocytes; functions as a site for somatic recombination of T cell receptors, and positive selection
The cerebral cortex, the outer layer of the cerebrum, plays a key role in memory, attention, perceptual awareness, thought, language, and consciousness.
Cortical bone is the hard outer layer of bone; distinct from the spongy, inner cancellous bone tissue
Ovarian cortex is the outer layer of the ovary and contains the follicles.
The lymph node cortex is the outer layer of the lymph node.
Cerebral cortex
The cerebral cortex is typically described as comprising three parts: the sensory, motor, and association areas. These sensory areas receive and process information from the senses. The senses of vision, audition, and touch are served by the primary visual cortex, the primary auditory cortex, and primary somatosensory cortex. The cerebellar cortex is the thin gray surface layer of the cerebellum, consisting of an outer molecular layer or stratum moleculare, a single layer of Purkinje cells (the ganglionic layer), and an inner granular layer or stratum granulosum. The cortex is the outer surface of the cerebrum and is composed of gray matter.
The motor areas are located in both hemispheres of the cerebral cortex. Two areas of the cortex are commonly referred to as motor: the primary motor cortex, which executes v
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The primary output of the basal nuclei is to the thalamus, which relays that output to where?
A. cerebral cortex
B. suffering cortex
C. effect cortex
D. Back cortex
Answer:
|
|
sciq-5036
|
multiple_choice
|
The sun’s heat can also be trapped in your home by using south facing windows and good what?
|
[
"insulation",
"vegetation",
"floors",
"curtains"
] |
A
|
Relavent Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 1:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities such as Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 2:::
Shading coefficient (SC) is a measure of thermal performance of a glass unit (panel or window) in a building.
It is the ratio of solar gain (due to direct sunlight) passing through a glass unit to the solar energy which passes through 3mm Clear Float Glass. It is an indicator of how well the glass is thermally insulating (shading) the interior when there is direct sunlight on the panel or window.
The shading coefficient depends on the color of the glass and its degree of reflectivity; for reflective glass it also depends on the type of reflective metal oxide coating. Sputter-coated reflective and/or low-emissivity glasses tend to have a lower SC than the equivalent pyrolytically coated reflective and/or low-emissivity glass.
The value ranges from 1.00 to 0.00, but in practice the SC typically falls between 0.98 and 0.10. The lower the rating, the less solar heat is transmitted through the glass, and the greater its shading ability.
Solar properties play a significant role in the selection of glass, especially in regions or cardinal directions with high solar exposure. It becomes less significant in situations where direct sunlight is not a major factor (e.g., windows completely shaded by overhangs).
Window design methods have moved away from Shading Coefficient to Solar Heat Gain Coefficient (SHGC), which is defined as the fraction of incident solar radiation that actually enters a building through the entire window assembly as heat gain (not just the glass portion). Though shading coefficient is still mentioned in manufacturer product literature and some industry computer software, it is no longer mentioned as an option in the handbook widely used by building energy engineers or model building codes. Industry technical experts recognized the limitations of SC and pushed towards SHGC before the early 1990s.
A conversion from SC to SHGC is not necessarily straightforward, as they each take into account different heat transfer mecha
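As a small illustration of how these quantities are used, the sketch below computes SC as the ratio defined above and then applies a factor of roughly 0.87, a common industry rule of thumb, to estimate SHGC; the solar-gain inputs are made-up example numbers, and the 0.87 factor is only an approximation, not an exact conversion (as noted above, the relationship is not straightforward).

```python
# Sketch of the SC ratio and a rough SHGC estimate. Input gains are
# hypothetical example values; 0.87 is only an approximate rule of thumb.
def shading_coefficient(gain_through_glazing: float, gain_through_3mm_clear: float) -> float:
    """SC = solar gain through the glazing / solar gain through 3 mm clear float glass."""
    return gain_through_glazing / gain_through_3mm_clear

sc = shading_coefficient(gain_through_glazing=380.0,      # W/m^2, hypothetical
                         gain_through_3mm_clear=630.0)    # W/m^2, hypothetical
approx_shgc = 0.87 * sc                                   # rough estimate only

print(f"SC   ~ {sc:.2f}")
print(f"SHGC ~ {approx_shgc:.2f} (rule-of-thumb estimate)")
```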
Document 3:::
Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers.
There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM.
Terminology
History
Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu
Document 4:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The sun’s heat can also be trapped in your home by using south facing windows and good what?
A. insulation
B. vegetation
C. floors
D. curtains
Answer:
|
|
sciq-5397
|
multiple_choice
|
What do you call large, y-shaped proteins that recognize and bind to antigens?
|
[
"membranes",
"proteins",
"parasites",
"antibodies"
] |
D
|
Relevant Documents:
Document 0:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, a double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries chromosome(s) with a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 1:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
Document 2:::
Membrane proteins are common proteins that are part of, or interact with, biological membranes. Membrane proteins fall into several broad categories depending on their location. Integral membrane proteins are a permanent part of a cell membrane and can either penetrate the membrane (transmembrane) or associate with one or the other side of a membrane (integral monotopic). Peripheral membrane proteins are transiently associated with the cell membrane.
Membrane proteins are common, and medically important—about a third of all human proteins are membrane proteins, and these are targets for more than half of all drugs. Nonetheless, compared to other classes of proteins, determining membrane protein structures remains a challenge in large part due to the difficulty in establishing experimental conditions that can preserve the correct conformation of the protein in isolation from its native environment.
Function
Membrane proteins perform a variety of functions vital to the survival of organisms:
Membrane receptor proteins relay signals between the cell's internal and external environments.
Transport proteins move molecules and ions across the membrane. They can be categorized according to the Transporter Classification database.
Membrane enzymes may have many activities, such as oxidoreductase, transferase or hydrolase.
Cell adhesion molecules allow cells to identify each other and interact. For example, proteins involved in immune response
The localization of proteins in membranes can be predicted reliably using hydrophobicity analyses of protein sequences, i.e. the localization of hydrophobic amino acid sequences.
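As a sketch of what such a hydrophobicity analysis can look like, the snippet below computes a simple sliding-window average over a Kyte–Doolittle-style hydropathy scale. Only a subset of residue values is included, the window size and example sequence are arbitrary illustrative choices, and real predictors are considerably more sophisticated.

```python
# Minimal sketch of a sliding-window hydropathy analysis (Kyte-Doolittle style).
# Subset of residue values only; window size and sequence are illustrative assumptions.

KD_SUBSET = {  # approximate Kyte-Doolittle hydropathy values for a few residues
    "I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "M": 1.9, "A": 1.8,
    "G": -0.4, "T": -0.7, "S": -0.8, "K": -3.9, "D": -3.5, "E": -3.5, "R": -4.5,
}

def hydropathy_profile(seq, window=9):
    """Average hydropathy over a sliding window; high values hint at hydrophobic, possibly membrane-spanning stretches."""
    values = [KD_SUBSET[res] for res in seq]
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]

# Hypothetical sequence: charged flanks around a hydrophobic core.
example = "MKRDES" + "LLIVFALLVAGILFA" + "STDEKR"
profile = hydropathy_profile(example)
print(max(profile))  # the hydrophobic core dominates the peak of the profile
```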
Integral membrane proteins
Integral membrane proteins are permanently attached to the membrane. Such proteins can be separated from the biological membranes only using detergents, nonpolar solvents, or sometimes denaturing agents. They can be classified according to their relationship with the bilayer:
Integral polytopic proteins are transmembran
Document 3:::
Cell surface receptors (membrane receptors, transmembrane receptors) are receptors that are embedded in the plasma membrane of cells. They act in cell signaling by receiving (binding to) extracellular molecules. They are specialized integral membrane proteins that allow communication between the cell and the extracellular space. The extracellular molecules may be hormones, neurotransmitters, cytokines, growth factors, cell adhesion molecules, or nutrients; they react with the receptor to induce changes in the metabolism and activity of a cell. In the process of signal transduction, ligand binding affects a cascading chemical change through the cell membrane.
Structure and mechanism
Many membrane receptors are transmembrane proteins. There are various kinds, including glycoproteins and lipoproteins. Hundreds of different receptors are known and many more have yet to be studied. Transmembrane receptors are typically classified based on their tertiary (three-dimensional) structure. If the three-dimensional structure is unknown, they can be classified based on membrane topology. In the simplest receptors, polypeptide chains cross the lipid bilayer once, while others, such as the G-protein coupled receptors, cross as many as seven times. Each cell membrane can have several kinds of membrane receptors, with varying surface distributions. A single receptor may also be differently distributed at different membrane positions, depending on the sort of membrane and cellular function. Receptors are often clustered on the membrane surface, rather than evenly distributed.
Mechanism
Two models have been proposed to explain transmembrane receptors' mechanism of action.
Dimerization: The dimerization model suggests that prior to ligand binding, receptors exist in a monomeric form. When agonist binding occurs, the monomers combine to form an active dimer.
Rotation: Ligand binding to the extracellular part of the receptor induces a rotation (conformational change) of part of th
Document 4:::
In immunology, a linear epitope (also sequential epitope) is an epitope—a binding site on an antigen—that is recognized by antibodies by its linear sequence of amino acids (i.e. primary structure). In contrast, most antibodies recognize a conformational epitope that has a specific three-dimensional shape (tertiary structure).
An antigen is any substance that the immune system can recognize as being foreign and which provokes an immune response. Since antigens are usually proteins that are too large to bind as a whole to any receptor, only specific segments that form the antigen bind with a specific antibody. Such segments are called epitopes. Likewise, it is only the paratope of the antibody that comes in contact with the epitope.
Proteins are composed of repeating nitrogen-containing subunits called amino acids. The linear sequence of amino acids that compose a protein is called its primary structure, which in the folded protein typically does not present as a simple line of sequential amino acids (much like a knot rather than a straight string). But when an antigen is broken down in a lysosome, it yields small peptides, which can be recognized through the amino acids that lie contiguously in a line, and hence are called linear epitopes.
Significance
While performing molecular assays involving use of antibodies such as in the Western blot, immunohistochemistry, and ELISA, one should carefully choose antibodies that recognize linear or conformational epitopes.
For instance, if a protein sample is boiled, treated with beta-mercaptoethanol, and run in SDS-PAGE for the Western blot, the proteins are essentially denatured and therefore cannot assume their natural three-dimensional conformations. Therefore, antibodies that recognize linear epitopes instead of conformational epitopes are chosen for immunodetection. In contrast, in immunohistochemistry where protein structure is preserved, antibodies that recognize conformational epitopes are preferred.
See also
Conformational epitope
Pol
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do you call large, y-shaped proteins that recognize and bind to antigens?
A. membranes
B. proteins
C. parasites
D. antibodies
Answer:
|
|
sciq-3013
|
multiple_choice
|
What common code do all known living organisms use?
|
[
"biochemical",
"Morse code",
"genetic",
"code of ethics"
] |
C
|
Relevant Documents:
Document 0:::
Character encoding is the process of assigning numbers to graphical characters, especially the written characters of human language, allowing them to be stored, transmitted, and transformed using digital computers. The numerical values that make up a character encoding are known as "code points" and collectively comprise a "code space", a "code page", or a "character map".
Early character codes associated with the optical or electrical telegraph could only represent a subset of the characters used in written languages, sometimes restricted to upper case letters, numerals and some punctuation only. The low cost of digital representation of data in modern computer systems allows more elaborate character codes (such as Unicode) which represent most of the characters used in many written languages. Character encoding using internationally accepted standards permits worldwide interchange of text in electronic form.
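As a small illustration of the relationship between characters, code points, and encoded bytes (using Python's built-in Unicode support; nothing here is specific to any standard beyond Unicode/UTF-8):

```python
# Characters map to code points; an encoding turns code points into bytes.
for ch in "Aé€":
    code_point = ord(ch)             # the character's Unicode code point
    utf8_bytes = ch.encode("utf-8")  # one possible byte representation
    print(f"{ch!r} -> U+{code_point:04X} -> {utf8_bytes.hex()}")

# Output, for reference:
#   'A' -> U+0041 -> 41
#   'é' -> U+00E9 -> c3a9
#   '€' -> U+20AC -> e282ac
```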
History
The history of character codes illustrates the evolving need for machine-mediated character-based symbolic information over a distance, using once-novel electrical means. The earliest codes were based upon manual and hand-written encoding and cyphering systems, such as Bacon's cipher, Braille, international maritime signal flags, and the 4-digit encoding of Chinese characters for a Chinese telegraph code (Hans Schjellerup, 1869). With the adoption of electrical and electro-mechanical techniques these earliest codes were adapted to the new capabilities and limitations of the early machines. The earliest well-known electrically transmitted character code, Morse code, introduced in the 1840s, used a system of four "symbols" (short signal, long signal, short space, long space) to generate codes of variable length. Though some commercial use of Morse code was via machinery, it was often used as a manual code, generated by hand on a telegraph key and decipherable by ear, and persists in amateur radio and aeronautical use. Most codes are of fixed per-character length.
Document 1:::
This is a list of algebraic coding theory topics.
Algebraic coding theory
Document 2:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover )
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, )
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson )
See also
Glossary of biology
A.P Bio (TV Show)
Document 3:::
In the social sciences, coding is an analytical process in which data, in both quantitative form (such as questionnaires results) or qualitative form (such as interview transcripts) are categorized to facilitate analysis.
One purpose of coding is to transform the data into a form suitable for computer-aided analysis. This categorization of information is an important step, for example, in preparing data for computer processing with statistical software. Prior to coding, an annotation scheme is defined. It consists of codes or tags. During coding, coders manually add codes into data where required features are identified. The coding scheme ensures that the codes are added consistently across the data set and allows for verification of previously tagged data.
Some studies will employ multiple coders working independently on the same data. This also minimizes the chance of errors from coding and is believed to increase the reliability of data.
Directive
One code should apply to only one category and categories should be comprehensive. There should be clear guidelines for coders (individuals who do the coding) so that code is consistent.
Quantitative approach
For quantitative analysis, data are usually coded so that they can be measured and recorded as nominal or ordinal variables.
Questionnaire data can be pre-coded (process of assigning codes to expected answers on designed questionnaire), field-coded (process of assigning codes as soon as data is available, usually during fieldwork), post-coded (coding of open questions on completed questionnaires) or office-coded (done after fieldwork). Note that some of the above are not mutually exclusive.
In social sciences, spreadsheets such as Excel and more advanced software packages such as R, Matlab, PSPP/SPSS, DAP/SAS, MiniTab and Stata are often used.
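As a toy illustration of pre-coding questionnaire responses into an ordinal variable (the item, labels, and numeric codes are hypothetical, not taken from any particular study):

```python
# Toy example: coding Likert-scale answers (qualitative labels) as ordinal integers.
CODEBOOK = {  # hypothetical annotation scheme agreed on before coding begins
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

responses = ["agree", "neutral", "strongly agree", "disagree"]
coded = [CODEBOOK[answer.lower()] for answer in responses]
print(coded)  # [4, 3, 5, 2] -- now suitable for analysis in statistical software
```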
Qualitative approach
For disciplines in which a qualitative format is preferential, including ethnography, humanistic geography or phenomenological psychology a varied approach to co
Document 4:::
In communications and information processing, code is a system of rules to convert information—such as a letter, word, sound, image, or gesture—into another form, sometimes shortened or secret, for communication through a communication channel or storage in a storage medium. An early example is an invention of language, which enabled a person, through speech, to communicate what they thought, saw, heard, or felt to others. But speech limits the range of communication to the distance a voice can carry and limits the audience to those present when the speech is uttered. The invention of writing, which converted spoken language into visual symbols, extended the range of communication across space and time.
The process of encoding converts information from a source into symbols for communication or storage. Decoding is the reverse process, converting code symbols back into a form that the recipient understands, such as English or Spanish.
One reason for coding is to enable communication in places where ordinary plain language, spoken or written, is difficult or impossible. For example, semaphore, where the configuration of flags held by a signaler or the arms of a semaphore tower encodes parts of the message, typically individual letters, and numbers. Another person standing a great distance away can interpret the flags and reproduce the words sent.
Theory
In information theory and computer science, a code is usually considered as an algorithm that uniquely represents symbols from some source alphabet, by encoded strings, which may be in some other target alphabet. An extension of the code for representing sequences of symbols over the source alphabet is obtained by concatenating the encoded strings.
Before giving a mathematically precise definition, this is a brief example. The mapping
is a code, whose source alphabet is the set and whose target alphabet is the set . Using the extension of the code, the encoded string 0011001 can be grouped into codewords a
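The specific mapping and alphabets in the example above did not survive extraction, so the sketch below uses its own small, assumed code (source alphabet {a, b, c}, target alphabet {0, 1}) purely to illustrate how the extension of a code encodes a sequence of source symbols by concatenating codewords:

```python
# Illustrative variable-length code; the mapping is an assumption for demonstration only.
CODE = {"a": "0", "b": "10", "c": "110"}  # source alphabet {a, b, c}, target alphabet {0, 1}

def encode(message: str) -> str:
    """Extension of the code: concatenate the codewords of the source symbols."""
    return "".join(CODE[symbol] for symbol in message)

def decode(bits: str) -> str:
    """Greedy decoding works here because the chosen code is prefix-free."""
    inverse = {codeword: symbol for symbol, codeword in CODE.items()}
    out, buffer = [], ""
    for bit in bits:
        buffer += bit
        if buffer in inverse:
            out.append(inverse[buffer])
            buffer = ""
    return "".join(out)

encoded = encode("acab")                 # '0' + '110' + '0' + '10' = '0110010'
print(encoded, decode(encoded) == "acab")
```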
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What common code do all known living organisms use?
A. biochemical
B. Morse code
C. genetic
D. code of ethics
Answer:
|
|
scienceQA-12678
|
multiple_choice
|
Which organ is a muscular tube that moves food from the mouth to the stomach?
|
[
"small intestine",
"large intestine",
"heart",
"esophagus"
] |
D
|
Relevant Documents:
Document 0:::
The esophagus (American English) or oesophagus (British English, see spelling differences; both ; : (o)esophagi or (o)esophaguses), colloquially known also as the food pipe or gullet, is an organ in vertebrates through which food passes, aided by peristaltic contractions, from the pharynx to the stomach. The esophagus is a fibromuscular tube, about long in adults, that travels behind the trachea and heart, passes through the diaphragm, and empties into the uppermost region of the stomach. During swallowing, the epiglottis tilts backwards to prevent food from going down the larynx and lungs. The word oesophagus is from Ancient Greek οἰσοφάγος (oisophágos), from οἴσω (oísō), future form of φέρω (phérō, “I carry”) + ἔφαγον (éphagon, “I ate”).
The wall of the esophagus from the lumen outwards consists of mucosa, submucosa (connective tissue), layers of muscle fibers between layers of fibrous tissue, and an outer layer of connective tissue. The mucosa is a stratified squamous epithelium of around three layers of squamous cells, which contrasts to the single layer of columnar cells of the stomach. The transition between these two types of epithelium is visible as a zig-zag line. Most of the muscle is smooth muscle although striated muscle predominates in its upper third. It has two muscular rings or sphincters in its wall, one at the top and one at the bottom. The lower sphincter helps to prevent reflux of acidic stomach content. The esophagus has a rich blood supply and venous drainage. Its smooth muscle is innervated by involuntary nerves (sympathetic nerves via the sympathetic trunk and parasympathetic nerves via the vagus nerve) and in addition voluntary nerves (lower motor neurons) which are carried in the vagus nerve to innervate its striated muscle.
The esophagus passes through the thoracic cavity and the diaphragm into the stomach.
Document 1:::
The muscular layer (muscular coat, muscular fibers, muscularis propria, muscularis externa) is a region of muscle in many organs in the vertebrate body, adjacent to the submucosa. It is responsible for gut movement such as peristalsis. The Latin, tunica muscularis, may also be used.
Structure
It usually has two layers of smooth muscle:
inner and "circular"
outer and "longitudinal"
However, there are some exceptions to this pattern.
In the stomach there are three layers to the muscular layer: the stomach contains an additional oblique muscle layer just interior to the circular muscle layer.
In the upper esophagus, part of the externa is skeletal muscle, rather than smooth muscle.
In the vas deferens of the spermatic cord, there are three layers: inner longitudinal, middle circular, and outer longitudinal.
In the ureter the smooth muscle orientation is opposite that of the GI tract. There is an inner longitudinal and an outer circular layer.
The inner layer of the muscularis externa forms a sphincter at two locations of the gastrointestinal tract:
in the pylorus of the stomach, it forms the pyloric sphincter.
in the anal canal, it forms the internal anal sphincter.
In the colon, the fibres of the external longitudinal smooth muscle layer are collected into three longitudinal bands, the teniae coli.
The thickest muscularis layer is found in the stomach (triple layered), and thus maximum peristalsis occurs in the stomach. The thinnest muscularis layer in the alimentary canal is found in the rectum, where minimum peristalsis occurs.
Function
The muscularis layer is responsible for the peristaltic movements and segmental contractions in the alimentary canal. Auerbach's nerve plexus (the myenteric nerve plexus), found between the longitudinal and circular muscle layers, starts the muscle contractions that initiate peristalsis.
Document 2:::
The Joan Mott Prize Lecture is a prize lecture awarded annually by The Physiological Society in honour of Joan Mott.
Laureates
Laureates of the award have included:
- Intestinal absorption of sugars and peptides: from textbook to surprises
See also
Physiological Society Annual Review Prize Lecture
Document 3:::
The esophagus passes through the thoracic cavity and the diaphragm into the stomach.
The esophagus may be affected by gastric reflux, cancer, prominent dilated blood vessels called varices that can bleed heavily, t
Document 4:::
The gastrointestinal wall of the gastrointestinal tract is made up of four layers of specialised tissue. From the inner cavity of the gut (the lumen) outwards, these are:
Mucosa
Submucosa
Muscular layer
Serosa or adventitia
The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the lumen of the tract and comes into direct contact with digested food (chyme). The mucosa itself is made up of three layers: the epithelium, where most digestive, absorptive and secretory processes occur; the lamina propria, a layer of connective tissue, and the muscularis mucosae, a thin layer of smooth muscle.
The submucosa contains nerves including the submucous plexus (also called Meissner's plexus), blood vessels and elastic fibres with collagen, that stretches with increased capacity but maintains the shape of the intestine.
The muscular layer surrounds the submucosa. It comprises layers of smooth muscle in longitudinal and circular orientation that also helps with continued bowel movements (peristalsis) and the movement of digested material out of and along the gut. In between the two layers of muscle lies the myenteric plexus (also called Auerbach's plexus).
The serosa/adventitia are the final layers. These are made up of loose connective tissue and coated in mucus so as to prevent any friction damage from the intestine rubbing against other tissue. The serosa is present if the tissue is within the peritoneum, and the adventitia if the tissue is retroperitoneal.
Structure
When viewed under the microscope, the gastrointestinal wall has a consistent general form, but with certain parts differing along its course.
Mucosa
The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the cavity (lumen) of the tract and comes into direct contact with digested food (chyme). The mucosa is made up of three layers:
The epithelium is the innermost layer. It is where most digestive, absorptive and secretory processes occur.
The lamina propr
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which organ is a muscular tube that moves food from the mouth to the stomach?
A. small intestine
B. large intestine
C. heart
D. esophagus
Answer:
|
|
sciq-9737
|
multiple_choice
|
Which type of plankton make food via photosynthesis?
|
[
"common plankton",
"dinoflagellates",
"zooplankton",
"phytoplankton"
] |
D
|
Relevant Documents:
Document 0:::
Zooplankton are the animal component of the planktonic community (the "zoo-" prefix comes from the Greek word for "animal"). Plankton are aquatic organisms that are unable to swim effectively against currents. Consequently, they drift or are carried along by currents in the ocean, or by currents in seas, lakes or rivers.
Zooplankton can be contrasted with phytoplankton, which are the plant component of the plankton community (the "phyto-" prefix comes from the Greek word for "plant"). Zooplankton are heterotrophic (other-feeding), whereas phytoplankton are autotrophic (self-feeding). In other words, zooplankton cannot manufacture their own food. Rather, they must eat plants or other animals instead. In particular, they eat phytoplankton, which are generally smaller than zooplankton. Most zooplankton are microscopic but some (such as jellyfish) are macroscopic, meaning they can be seen with the naked eye.
Many protozoans (single-celled protists that prey on other microscopic life) are zooplankton, including zooflagellates, foraminiferans, radiolarians, some dinoflagellates and marine microanimals. Macroscopic zooplankton include pelagic cnidarians, ctenophores, molluscs, arthropods and tunicates, as well as planktonic arrow worms and bristle worms.
The distinction between plants and animals often breaks down in very small organisms. Recent studies of marine microplankton have indicated over half of microscopic plankton are mixotrophs. A mixotroph is an organism that can behave sometimes as though it were a plant and sometimes as though it were an animal, using a mix of autotrophy and heterotrophy. Many marine microzooplankton are mixotrophic, which means they could also be classified as phytoplankton.
Overview
Zooplankton are heterotrophic (sometimes detritivorous) plankton. The word zooplankton combines the Greek roots for "animal" and "drifter" (plankton).
Zooplankton is a categorization spanning a range of organism sizes including small protozoans and large metazoans. It includes holoplanktonic organisms whose complete life cycle lies within t
Document 1:::
Picoplankton is the fraction of plankton composed of cells between 0.2 and 2 μm, which can be either prokaryotic or eukaryotic phototrophs and heterotrophs:
photosynthetic
heterotrophic
They are prevalent amongst microbial plankton communities of both freshwater and marine ecosystems. They have an important role in making up a significant portion of the total biomass of phytoplankton communities.
Classification
In general, plankton can be categorized on the basis of physiological, taxonomic, or dimensional characteristics. Subsequently, a generic classification of a plankton includes:
Bacterioplankton
Phytoplankton
Zooplankton
However, there is a simpler scheme that categorizes plankton based on a logarithmic size scale:
Macroplankton (200–2000 μm)
Microplankton (20–200 μm)
Nanoplankton (2–20 μm)
This was even further expanded to include picoplankton (0.2–2 μm) and femtoplankton (0.02–0.2 μm), as well as net plankton, ultraplankton. Now that picoplankton have been characterized, they have their own further subdivisions such as prokaryotic and eukaryotic phototrophs and heterotrophs that are spread throughout the world in various types of lakes and trophic states.
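A minimal sketch of the logarithmic size classification described above (the class boundaries follow the list in the text; the function and its handling of out-of-range sizes are illustrative choices):

```python
# Classify a plankton cell by its largest dimension (in micrometres),
# following the logarithmic size classes listed above.
SIZE_CLASSES = [               # (upper bound in micrometres, class name)
    (0.2, "femtoplankton"),    # 0.02-0.2 um
    (2.0, "picoplankton"),     # 0.2-2 um
    (20.0, "nanoplankton"),    # 2-20 um
    (200.0, "microplankton"),  # 20-200 um
    (2000.0, "macroplankton"), # 200-2000 um
]

def size_class(cell_size_um: float) -> str:
    for upper_bound, name in SIZE_CLASSES:
        if cell_size_um <= upper_bound:
            return name
    return "larger than 2000 um (outside the classes listed above)"

print(size_class(1.1))   # picoplankton (e.g. Ostreococcus tauri, 0.8-1.1 um)
print(size_class(50.0))  # microplankton
```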
In order to differentiate between autotrophic picoplankton and heterotrophic picoplankton, the autotrophs could have photosynthetic pigments and the ability to show autofluorescence, which would allow for their enumeration under epifluorescence microscopy. This is how minute eukaryotes first became known.
Overall, picoplankton play an essential role in oligotrophic dimictic lakes because they are able to produce and then recycle dissolved organic matter (DOM) very efficiently under circumstances in which competition from other phytoplankters is disturbed by factors such as limiting nutrients and predators. Picoplankton are responsible for most primary productivity in oligotrophic gyres, and are distinguished from nanoplankton and microplankton. Because they are small, t
Document 2:::
List of eukaryotic species that belong to picoplankton, meaning one of their cell dimensions is smaller than 3 μm.
Autotrophic species
Chlorophyta
Chlorophyceae
Stichococcus cylindricus Butcher, 3 – 4.5 μm, brackish
Pedinophyceae
Marsupiomonas pelliculata Jones et al., 3 – 3 μm, brackish-marine
Resultor micron Moestrup, 1.5 – 2.5 μm, marine
Prasinophyceae
Bathycoccus prasinos Eikrem et Throndsen, 1.5 – 2.5 μm, marine
Crustomastix stigmatica Zingone, 3 – 5 μm, marine
Dolichomastix lepidota Manton, 2.5 – 2.5 μm, marine
Dolichomastix eurylepidea Manton, 3 μm, marine
Dolichomastix tenuilepis Throndsen et Zingone, 3 – 4.5 μm, marine
Mantoniella squamata Desikachary, 3 – 5 μm, marine
Micromonas pusilla Manton et Parke, 1 – 3 μm, marine
Ostreococcus tauri Courties et Chrétiennot-Dinet, 0.8 – 1.1 μm, marine
Picocystis salinarum Lewin, 2 – 3 μm, hypersaline
Prasinococcus capsulatus Miyashita et Chihara, 3 – 5.5 μm, marine
Prasinoderma coloniale Hasegawa et Chihara, 2.5 – 5.5 μm, marine
Pseudoscourfieldia marina Manton, 3 – 3.5 μm, marine
Pycnococcus provasolii Guillard, 1.5 – 4 μm, marine
Pyramimonas virginica Pennick, 2.7 – 3.5 μm, marine
Document 3:::
Cyanobacteria (), also called Cyanobacteriota or Cyanophyta, are a phylum of gram-negative bacteria that obtain energy via photosynthesis. The name cyanobacteria refers to their color (), which similarly forms the basis of cyanobacteria's common name, blue-green algae, although they are not usually scientifically classified as algae. They appear to have originated in a freshwater or terrestrial environment. Sericytochromatia, the proposed name of the paraphyletic and most basal group, is the ancestor of both the non-photosynthetic group Melainabacteria and the photosynthetic cyanobacteria, also called Oxyphotobacteria.
Cyanobacteria use photosynthetic pigments, such as carotenoids, phycobilins, and various forms of chlorophyll, which absorb energy from light. Unlike heterotrophic prokaryotes, cyanobacteria have internal membranes. These are flattened sacs called thylakoids where photosynthesis is performed. Phototrophic eukaryotes such as green plants perform photosynthesis in plastids that are thought to have their ancestry in cyanobacteria, acquired long ago via a process called endosymbiosis. These endosymbiotic cyanobacteria in eukaryotes then evolved and differentiated into specialized organelles such as chloroplasts, chromoplasts, etioplasts, and leucoplasts, collectively known as plastids.
Cyanobacteria are the first organisms known to have produced oxygen. By producing and releasing oxygen as a byproduct of photosynthesis, cyanobacteria are thought to have converted the early oxygen-poor, reducing atmosphere into an oxidizing one, causing the Great Oxidation Event and the "rusting of the Earth", which dramatically changed the composition of life forms on Earth.
The cyanobacteria Synechocystis and Cyanothece are important model organisms with potential applications in biotechnology for bioethanol production, food colorings, as a source of human and animal food, dietary supplements and raw materials. Cyanobacteria produce a range of toxins known as cyanotox
Document 4:::
Photosynthetic picoplankton or picophytoplankton is the fraction of the phytoplankton performing photosynthesis composed of cells between 0.2 and 2 µm in size (picoplankton). It is especially important in the central oligotrophic regions of the world oceans that have very low concentration of nutrients.
History
1952: Description of the first truly picoplanktonic species, Chromulina pusilla, by Butcher. This species was renamed in 1960 to Micromonas pusilla and a few studies have found it to be abundant in temperate oceanic waters, although very little such quantification data exists for eukaryotic picophytoplankton.
1979: Discovery of marine Synechococcus by Waterbury and confirmation with electron microscopy by Johnson and Sieburth.
1982: The same Johnson and Sieburth demonstrate the importance of small eukaryotes by electron microscopy.
1983: W.K.W Li and colleagues, including Trevor Platt show that a large fraction of marine primary production is due to organisms smaller than 2 µm.
1986: Discovery of "prochlorophytes" by Chisholm and Olson in the Sargasso Sea, named in 1992 as Prochlorococcus marinus.
1994: Discovery in the Thau lagoon in France of the smallest photosynthetic eukaryote known to date, Ostreococcus tauri, by Courties.
2001: Through sequencing of the ribosomal RNA gene extracted from marine samples, several European teams discover that eukaryotic picoplankton are highly diverse. This finding followed on the first discovery of such eukaryotic diversity in 1998 by Rappe and colleagues at Oregon State University, who were the first to apply rRNA sequencing to eukaryotic plankton in the open-ocean, where they discovered sequences that seemed distant from known phytoplankton The cells containing DNA matching one of these novel sequences were recently visualized and further analyzed using specific probes and found to be broadly distributed.
Methods of study
Because of its very small size, picoplankton is difficult to study by classic methods
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which type of plankton make food via photosynthesis?
A. common plankton
B. dinoflagellates
C. zooplankton
D. phytoplankton
Answer:
|
|
sciq-9099
|
multiple_choice
|
What is a small, spherical compartment separated by at least one lipid layer from the cytosol?
|
[
"capillary",
"vesicle",
"cortex",
"cuticle"
] |
B
|
Relevant Documents:
Document 0:::
Cellular compartments in cell biology comprise all of the closed parts within the cytosol of a eukaryotic cell, usually surrounded by a single or double lipid layer membrane. These compartments are often, but not always, defined as membrane-bound organelles. The formation of cellular compartments is called compartmentalization.
Both organelles, the mitochondria and chloroplasts (in photosynthetic organisms), are compartments that are believed to be of endosymbiotic origin. Other compartments such as peroxisomes, lysosomes, the endoplasmic reticulum, the cell nucleus or the Golgi apparatus are not of endosymbiotic origin. Smaller elements like vesicles, and sometimes even microtubules can also be counted as compartments.
It was long thought that compartmentalization is not found in prokaryotic cells, but the discovery of carboxysomes and many other metabolosomes revealed that prokaryotic cells are capable of making compartmentalized structures, albeit ones that are in most cases not surrounded by a lipid bilayer but built purely of protein.
Types
In general, there are four main cellular compartments:
The nuclear compartment comprising the nucleus
The intercisternal space which comprises the space between the membranes of the endoplasmic reticulum (which is continuous with the nuclear envelope)
Organelles (the mitochondrion in all eukaryotes and the plastid in phototrophic eukaryotes)
The cytosol
Function
Compartments have three main roles. One is to establish physical boundaries for biological processes, which enables the cell to carry out different metabolic activities at the same time. This may include keeping certain biomolecules within a region, or keeping other molecules outside. Within the membrane-bound compartments, a different intracellular pH, different enzyme systems, and other differences are isolated from the other organelles and the cytosol. In the case of mitochondria, the cytosol has an oxidizing environment which converts NADH to NAD+. In these cases, the
Document 1:::
In cellular biology, inclusions are diverse intracellular non-living substances (ergastic substances) that are not bound by membranes. Inclusions are stored nutrients/deutoplasmic substances, secretory products, and pigment granules. Examples of inclusions are glycogen granules in the liver and muscle cells, lipid droplets in fat cells, pigment granules in certain cells of skin and hair, and crystals of various types. Cytoplasmic inclusions are an example of a biomolecular condensate arising by liquid-solid, liquid-gel or liquid-liquid phase separation.
These structures were first observed by O. F. Müller in 1786.
Examples
Glycogen: Glycogen is the most common form of glucose in animals and is especially abundant in cells of muscles, and liver. It appears in electron micrograph as clusters, or a rosette of beta particles that resemble ribosomes, located near the smooth endoplasmic reticulum. Glycogen is an important energy source of the cell; therefore, it will be available on demand. The enzymes responsible for glycogenolysis degrade glycogen into individual molecules of glucose and can be utilized by multiple organs of the body.
Lipids: Lipids stored as triglycerides are a common form of inclusion; they are not only stored in specialized cells (adipocytes) but are also found as individual droplets in various cell types, especially hepatocytes. These are fluid at body temperature and appear in living cells as refractile spherical droplets. Lipid yields more than twice as many calories per gram as does carbohydrate. On demand, they serve as a local store of energy and a potential source of short carbon chains that are used by the cell in its synthesis of membranes and other lipid-containing structural components or secretory products.
Crystals: Crystalline inclusions have long been recognized as normal constituents of certain cell types such as Sertoli cells and Leydig cells of the human testis, and occasionally in macrophages. It is believed that th
Document 2:::
Compartmentalized ciliogenesis is the most common type of ciliogenesis where the cilium axoneme is formed separated from the cytoplasm by the ciliary membrane and a ciliary gate known as the transition zone.
Document 3:::
The cilium (: cilia; ), is a membrane-bound organelle found on most types of eukaryotic cell. Cilia are absent in bacteria and archaea. The cilium has the shape of a slender threadlike projection that extends from the surface of the much larger cell body. Eukaryotic flagella found on sperm cells and many protozoans have a similar structure to motile cilia that enables swimming through liquids; they are longer than cilia and have a different undulating motion.
There are two major classes of cilia: motile and non-motile cilia, each with a subtype, giving four types in all. A cell will typically have one primary cilium or many motile cilia. The structure of the cilium core called the axoneme determines the cilium class. Most motile cilia have a central pair of single microtubules surrounded by nine pairs of double microtubules called a 9+2 axoneme. Most non-motile cilia have a 9+0 axoneme that lacks the central pair of microtubules. Also lacking are the associated components that enable motility including the outer and inner dynein arms, and radial spokes. Some motile cilia lack the central pair, and some non-motile cilia have the central pair, hence the four types.
Most non-motile cilia are termed primary cilia or sensory cilia and serve solely as sensory organelles. Most vertebrate cell types possess a single non-motile primary cilium, which functions as a cellular antenna. Olfactory neurons possess a great many non-motile cilia. Non-motile cilia that have a central pair of microtubules are the kinocilia present on hair cells.
Motile cilia are found in large numbers on respiratory epithelial cells – around 200 cilia per cell, where they function in mucociliary clearance, and also have mechanosensory and chemosensory functions. Motile cilia on ependymal cells move the cerebrospinal fluid through the ventricular system of the brain. Motile cilia are also present in the oviducts (fallopian tubes) of female (therian) mammals where they function in moving the egg cell
Document 4:::
Membranelles (also membranellae) are structures found around the mouth, or cytostome, in ciliates. They are typically arranged in series, to form an "adoral zone of membranelles", or AZM, on the left side of the buccal cavity (peristome). The membranelles are made up of kinetosomes arranged in groups to make up polykinetids. The cilia which emerge from these structures appear to be fused and to function as a single membrane, which can be used to sweep particles of food into the cytostome, or for locomotion.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a small, spherical compartment separated by at least one lipid layer from the cytosol?
A. capillary
B. vesicle
C. cortex
D. cuticle
Answer:
|
|
sciq-1615
|
multiple_choice
|
Glutathione is a low-molecular-weight compound found in living cells that is produced naturally by what?
|
[
"blood",
"liver",
"brain",
"amino acids"
] |
B
|
Relevant Documents:
Document 0:::
The metabolome refers to the complete set of small-molecule chemicals found within a biological sample. The biological sample can be a cell, a cellular organelle, an organ, a tissue, a tissue extract, a biofluid or an entire organism. The small molecule chemicals found in a given metabolome may include both endogenous metabolites that are naturally produced by an organism (such as amino acids, organic acids, nucleic acids, fatty acids, amines, sugars, vitamins, co-factors, pigments, antibiotics, etc.) as well as exogenous chemicals (such as drugs, environmental contaminants, food additives, toxins and other xenobiotics) that are not naturally produced by an organism.
In other words, there is both an endogenous metabolome and an exogenous metabolome. The endogenous metabolome can be further subdivided to include a "primary" and a "secondary" metabolome (particularly when referring to plant or microbial metabolomes). A primary metabolite is directly involved in the normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has important ecological function. Secondary metabolites may include pigments, antibiotics or waste products derived from partially metabolized xenobiotics. The study of the metabolome is called metabolomics.
Origins
The word metabolome appears to be a blending of the words "metabolite" and "chromosome". It was constructed to imply that metabolites are indirectly encoded by genes or act on genes and gene products. The term "metabolome" was first used in 1998 and was likely coined to match with existing biological terms referring to the complete set of genes (the genome), the complete set of proteins (the proteome) and the complete set of transcripts (the transcriptome). The first book on metabolomics was published in 2003. The first journal dedicated to metabolomics (titled simply "Metabolomics") was launched in 2005 and is currently edited by Prof. Roy Goodacre. Some of the m
Document 1:::
This is a list of articles that describe particular biomolecules or types of biomolecules.
A
For substances with an A- or α- prefix such as
α-amylase, please see the parent page (in this case Amylase).
A23187 (Calcimycin, Calcium Ionophore)
Abamectine
Abietic acid
Acetic acid
Acetylcholine
Actin
Actinomycin D
Adenine
Adenosine
Adenosine diphosphate (ADP)
Adenosine monophosphate (AMP)
Adenosine triphosphate (ATP)
Adenylate cyclase
Adiponectin
Adonitol
Adrenaline, epinephrine
Adrenocorticotropic hormone (ACTH)
Aequorin
Aflatoxin
Agar
Alamethicin
Alanine
Albumins
Aldosterone
Aleurone
Alpha-amanitin
Alpha-MSH (Melaninocyte stimulating hormone)
Allantoin
Allethrin
α-Amanatin, see Alpha-amanitin
Amino acid
Amylase (also see α-amylase)
Anabolic steroid
Anandamide (ANA)
Androgen
Anethole
Angiotensinogen
Anisomycin
Antidiuretic hormone (ADH)
Anti-Müllerian hormone (AMH)
Arabinose
Arginine
Argonaute
Ascomycin
Ascorbic acid (vitamin C)
Asparagine
Aspartic acid
Asymmetric dimethylarginine
ATP synthase
Atrial-natriuretic peptide (ANP)
Auxin
Avidin
Azadirachtin A – C35H44O16
B
Bacteriocin
Beauvericin
beta-Hydroxy beta-methylbutyric acid
beta-Hydroxybutyric acid
Bicuculline
Bilirubin
Biopolymer
Biotin (Vitamin H)
Brefeldin A
Brassinolide
Brucine
Butyric acid
C
Document 2:::
β-Leucine (beta-leucine) is a beta amino acid and positional isomer of -leucine which is naturally produced in humans via the metabolism of -leucine by the enzyme leucine 2,3-aminomutase. In cobalamin (vitamin B12) deficient individuals, plasma concentrations of β-leucine are elevated.
Biosynthesis and metabolism in humans
A small fraction of leucine metabolism – less than 5% in all tissues except the testes, where it accounts for about 33% – is initially catalyzed by leucine 2,3-aminomutase, producing β-leucine, which is subsequently metabolized into β-ketoisocaproate (β-KIC), β-ketoisocaproyl-CoA, and then acetyl-CoA by a series of uncharacterized enzymes.
Document 3:::
The following outline is provided as an overview of and topical guide to biochemistry:
Biochemistry – study of chemical processes in living organisms, including living matter. Biochemistry governs all living organisms and living processes.
Applications of biochemistry
Testing
Ames test – salmonella bacteria is exposed to a chemical under question (a food additive, for example), and changes in the way the bacteria grows are measured. This test is useful for screening chemicals to see if they mutate the structure of DNA and by extension identifying their potential to cause cancer in humans.
Pregnancy test – one uses a urine sample and the other a blood sample. Both detect the presence of the hormone human chorionic gonadotropin (hCG). This hormone is produced by the placenta shortly after implantation of the embryo into the uterine walls and accumulates.
Breast cancer screening – identification of risk by testing for mutations in two genes—Breast Cancer-1 gene (BRCA1) and the Breast Cancer-2 gene (BRCA2)—allow a woman to schedule increased screening tests at a more frequent rate than the general population.
Prenatal genetic testing – testing the fetus for potential genetic defects, to detect chromosomal abnormalities such as Down syndrome or birth defects such as spina bifida.
PKU test – Phenylketonuria (PKU) is a metabolic disorder in which the individual is missing an enzyme called phenylalanine hydroxylase. Absence of this enzyme allows the buildup of phenylalanine, which can lead to mental retardation.
Genetic engineering – taking a gene from one organism and placing it into another. Biochemists inserted the gene for human insulin into bacteria. The bacteria, through the process of translation, create human insulin.
Cloning – Dolly the sheep was the first mammal ever cloned from adult animal cells. The cloned sheep was, of course, genetically identical to the original adult sheep. This clone was created by taking cells from the udder of a six-year-old
Document 4:::
Leucine (symbol Leu or L) is an essential amino acid that is used in the biosynthesis of proteins. Leucine is an α-amino acid, meaning it contains an α-amino group (which is in the protonated −NH3+ form under biological conditions), an α-carboxylic acid group (which is in the deprotonated −COO− form under biological conditions), and a side chain isobutyl group, making it a non-polar aliphatic amino acid. It is essential in humans, meaning the body cannot synthesize it: it must be obtained from the diet. Human dietary sources are foods that contain protein, such as meats, dairy products, soy products, and beans and other legumes. It is encoded by the codons UUA, UUG, CUU, CUC, CUA, and CUG.
Like valine and isoleucine, leucine is a branched-chain amino acid. The primary metabolic end products of leucine metabolism are acetyl-CoA and acetoacetate; consequently, it is one of the two exclusively ketogenic amino acids, with lysine being the other. It is the most important ketogenic amino acid in humans.
Leucine and β-hydroxy β-methylbutyric acid, a minor leucine metabolite, exhibit pharmacological activity in humans and have been demonstrated to promote protein biosynthesis via the phosphorylation of the mechanistic target of rapamycin (mTOR).
Dietary leucine
As a food additive, L-leucine has E number E641 and is classified as a flavor enhancer.
Requirements
The Food and Nutrition Board (FNB) of the U.S. Institute of Medicine set Recommended Dietary Allowances (RDAs) for essential amino acids in 2002. For leucine, the RDA for adults 19 years and older is 42 mg/kg of body weight per day.
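As a quick worked example (illustrative only, not individual dietary advice): at that allowance, a hypothetical 70 kg adult would need roughly 42 mg/kg/day × 70 kg ≈ 2,940 mg, i.e. about 2.9 g of leucine per day.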
Sources
Health effects
As a dietary supplement, leucine has been found to slow the degradation of muscle tissue by increasing the synthesis of muscle proteins in aged rats. However, results of comparative studies are conflicted. Long-term leucine supplementation does not increase muscle mass or strength in healthy elderly men. More studies are needed, preferably ones based on an objective, random sa
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Glutathione is a low-molecular-weight compound found in living cells that is produced naturally by what?
A. blood
B. liver
C. brain
D. amino acids
Answer:
|
|
sciq-9739
|
multiple_choice
|
Damages and deaths are directly affected by what in an earthquake?
|
[
"shaking",
"structures",
"construction",
"natural"
] |
C
|
Relevant Documents:
Document 0:::
The Human-Induced Earthquake Database (HiQuake) is an online database that documents all reported cases of induced seismicity proposed on scientific grounds. It is the most complete compilation of its kind and is freely available to download via the associated website. The database is periodically updated to correct errors, revise existing entries, and add new entries reported in new scientific papers and reports. Suggestions for revisions and new entries can be made via the associated website.
History
In 2016, Nederlandse Aardolie Maatschappij funded a team of researchers from Durham University and Newcastle University to conduct a full review of induced seismicity. This review formed part of a scientific workshop aimed at estimating the maximum possible magnitude earthquake that might be induced by conventional gas production in the Groningen gas field.
The resulting database from the review was publicly released online on the 26 January 2017. The database was accompanied by the publication of two scientific papers, the more detailed of which is freely available online.
Document 1:::
Seismic moment is a quantity used by seismologists to measure the size of an earthquake. The scalar seismic moment is defined by the equation
$M_0 = \mu A D$, where
$\mu$ is the shear modulus of the rocks involved in the earthquake (in pascals (Pa), i.e. newtons per square meter),
$A$ is the area of the rupture along the geologic fault where the earthquake occurred (in square meters), and
$D$ is the average slip (displacement offset between the two sides of the fault) on $A$ (in meters).
$M_0$ thus has dimensions of torque, measured in newton meters. The connection between seismic moment and a torque is natural in the body-force equivalent representation of seismic sources as a double-couple (a pair of force couples with opposite torques): the seismic moment is the torque of each of the two couples. Despite having the same dimensions as energy, seismic moment is not a measure of energy. The relations between seismic moment, potential energy drop and radiated energy are indirect and approximative.
The seismic moment of an earthquake is typically estimated using whatever information is available to constrain its factors. For modern earthquakes, moment is usually estimated from ground motion recordings of earthquakes known as seismograms. For earthquakes that occurred in times before modern instruments were available, moment may be estimated from geologic estimates of the size of the fault rupture and the slip.
Seismic moment is the basis of the moment magnitude scale introduced by Hiroo Kanamori, which is often used to compare the size of different earthquakes and is especially useful for comparing the sizes of large (great) earthquakes.
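As a worked illustration of the definition above, the sketch below evaluates $M_0 = \mu A D$ for an assumed rupture and converts it to a moment magnitude with the commonly used relation $M_w = \tfrac{2}{3}(\log_{10} M_0 - 9.1)$ (with $M_0$ in newton metres). The rupture parameters are made-up example values, not a real event.

```python
import math

# Worked example of the scalar seismic moment M0 = mu * A * D.
# The rupture parameters below are illustrative assumptions, not a real earthquake.
mu = 3.0e10   # shear modulus in Pa (~30 GPa, typical for crustal rock)
area = 1.0e9  # rupture area in m^2 (e.g. a 50 km x 20 km fault patch)
slip = 1.0    # average slip in m

m0 = mu * area * slip                      # seismic moment in N*m
mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)  # moment magnitude (M0 in N*m)

print(f"M0 = {m0:.2e} N*m, Mw = {mw:.1f}")  # ~3.0e19 N*m, Mw ~ 6.9
```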
The seismic moment is not restricted to earthquakes. For a more general seismic source described by a seismic moment tensor (a symmetric tensor, but not necessarily a double couple tensor), the seismic moment is
See also
Richter magnitude scale
Moment magnitude scale
Sources
.
.
.
.
Seismology measurement
Moment (physics)
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
The George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES) was created by the National Science Foundation (NSF) to improve infrastructure design and construction practices to prevent or minimize damage during an earthquake or tsunami. Its headquarters were at Purdue University in West Lafayette, Indiana as part of cooperative agreement #CMMI-0927178, and it ran from 2009 till 2014. The mission of NEES is to accelerate improvements in seismic design and performance by serving as a collaboratory for discovery and innovation.
Description
The NEES network features 14 geographically distributed, shared-use laboratories that support several types of experimental work: geotechnical centrifuge research, shake table tests, large-scale structural testing, tsunami wave basin experiments, and field site research. Participating universities include: Cornell University; Lehigh University; Oregon State University; Rensselaer Polytechnic Institute; University at Buffalo, SUNY; University of California, Berkeley; University of California, Davis; University of California, Los Angeles; University of California, San Diego; University of California, Santa Barbara; University of Illinois at Urbana-Champaign; University of Minnesota; University of Nevada, Reno; and the University of Texas, Austin.
The equipment sites (labs) and a central data repository are connected to the global earthquake engineering community via the NEEShub, which is powered by the HUBzero software developed at Purdue University specifically to help the scientific community share resources and collaborate. The cyberinfrastructure, connected via Internet2, provides interactive simulation tools, a simulation tool development area, a curated central data repository, user-developed databases, animated presentations, user support, telepresence, a mechanism for uploading and sharing resources, and statistics about users and usage patterns.
This allows researchers to: securely store, organize and share da
Document 4:::
Temblor, Inc. is a tech company that provides information about earthquakes and enables users to both see what the seismic hazard is at their home, and learn about precautions to help lessen the risk. The company released a web app and a mobile app that can be found in the iTunes and Google Play stores. Temblor was founded in 2014 by Ross Stein and Volkan Sevilgen, both coming from the United States Geological Survey. Together they have published research on earthquakes that have struck in 30 countries. Upon release, Temblor has been mentioned in several news sources, including the New York Times, CBS News, SFGate, MSN, and the Los Angeles Times, in articles about earthquakes and earthquake preparedness.
Features
Temblor displays a map with earthquakes and faults. Liquefaction and landslide data are also shown in several locations. Users are able to plug in an address and get the seismic hazard rank for that location along with estimates for seismic shaking and home damage. They are also shown the extent to which these risks could be mitigated by buying earthquake insurance or retrofitting the house. These features are provided for free and without ads for the general public.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Damages and deaths are directly affected by what in an earthquake?
A. shaking
B. structures
C. construction
D. natural
Answer:
|
|
sciq-2272
|
multiple_choice
|
German doctor Rudolf Virchow first discovered what process when studying living cells under a microscope?
|
[
"evolution",
"photosynthesis",
"cell division",
"radiation"
] |
C
|
Relevant Documents:
Document 0:::
In biology, cell theory is a scientific theory first formulated in the mid-nineteenth century, that organisms are made up of cells, that they are the basic structural/organizational unit of all organisms, and that all cells come from pre-existing cells. Cells are the basic unit of structure in all organisms and also the basic unit of reproduction.
The theory was once universally accepted, but now some biologists consider non-cellular entities such as viruses living organisms, and thus disagree with the first tenet. As of 2021: "expert opinion remains divided roughly a third each between yes, no and don’t know". As there is no universally accepted definition of life, discussion still continues.
History
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the microscope, he was able to see pores. This was surprising at the time, as it was believed such structures had never been seen before. Matthias Schleiden and Theodor Schwann later studied cells of both animals and plants, and what they discovered were significant differences between the two types of cells. This put forth the idea that cells were fundamental not only to plants, but to animals as well.
Microscopes
The discovery of the cell was made possible through the invention of the microscope. In the first century BC, Romans were able to make glass. They discovered that objects appeared to be larger under the glass. The expanded use of lenses in eyeglasses in the 13th century probably led to wider spread use of simple microscopes (magnifying glasses) with limited magnification. Compound microscopes, which combine an objective lens with an eyepiece to view a real image achieving much higher magnification, first appeared in Europe around 1620. In 1665, Robert Hooke used a microscope
Document 1:::
The historical application of biotechnology throughout time is provided below in chronological order.
These discoveries, inventions and modifications are evidence of the application of biotechnology since before the common era and describe notable events in the research, development and regulation of biotechnology.
Before Common Era
6000 BCE – Yogurt and cheese made with lactic acid-producing bacteria by various people.
5000 BCE – Chinese discover fermentation through beer making.
4500 BCE – Egyptians bake leavened bread using yeast.
500 BCE – Moldy soybean curds used as an antibiotic.
300 BCE – The Greeks practice crop rotation for maximum soil fertility.
100 AD – Chinese use chrysanthemum as a natural insecticide.
Pre-20th century
1663 – First recorded description of living cells by Robert Hooke.
1677 – Antonie van Leeuwenhoek discovers and describes bacteria and protozoa.
1798 – Edward Jenner uses first viral vaccine to inoculate a child from smallpox.
1802 – The first recorded use of the word biology.
1824 – Henri Dutrochet discovers that tissues are composed of living cells.
1838 – Protein discovered, named and recorded by Gerardus Johannes Mulder and Jöns Jacob Berzelius.
1862 – Louis Pasteur discovers the bacterial origin of fermentation.
1863 – Gregor Mendel discovers the laws of inheritance.
1864 – invents first centrifuge to separate cream from milk.
1869 – Friedrich Miescher identifies DNA in the sperm of a trout.
1871 – Felix Hoppe-Seyler discovers invertase, which is still used for making artificial sweeteners.
1877 – Robert Koch develops a technique for staining bacteria for identification.
1878 – Walther Flemming discovers chromatin leading to the discovery of chromosomes.
1881 – Louis Pasteur develops vaccines against bacteria that cause cholera and anthrax in chickens.
1885 – Louis Pasteur and Emile Roux develop the first rabies vaccine and use it on Joseph Meister.
20th century
1919 – Károly Ereky, a Hungarian
Document 2:::
The online project Virtual Laboratory. Essays and Resources on the Experimentalization of Life, 1830-1930, located at the Max Planck Institute for the History of Science, is dedicated to research in the history of the experimentalization of life. The term experimentalization describes the interaction between the life sciences, the arts, architecture, media and technology within the experimental paradigm, ca. 1830 to 1930. The Virtual Laboratory is a platform that not only presents work on this topic but also acts as a research environment for new studies.
History
In 1977, the first version of the Virtual Laboratory was presented, titled Virtual Laboratory of Physiology. At this time, the main focus lay on the development of technological preconditions of physiological research in the 19th century. Therefore, a database with relevant texts and images was created. In 1998, the concept still used today was created after a series of modifications, followed by the publication of a cd-ROM in 1999. At this time, the focus had been expanded from physiology to the life sciences in general, as well as the arts and literature. As the project had been extended from a sole database to a platform for historiographical research, it was presented at the conference Using the World Wide Web for Historical Research in Science and Technology organized by the Alfred P. Sloan Foundation at Stanford University. In 2000, the project was incorporated into the research project The Experimentalization of Life, funded by the Volkswagen Foundation. This was followed by another presentation at the conference Virtual Research? The impact of new technologies on scientific practices at the ETH Zurich. In 2002, the first version of the Virtual Laboratory went online. Since 2008, the Virtual Laboratory is listed as a journal under the ISSN number 1866-4784.
Structure
The Virtual Laboratory consists of two parts: The archive holds a large number of digitized texts and images as well as data sheets c
Document 3:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and of movement.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i
Document 4:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind: neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases, which has aided the overall understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
German doctor Rudolf Virchow first discovered what process when studying living cells under a microscope?
A. evolution
B. photosynthesis
C. cell division
D. radiation
Answer:
|
|
sciq-10536
|
multiple_choice
|
In the stomach, which material's arrival triggers churning and the release of gastric juices?
|
[
"food",
"acid",
"hair",
"bile"
] |
A
|
Relevant Documents:
Document 0:::
Satiety (/səˈtiːəti/ sə-TEE-ə-tee) is a state or condition of fullness gratified beyond the point of satisfaction, the opposite of hunger. Following satiation (meal termination), satiety is a feeling of fullness lasting until the next meal. When food is present in the GI tract after a meal, satiety signals overrule hunger signals, but satiety slowly fades as hunger increases.
The satiety center in animals is located in ventromedial nucleus of the hypothalamus.
Mechanism
Satiety is signaled through the vagus nerve as well as circulating hormones. During intake of a meal, the stomach must stretch to accommodate this increased volume. This gastric accommodation activates stretch receptors in the proximal (upper) portion of the stomach. These receptors then signal through afferent vagus nerve fibers to the hypothalamus, increasing satiety.
Signalling factors
In addition, as the food moves into the duodenum, duodenal cells release multiple substances that affect digestion and satiety. Glucagon-like peptide-1 (GLP-1) is an incretin released by the duodenum that inhibits relaxation of the stomach. This inhibition causes increased stretch of the stomach, increasing activation of proximal gastric stretch receptors. It also slows overall gut motility, increasing the duration of satiety. This effect is used to increase weight loss and treat obesity through GLP-1 agonists. Cholecystokinin (CCK) is a gut peptide produced by the duodenum in response to fat and proteins. CCK has the effect of slowing gut motility and increasing satiety as well as activating release of pancreatic digestive enzymes and bile from the gallbladder.
See also
Satiety value
Prader–Willi syndrome
Document 1:::
Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use.
In the human digestive system, food enters the mouth and mechanical digestion of the food starts by the action of mastication (chewing), a form of mechanical digestion, and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food; the saliva also contains mucus, which lubricates the food, and hydrogen carbonate, which provides the ideal conditions of pH (alkaline) for amylase to work, and electrolytes (Na+, K+, Cl−, HCO−3). About 30% of starch is hydrolyzed into disaccharide in the oral cavity (mouth). After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. They provide a slimy layer that acts as a shield against the damag
Document 2:::
Gastrointestinal physiology is the branch of human physiology that addresses the physical function of the gastrointestinal (GI) tract. The function of the GI tract is to process ingested food by mechanical and chemical means, extract nutrients and excrete waste products. The GI tract is composed of the alimentary canal, that runs from the mouth to the anus, as well as the associated glands, chemicals, hormones, and enzymes that assist in digestion. The major processes that occur in the GI tract are: motility, secretion, regulation, digestion and circulation. The proper function and coordination of these processes are vital for maintaining good health by providing for the effective digestion and uptake of nutrients.
Motility
The gastrointestinal tract generates motility using smooth muscle subunits linked by gap junctions. These subunits fire spontaneously in either a tonic or a phasic fashion. Tonic contractions are those contractions that are maintained from several minutes up to hours at a time. These occur in the sphincters of the tract, as well as in the anterior stomach. The other type of contractions, called phasic contractions, consist of brief periods of both relaxation and contraction, occurring in the posterior stomach and the small intestine, and are carried out by the muscularis externa.
Motility may be overactive (hypermotility), leading to diarrhea or vomiting, or underactive (hypomotility), leading to constipation or vomiting; either may cause abdominal pain.
Stimulation
The stimulation for these contractions likely originates in modified smooth muscle cells called interstitial cells of Cajal. These cells cause spontaneous cycles of slow wave potentials that can cause action potentials in smooth muscle cells. They are associated with the contractile smooth muscle via gap junctions. These slow wave potentials must reach a threshold level for the action potential to occur, whereupon Ca2+ channels on the smooth muscle open and an action potential
Document 3:::
The basal or basic electrical rhythm (BER) or electrical control activity (ECA) is the spontaneous depolarization and repolarization of pacemaker cells known as interstitial cells of Cajal (ICCs) in the smooth muscle of the stomach, small intestine, and large intestine. This electrical rhythm is spread through gap junctions in the smooth muscle of the GI tract. These pacemaker cells, also called the ICCs, control the frequency of contractions in the gastrointestinal tract. The cells can be located in either the circular or longitudinal layer of the smooth muscle in the GI tract; circular for the small and large intestine, longitudinal for the stomach. The frequency of contraction differs at each location in the GI tract beginning with 3 per minute in the stomach, then 12 per minute in the duodenum, 9 per minute in the ileum, and a normally low one contraction per 30 minutes in the large intestines that increases 3 to 4 times a day due to a phenomenon called mass movement. The basal electrical rhythm controls the frequency of contraction but additional neuronal and hormonal controls regulate the strength of each contraction.
Physiology
Smooth muscle within the GI tract causes the involuntary peristaltic motion that moves consumed food down the esophagus and towards the rectum. The smooth muscle throughout most of the GI tract is divided into two layers: an outer longitudinal layer and an inner circular layer. Both layers of muscle are located within the muscularis externa. The stomach has a third layer: an innermost oblique layer.
The physical contractions of the smooth muscle cells can be caused by action potentials in efferent motor neurons of the enteric nervous system, or by receptor mediated calcium influx. These efferent motor neurons of the enteric nervous system are cholinergic and adrenergic neurons. The inner circular layer is innervated by both excitatory and inhibitory motor neurons, while the outer longitudinal layer is innervated by mainly excitato
Document 4:::
Gastric pits are indentations in the stomach which denote entrances to 3-5 tubular shaped gastric glands. They are deeper in the pylorus than they are in the other parts of the stomach. The human stomach has several million of these pits which dot the surface of the lining epithelium. Surface mucous cells line the pits themselves but give way to a series of other types of cells which then line the glands themselves.
Gastric acid
Gastric acid, also known as gastric juice, is secreted from gastric glands, which are located in gastric pits. Gastric juice contains hydrochloric acid, pepsinogen and mucus in a healthy adult. Hydrochloric acid is secreted by parietal cells, pepsinogen is secreted by gastric chief cells and mucus is secreted by mucus neck cells.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In the stomach, which material's arrival triggers churning and the release of gastric juices?
A. food
B. acid
C. hair
D. bile
Answer:
|
|
sciq-1622
|
multiple_choice
|
What is water falling from the sky called?
|
[
"precipitation",
"erosion",
"vaporization",
"distillation"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
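The following is a minimal sketch in Python of the closure property usually taken as the defining axiom of a knowledge space: the family of feasible knowledge states contains the empty state and the full domain and is closed under union. The three-skill domain and the states below are invented purely for illustration.

from itertools import combinations

def is_knowledge_space(domain, states):
    # A knowledge space contains the empty state and the full domain,
    # and the union of any two feasible states is again a feasible state.
    states = set(states)
    if frozenset() not in states or frozenset(domain) not in states:
        return False
    return all((a | b) in states for a, b in combinations(states, 2))

# Hypothetical domain of three skills and a chain of feasible states
domain = {"counting", "addition", "multiplication"}
states = {
    frozenset(),
    frozenset({"counting"}),
    frozenset({"counting", "addition"}),
    frozenset({"counting", "addition", "multiplication"}),
}
print(is_knowledge_space(domain, states))  # True: a chain of states is trivially union-closed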
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about the subject is then a subset of Q; the set of
Document 2:::
Evapotranspiration (ET) is the combined processes which move water from the Earth's surface into the atmosphere. It covers both water evaporation (movement of water to the air directly from soil, canopies, and water bodies) and transpiration (evaporation that occurs through the stomata, or openings, in plant leaves). Evapotranspiration is an important part of the local water cycle and climate, and measurement of it plays a key role in agricultural irrigation and water resource management.
Definition of evapotranspiration
Evapotranspiration is a combination of evaporation and transpiration, measured in order to better understand crop water requirements, irrigation scheduling, and watershed management. The two key components of evapotranspiration are:
Evaporation: the movement of water directly to the air from sources such as the soil and water bodies. It can be affected by factors including heat, humidity, solar radiation and wind speed.
Transpiration: the movement of water from root systems, through a plant, and exit into the air as water vapor. This exit occurs through stomata in the plant. Rate of transpiration can be influenced by factors including plant type, soil type, weather conditions and water content, and also cultivation practices.
Evapotranspiration is typically measured in millimeters of water (i.e. volume of water moved per unit area of the Earth's surface) in a set unit of time. Globally, it is estimated that on average between three-fifths and three-quarters of land precipitation is returned to the atmosphere via evapotranspiration.
Evapotranspiration does not, in general, account for other mechanisms which are involved in returning water to the atmosphere, though some of these, such as snow and ice sublimation in regions of high elevation or high latitude, can make a large contribution to atmospheric moisture even under standard conditions.
Factors that impact evapotranspiration levels
Primary factors
Because evaporation and transpiration
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 4:::
Further Mathematics is the title given to a number of advanced secondary mathematics courses. The term "Higher and Further Mathematics", and the term "Advanced Level Mathematics", may also refer to any of several advanced mathematics courses at many institutions.
In the United Kingdom, Further Mathematics describes a course studied in addition to the standard mathematics AS-Level and A-Level courses. In the state of Victoria in Australia, it describes a course delivered as part of the Victorian Certificate of Education (see § Australia (Victoria) for a more detailed explanation). Globally, it describes a course studied in addition to GCE AS-Level and A-Level Mathematics, or one which is delivered as part of the International Baccalaureate Diploma.
In other words, more mathematics can also be referred to as part of advanced mathematics, or advanced level math.
United Kingdom
Background
A qualification in Further Mathematics involves studying both pure and applied modules. Whilst the pure modules (formerly known as Pure 4–6 or Core 4–6, now known as Further Pure 1–3, where 4 exists for the AQA board) build on knowledge from the core mathematics modules, the applied modules may start from first principles.
The structure of the qualification varies between exam boards.
With regard to Mathematics degrees, most universities do not require Further Mathematics, and may incorporate foundation math modules or offer "catch-up" classes covering any additional content. Exceptions are the University of Warwick, the University of Cambridge which requires Further Mathematics to at least AS level; University College London requires or recommends an A2 in Further Maths for its maths courses; Imperial College requires an A in A level Further Maths, while other universities may recommend it or may promise lower offers in return. Some schools and colleges may not offer Further mathematics, but online resources are available
Although the subject has about 60% of its cohort obtainin
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is water falling from the sky called?
A. precipitation
B. erosion
C. vaporization
D. distillation
Answer:
|
|
sciq-1983
|
multiple_choice
|
The wave on a guitar string is transverse. The sound wave rattles a sheet of paper in a direction that shows the sound wave is what?
|
[
"magnetic",
"lateral",
"longitudinal",
"obtuse"
] |
C
|
Relevant Documents:
Document 0:::
In physics, a mechanical wave is a wave that is an oscillation of matter, and therefore transfers energy through a medium. While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial equilibrium position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves. Some of the most common examples of mechanical waves are water waves, sound waves, and seismic waves.
Like all waves, mechanical waves transport energy. This energy propagates in the same direction as the wave. A wave requires an initial energy input; once this initial energy is added, the wave travels through the medium until all its energy is transferred. In contrast, electromagnetic waves require no medium, but can still travel through one.
Transverse wave
A transverse wave is the form of a wave in which particles of medium vibrate about their mean position perpendicular to the direction of the motion of the wave.
To see an example, move an end of a Slinky (whose other end is fixed) to the left-and-right of the Slinky, as opposed to to-and-fro. Light also has properties of a transverse wave, although it is an electromagnetic wave.
Longitudinal wave
Longitudinal waves cause the medium to vibrate parallel to the direction of the wave. It consists of multiple compressions and rarefactions. The rarefaction is the farthest distance apart in the longitudinal wave and the compression is the closest distance together. The speed of the longitudinal wave is increased in higher index of refraction, due to the closer proximity of the atoms in the medium that is being compressed. Sound is a longitudinal wave.
Surface waves
This type of wave travels along the surface or interface between two media. An example of a surface wave would be waves in a pool, or in an ocean
Document 1:::
In physics, a transverse wave is a wave that oscillates perpendicularly to the direction of the wave's advance. In contrast, a longitudinal wave travels in the direction of its oscillations. All waves move energy from place to place without transporting the matter in the transmission medium if there is one. Electromagnetic waves are transverse without requiring a medium. The designation “transverse” indicates the direction of the wave is perpendicular to the displacement of the particles of the medium through which it passes, or in the case of EM waves, the oscillation is perpendicular to the direction of the wave.
A simple example is given by the waves that can be created on a horizontal length of string by anchoring one end and moving the other end up and down. Another example is the waves that are created on the membrane of a drum. The waves propagate in directions that are parallel to the membrane plane, but each point in the membrane itself gets displaced up and down, perpendicular to that plane. Light is another example of a transverse wave, where the oscillations are the electric and magnetic fields, which point at right angles to the ideal light rays that describe the direction of propagation.
Transverse waves commonly occur in elastic solids due to the shear stress generated; the oscillations in this case are the displacement of the solid particles away from their relaxed position, in directions perpendicular to the propagation of the wave. These displacements correspond to a local shear deformation of the material. Hence a transverse wave of this nature is called a shear wave. Since fluids cannot resist shear forces while at rest, propagation of transverse waves inside the bulk of fluids is not possible. In seismology, shear waves are also called secondary waves or S-waves.
Transverse waves are contrasted with longitudinal waves, where the oscillations occur in the direction of the wave. The standard example of a longitudinal wave is a sound wave or "
Document 2:::
Longitudinal waves are waves in which the vibration of the medium is parallel to the direction the wave travels and displacement of the medium is in the same (or opposite) direction of the wave propagation. Mechanical longitudinal waves are also called compressional or compression waves, because they produce compression and rarefaction when traveling through a medium, and pressure waves, because they produce increases and decreases in pressure. A wave along the length of a stretched Slinky toy, where the distance between coils increases and decreases, is a good visualization. Real-world examples include sound waves (vibrations in pressure, particle displacement, and particle velocity propagated in an elastic medium) and seismic P-waves (created by earthquakes and explosions).
The other main type of wave is the transverse wave, in which the displacements of the medium are at right angles to the direction of propagation. Transverse waves, for instance, describe some bulk sound waves in solid materials (but not in fluids); these are also called "shear waves" to differentiate them from the (longitudinal) pressure waves that these materials also support.
Nomenclature
"Longitudinal waves" and "transverse waves" have been abbreviated by some authors as "L-waves" and "T-waves", respectively, for their own convenience. While these two abbreviations have specific meanings in seismology (L-wave for Love wave or long wave) and electrocardiography (see T wave), some authors chose to use "l-waves" (lowercase 'L') and "t-waves" instead, although they are not commonly found in physics writings except for some popular science books.
Sound waves
In the case of longitudinal harmonic sound waves, the frequency and wavelength can be described by the formula y(x,t) = y₀ cos(ω(t − x/c)), where:
y is the displacement of the point on the traveling sound wave;
x is the distance from the point to the wave's source;
t is the time elapsed;
y0 is the amplitude of the oscillations,
c is the speed of the wave;
Document 3:::
A waveguide is a structure that guides waves by restricting the transmission of energy to one direction. Common types of waveguides include acoustic waveguides which direct sound, optical waveguides which direct light, and radio-frequency waveguides which direct electromagnetic waves other than light like radio waves.
Without the physical constraint of a waveguide, waves would expand into three-dimensional space and their intensities would decrease according to the inverse square law.
There are different types of waveguides for different types of waves. The original and most common meaning is a hollow conductive metal pipe used to carry high frequency radio waves, particularly microwaves. Dielectric waveguides are used at higher radio frequencies, and transparent dielectric waveguides and optical fibers serve as waveguides for light. In acoustics, air ducts and horns are used as waveguides for sound in musical instruments and loudspeakers, and specially-shaped metal rods conduct ultrasonic waves in ultrasonic machining.
The geometry of a waveguide reflects its function; in addition to more common types that channel the wave in one dimension, there are two-dimensional slab waveguides which confine waves to two dimensions. The frequency of the transmitted wave also dictates the size of a waveguide: each waveguide has a cutoff wavelength determined by its size and will not conduct waves of greater wavelength; an optical fiber that guides light will not transmit microwaves which have a much larger wavelength. Some naturally occurring structures can also act as waveguides. The SOFAR channel layer in the ocean can guide the sound of whale song across enormous distances.
Any shape of cross section of waveguide can support EM waves. Irregular shapes are difficult to analyse. Commonly used waveguides are rectangular and circular in shape.
Uses
The uses of waveguides for transmitting signals were known even before the term was coined. The phenomenon of sound waves g
Document 4:::
This is a list of wave topics.
0–9
21 cm line
A
Abbe prism
Absorption spectroscopy
Absorption spectrum
Absorption wavemeter
Acoustic wave
Acoustic wave equation
Acoustics
Acousto-optic effect
Acousto-optic modulator
Acousto-optics
Airy disc
Airy wave theory
Alfvén wave
Alpha waves
Amphidromic point
Amplitude
Amplitude modulation
Animal echolocation
Antarctic Circumpolar Wave
Antiphase
Aquamarine Power
Arrayed waveguide grating
Artificial wave
Atmospheric diffraction
Atmospheric wave
Atmospheric waveguide
Atom laser
Atomic clock
Atomic mirror
Audience wave
Autowave
Averaged Lagrangian
B
Babinet's principle
Backward wave oscillator
Bandwidth-limited pulse
beat
Berry phase
Bessel beam
Beta wave
Black hole
Blazar
Bloch's theorem
Blueshift
Boussinesq approximation (water waves)
Bow wave
Bragg diffraction
Bragg's law
Breaking wave
Bremsstrahlung, Electromagnetic radiation
Brillouin scattering
Bullet bow shockwave
Burgers' equation
Business cycle
C
Capillary wave
Carrier wave
Cherenkov radiation
Chirp
Ernst Chladni
Circular polarization
Clapotis
Closed waveguide
Cnoidal wave
Coherence (physics)
Coherence length
Coherence time
Cold wave
Collimated light
Collimator
Compton effect
Comparison of analog and digital recording
Computation of radiowave attenuation in the atmosphere
Continuous phase modulation
Continuous wave
Convective heat transfer
Coriolis frequency
Coronal mass ejection
Cosmic microwave background radiation
Coulomb wave function
Cutoff frequency
Cutoff wavelength
Cymatics
D
Damped wave
Decollimation
Delta wave
Dielectric waveguide
Diffraction
Direction finding
Dispersion (optics)
Dispersion (water waves)
Dispersion relation
Dominant wavelength
Doppler effect
Doppler radar
Douglas Sea Scale
Draupner wave
Droplet-shaped wave
Duhamel's principle
E
E-skip
Earthquake
Echo (phenomenon)
Echo sounding
Echolocation (animal)
Echolocation (human)
Eddy (fluid dynamics)
Edge wave
Eikonal equation
Ekman layer
Ekman spiral
Ekman transport
El Niño–Southern Oscillation
El
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The wave on a guitar string is transverse. The sound wave rattles a sheet of paper in a direction that shows the sound wave is what?
A. magnetic
B. lateral
C. longitudinal
D. obtuse
Answer:
|
|
sciq-6890
|
multiple_choice
|
A red blood cell will swell and burst when placed in a?
|
[
"hypotonic solution",
"exothermic solution",
"acidic solution",
"dissolved solution"
] |
A
|
Relevant Documents:
Document 0:::
Reticulocytosis is a condition where there is an increase in reticulocytes, immature red blood cells.
It is commonly seen in anemia. They are seen on blood films when the bone marrow is highly active in an attempt to replace red blood cell loss such as in haemolytic anaemia or haemorrhage.
Document 1:::
The red pulp of the spleen is composed of connective tissue known also as the cords of Billroth and many splenic sinusoids that are engorged with blood, giving it a red color. Its primary function is to filter the blood of antigens, microorganisms, and defective or worn-out red blood cells.
The spleen is made of red pulp and white pulp, separated by the marginal zone; 76-79% of a normal spleen is red pulp. Unlike white pulp, which mainly contains lymphocytes such as T cells, red pulp is made up of several different types of blood cells, including platelets, granulocytes, red blood cells, and plasma.
The red pulp also acts as a large reservoir for monocytes. These monocytes are found in clusters in the Billroth's cords (red pulp cords). The population of monocytes in this reservoir is greater than the total number of monocytes present in circulation. They can be rapidly mobilised to leave the spleen and assist in tackling ongoing infections.
Sinusoids
The splenic sinusoids are wide vessels that drain into pulp veins, which themselves drain into trabecular veins. Gaps in the endothelium lining the sinusoids mechanically filter blood cells as they enter the spleen. Worn-out or abnormal red cells attempting to squeeze through the narrow intercellular spaces become badly damaged, and are subsequently devoured by macrophages in the red pulp. In addition to clearing aged red blood cells, the sinusoids also filter out cellular debris, particles that could clutter up the bloodstream.
Cells found in red pulp
Red pulp consists of a dense network of fine reticular fiber, continuous with those of the splenic trabeculae, to which are applied flat, branching cells. The meshes of the reticulum are filled with blood:
White blood cells are found to be in larger proportion than they are in ordinary blood.
Large rounded cells, termed splenic cells, are also seen; these are capable of ameboid movement, and often contain pigment and red-blood corpuscles in their interior.
The cell
Document 2:::
A splenocyte can be any one of the different white blood cell types as long as it is situated in the spleen or purified from splenic tissue.
Splenocytes consist of a variety of cell populations such as T and B lymphocytes, dendritic cells and macrophages, which have different immune functions.
Document 3:::
– platelet factor 3
– platelet factor 4
– prothrombin
– thrombin
– thromboplastin
– von willebrand factor
– fibrin
– fibrin fibrinogen degradation products
– fibrin foam
– fibrin tissue adhesive
– fibrinopeptide a
– fibrinopeptide b
– glycophorin
– hemocyanin
– hemoglobins
– carboxyhemoglobin
– erythrocruorins
– fetal hemoglobi
Document 4:::
In hemocytometry, Türk's solution (or Türk's fluid) is a hematological stain (either crystal violet or aqueous methylene blue) prepared in 99% acetic acid (glacial) and distilled water. The solution destroys the
red blood cells and platelets within a blood sample (acetic acid being the main lyzing agent), and stains the nuclei of the white blood cells, making them easier to see and count.
Türk's solution is intended for use in determining total leukocyte count in a defined volume of blood. Erythrocytes are hemolyzed while leukocytes are stained for easy visualization.
Composition of Türk's solution is as follows:
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A red blood cell will swell and burst when placed in a?
A. hypotonic solution
B. exothermic solution
C. acidic solution
D. dissolved solution
Answer:
|
|
sciq-3132
|
multiple_choice
|
What is the force of attraction between fundamental particles called quarks called?
|
[
"weak nuclear force",
"strong nuclear force",
"magnetism",
"gravity"
] |
B
|
Relevant Documents:
Document 0:::
A non-contact force is a force which acts on an object without coming physically in contact with it. The most familiar non-contact force is gravity, which confers weight. In contrast, a contact force is a force which acts on an object coming physically in contact with it.
All four known fundamental interactions are non-contact forces:
Gravity, the force of attraction that exists among all bodies that have mass. The force each body exerts on the other is proportional to the product of the two masses divided by the square of the distance between them; a short numerical sketch of this relation follows the list.
Electromagnetism is the force that causes the interaction between electrically charged particles; the areas in which this happens are called electromagnetic fields. Examples of this force include: electricity, magnetism, radio waves, microwaves, infrared, visible light, X-rays and gamma rays. Electromagnetism mediates all chemical, biological, electrical and electronic processes.
Strong nuclear force: Unlike gravity and electromagnetism, the strong nuclear force is a short distance force that takes place between fundamental particles within a nucleus. It is charge independent and acts equally between a proton and a proton, a neutron and a neutron, and a proton and a neutron. The strong nuclear force is the strongest force in nature; however, its range is small (acting only over distances of the order of 10−15 m). The strong nuclear force mediates both nuclear fission and fusion reactions.
Weak nuclear force: The weak nuclear force mediates the β decay of a neutron, in which the neutron decays into a proton and in the process emits a β particle and an uncharged particle called a neutrino. As a result of mediating the β decay process, the weak nuclear force plays a key role in supernovas. Both the strong and weak forces form an important part of quantum mechanics. The Casimir effect could also be thought of as a non-contact force.
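The following is a minimal numerical sketch in Python of the inverse-square relation described in the gravity item above; the two masses and the separation are illustrative values only.

G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def gravitational_force(m1_kg, m2_kg, r_m):
    # Magnitude of the mutual attraction between two point masses: F = G * m1 * m2 / r^2
    return G * m1_kg * m2_kg / r_m**2

# Illustrative: two 1000 kg masses 1 m apart attract with only ~6.7e-5 N,
# which is why gravity is negligible between everyday objects.
print(gravitational_force(1000.0, 1000.0, 1.0))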
See also
Tension
Body force
Surface
Document 1:::
Quantum hadrodynamics is an effective field theory pertaining to interactions between hadrons, that is, hadron-hadron interactions or the inter-hadron force. It is "a framework for describing the nuclear many-body problem as a relativistic system of baryons and mesons". Quantum hadrodynamics is closely related and partly derived from quantum chromodynamics, which is the theory of interactions between quarks and gluons that bind them together to form hadrons, via the strong force.
An important phenomenon in quantum hadrodynamics is the nuclear force, or residual strong force. It is the force operating between those hadrons which are nucleons – protons and neutrons – as it binds them together to form the atomic nucleus. The bosons which mediate the nuclear force are three types of mesons: pions, rho mesons and omega mesons. Since mesons are themselves hadrons, quantum hadrodynamics also deals with the interaction between the carriers of the nuclear force itself, alongside the nucleons bound by it. The hadrodynamic force keeps nuclei bound, against the electrodynamic force which operates to break them apart (due to the mutual repulsion between protons in the nucleus).
Quantum hadrodynamics, dealing with the nuclear force and its mediating mesons, can be compared to other quantum field theories which describe fundamental forces and their associated bosons: quantum chromodynamics, dealing with the strong interaction and gluons; quantum electrodynamics, dealing with electromagnetism and photons; quantum flavordynamics, dealing with the weak interaction and W and Z bosons.
See also
Atomic nucleus
Hadron
Nuclear force
Quantum chromodynamics and strong interaction
Quantum electrodynamics and electromagnetism
Quantum flavordynamics and weak interaction
Document 2:::
Static force fields are fields, such as a simple electric, magnetic or gravitational fields, that exist without excitations. The most common approximation method that physicists use for scattering calculations can be interpreted as static forces arising from the interactions between two bodies mediated by virtual particles, particles that exist for only a short time determined by the uncertainty principle. The virtual particles, also known as force carriers, are bosons, with different bosons associated with each force.
The virtual-particle description of static forces is capable of identifying the spatial form of the forces, such as the inverse-square behavior in Newton's law of universal gravitation and in Coulomb's law. It is also able to predict whether the forces are attractive or repulsive for like bodies.
The path integral formulation is the natural language for describing force carriers. This article uses the path integral formulation to describe the force carriers for spin 0, 1, and 2 fields. Pions, photons, and gravitons fall into these respective categories.
There are limits to the validity of the virtual particle picture. The virtual-particle formulation is derived from a method known as perturbation theory which is an approximation assuming interactions are not too strong, and was intended for scattering problems, not bound states such as atoms. For the strong force binding quarks into nucleons at low energies, perturbation theory has never been shown to yield results in accord with experiments, thus, the validity of the "force-mediating particle" picture is questionable. Similarly, for bound states the method fails. In these cases, the physical interpretation must be re-examined. As an example, the calculations of atomic structure in atomic physics or of molecular structure in quantum chemistry could not easily be repeated, if at all, using the "force-mediating particle" picture.
Use of the "force-mediating particle" picture (FMPP) is unnecessary in
Document 3:::
The Smoluchowski factor, also known as von Smoluchowski's f-factor is related to inter-particle interactions. It is named after Marian Smoluchowski.
Document 4:::
In physics, there are four observed fundamental interactions (also known as fundamental forces) that form the basis of all known interactions in nature: gravitational, electromagnetic, strong nuclear, and weak nuclear forces. Some speculative theories have proposed a fifth force to explain various anomalous observations that do not fit existing theories. The characteristics of this fifth force depend on the hypothesis being advanced. Many postulate a force roughly the strength of gravity (i.e., it is much weaker than electromagnetism or the nuclear forces) with a range of anywhere from less than a millimeter to cosmological scales. Another proposal is a new weak force mediated by W′ and Z′ bosons.
The search for a fifth force has increased in recent decades due to two discoveries in cosmology which are not explained by current theories. It has been discovered that most of the mass of the universe is accounted for by an unknown form of matter called dark matter. Most physicists believe that dark matter consists of new, undiscovered subatomic particles, but some believe that it could be related to an unknown fundamental force. Second, it has also recently been discovered that the expansion of the universe is accelerating, which has been attributed to a form of energy called dark energy. Some physicists speculate that a form of dark energy called quintessence could be a fifth force.
Experimental approaches
A new fundamental force might be difficult to test. Gravity, for example, is such a weak force that the gravitational interaction between two objects is only significant when at least one of them has a great mass. Therefore, it takes very sensitive equipment to measure gravitational interactions between objects that are small compared to the Earth. A new (or "fifth") fundamental force might similarly be weak and therefore difficult to detect. Nonetheless, in the late 1980s a fifth force, operating on municipal scales (i.e. with a range of about 100 meters), was rep
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the force of attraction between the fundamental particles known as quarks called?
A. weak nuclear force
B. strong nuclear force
C. magnetism
D. gravity
Answer:
|
|
sciq-7853
|
multiple_choice
|
Where is gabbro found?
|
[
"forest floor",
"oceanic crust",
"volcanoes",
"wetlands"
] |
B
|
Relavent Documents:
Document 0:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered the exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required the exam score as part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions on the biochemistry subject test.
Scores were scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores were 760 (corresponding to the 99th percentile) and 320 (1st percentile) respectively. The mean score for all test takers from July 2009 to July 2012 was 526, with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the students' chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
Document 3:::
The Inverness Campus is an area in Inverness, Scotland. 5.5 hectares of the site have been designated as an enterprise area for life sciences by the Scottish Government. This designation is intended to encourage research and development in the field of life sciences, by providing incentives to locate at the site.
The enterprise area is part of a larger site, over 200 acres, which will house Inverness College, Scotland's Rural College (SRUC), the University of the Highlands and Islands, a health science centre and sports and other community facilities. The purpose built research hub will provide space for up to 30 staff and researchers, allowing better collaboration.
The Highland Science Academy will be located on the site, a collaboration formed by Highland Council, employers and public bodies. The academy will be aimed towards assisting young people to gain the necessary skills to work in the energy, engineering and life sciences sectors.
History
The site was identified in 2006. Work started to develop the infrastructure on the site in early 2012. A virtual tour was made available in October 2013 to help mark Doors Open Day.
The construction had reached the halfway stage in May 2014, meaning it was on track to open its doors to its first students in August 2015.
In May 2014, work was due to commence on a building designed to provide office space and laboratories as part of the campus's "life science" sector. Morrison Construction have been appointed to undertake the building work.
Scotland's Rural College (SRUC) will be able to relocate their Inverness-based activities to the Campus. SRUC's research centre for Comparative Epidemiology and Medicine, and Agricultural Business Consultancy services could co-locate with UHI where their activities have complementary themes.
By the start of 2017, there were more than 600 people working at the site.
In June 2021, a new bridge opened connecting Inverness Campus to Inverness Shopping Park. It crosses the Aberdeen
Document 4:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where is gabbro found?
A. forest floor
B. oceanic crust
C. volcanoes
D. wetlands
Answer:
|
|
sciq-1229
|
multiple_choice
|
Scientists use what scale to illustrate the order in which events on earth have happened?
|
[
"ecological succession",
"fossil record",
"geologic time scale",
"cataclysmic time scale"
] |
C
|
Relavent Documents:
Document 0:::
Megaevolution describes the most dramatic events in evolution. It is no longer suggested that the evolutionary processes involved are necessarily special, although in some cases they might be. Whereas macroevolution can apply to relatively modest changes that produced diversification of species and genera and are readily compared to microevolution, "megaevolution" is used for great changes. Megaevolution has been extensively debated because it has been seen as a possible objection to Charles Darwin's theory of gradual evolution by natural selection.
A list was prepared by John Maynard Smith and Eörs Szathmáry which they called The Major Transitions in Evolution. On the 1999 edition of the list they included:
Replicating molecules: change to populations of molecules in protocells
Independent replicators leading to chromosomes
RNA as gene and enzyme change to DNA genes and protein enzymes
Bacterial cells (prokaryotes) leading to cells (eukaryotes) with nuclei and organelles
Asexual clones leading to sexual populations
Single-celled organisms leading to fungi, plants and animals
Solitary individuals leading to colonies with non-reproducing castes (termites, ants & bees)
Primate societies leading to human societies with language
Some of these topics had been discussed before.
Numbers one to six on the list are events which are of huge importance, but about which we know relatively little. All occurred before (and mostly very much before) the fossil record started, or at least before the Phanerozoic eon.
Numbers seven and eight on the list are of a different kind from the first six, and have generally not been considered by the other authors. Number four is of a type which is not covered by traditional evolutionary theory: the origin of eukaryotic cells is probably due to symbiosis between prokaryotes. This is a kind of evolution which must be a rare event.
The Cambrian radiation example
The Cambrian explosion or Cambrian radiation was the relatively rapid appeara
Document 1:::
Chronology (from Latin chronologia, from Ancient Greek χρόνος (chrónos) "time" and -λογία (-logia)) is the science of arranging events in their order of occurrence in time. Consider, for example, the use of a timeline or sequence of events. It is also "the determination of the actual temporal sequence of past events".
Chronology is a part of periodization. It is also a part of the discipline of history including earth history, the earth sciences, and study of the geologic time scale.
Related fields
Chronology is the science of locating historical events in time. It relies upon chronometry, which is also known as timekeeping, and historiography, which examines the writing of history and the use of historical methods. Radiocarbon dating estimates the age of formerly living things by measuring the proportion of carbon-14 isotope in their carbon content. Dendrochronology estimates the age of trees by correlation of the various growth rings in their wood to known year-by-year reference sequences in the region to reflect year-to-year climatic variation. Dendrochronology is used in turn as a calibration reference for radiocarbon dating curves.
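As a rough illustration of how a radiocarbon age follows from the measured carbon-14 fraction, here is a minimal Python sketch; it assumes the modern carbon-14 half-life of about 5,730 years, and the 25% retained fraction is a hypothetical measurement used only for the example.

import math

C14_HALF_LIFE_YEARS = 5730.0  # modern (Cambridge) half-life of carbon-14

def radiocarbon_age(remaining_fraction):
    # N(t) = N0 * (1/2) ** (t / half_life)  =>  t = -half_life * log2(N / N0)
    return -C14_HALF_LIFE_YEARS * math.log2(remaining_fraction)

# Hypothetical sample retaining 25% of its original carbon-14:
print(round(radiocarbon_age(0.25)))  # 11460 years, i.e. two half-lives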
Calendar and era
The familiar terms calendar and era (within the meaning of a coherent system of numbered calendar years) concern two complementary fundamental concepts of chronology. For example, for roughly eight centuries the calendar belonging to the Christian era, an era brought into use in the 8th century by Bede, was the Julian calendar; after the year 1582 it was the Gregorian calendar. Dionysius Exiguus (about the year 500) was the founder of that era, which is nowadays the most widespread dating system on Earth. An epoch is the date (usually a year) when an era begins.
Ab Urbe condita era
Ab Urbe condita is Latin for "from the founding of the City (Rome)", traditionally set in 753 BC. It was used to identify the Roman year by a few Roman historians. Modern historians use it much more frequently than the Romans themselves did; the
Document 2:::
The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata. That is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events.
Correlating the rock record
At a given locality on the Earth's surface, the rock column provides a cross section of the natural history of the area during the time covered by the age of the rocks. This is sometimes called the rock history, and it gives a window into the natural history of the location spanning many geological time units such as ages, epochs, or in some cases even multiple major geologic periods for that geographic region. The geologic record is nowhere entirely complete: geologic forces that in one age produce a low-lying region accumulating deposits much like a layer cake may in the next age uplift that region, so that the same area is instead weathered and torn down by chemistry, wind, temperature, and water. That is, in a given location the geologic record can be, and quite often is, interrupted as the ancient local environment is converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go deep, thoroughly support the law of superposition.
However using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale
Document 3:::
The mid-24th century BCE climate anomaly is the period, between 2354 and 2345 BCE, of consistently reduced annual temperatures reconstructed from consecutive, abnormally narrow Irish oak tree rings. These rings indicate catastrophically reduced growth in Irish trees during that period. This range of dates also matches the transition from the Neolithic to the Bronze Age in the British Isles and a period of widespread societal collapse in the Near East. It has been proposed that this anomalous downturn in the climate might have been the result of comet debris suspended in the atmosphere.
In 1997, Marie-Agnès Courty proposed that a natural disaster involving wildfires, floods, and an air blast of over 100 megatons power occurred about 2350 BCE. This proposal is based on unusual "dust" deposits which have been reported from archaeological sites in Mesopotamia that are a few hundred kilometres from each other. In later papers, Courty subsequently revised the date of this event from 2350 BCE to 2000 BCE.
Based only upon the analysis of satellite imagery, Umm al Binni lake in southern Iraq has been suggested as a possible extraterrestrial impact crater and possible cause of this natural disaster. More recent sources have argued for a formation of the lake through the subsidence of the underlying basement fault blocks. Baillie and McAneney's 2015 discussion of this climate anomaly discusses its abnormally narrow Irish tree rings and the anomalous dust deposits of Courty. However, this paper lacks any mention of Umm al Binni lake.
See also
4.2-kiloyear event, c. 2200 BCE
Great Flood (China), c. 2300 BCE
Document 4:::
The Geologic Calendar is a scale in which the geological timespan of the Earth is mapped onto a calendrical year; that is to say, the Earth's first day falls on a geologic January 1 at precisely midnight, and the present moment falls at midnight on December 31. On this calendar, the inferred appearance of the first living single-celled organisms, prokaryotes, occurred on a geologic February 25 between about 12:30 pm and 1:07 pm, dinosaurs first appeared on December 13, the first flowering plants on December 22, and the first primates on December 28 at about 9:43 pm. The first anatomically modern humans did not arrive until around 11:48 p.m. on New Year's Eve, and all of human history since the end of the last ice age occurred in the last 82.2 seconds before midnight of the new year.
A variation of this analogy instead compresses Earth's 4.6-billion-year history into a single day: the Earth forms at midnight, and the present is again midnight at the end of that day. On this scale the first life on Earth would appear at 4:00 am, dinosaurs at 10:00 pm, the first flowers at 10:30 pm, the first primates at 11:30 pm, and modern humans only in the last two seconds before midnight.
A third analogy, created for their book The Life and Death of Planet Earth by University of Washington paleontologist Peter Ward and astronomer Donald Brownlee, both famous for their Rare Earth hypothesis, alters the calendar so that it includes the Earth's future up to the Sun's death in about 5 billion years. As a result, each month represents one billion of the roughly 12 billion years of the Earth's full lifespan. According to this calendar, the first life appears in January and the first animals in May, with the present day falling on May 18; although the Sun will not destroy the Earth until December 31, all animals will have died out by the end of May.
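The dates quoted in these analogies follow from a simple linear rescaling of time. The sketch below, assuming an Earth age of 4.6 billion years and a 365-day model year, reproduces the kind of figures given above; the 12,000-year value for the end of the last ice age is an approximate illustrative input.

EARTH_AGE_YEARS = 4.6e9              # assumed age of the Earth
SECONDS_PER_YEAR = 365 * 24 * 3600   # 31,536,000 seconds in the model year

def seconds_before_midnight(years_before_present):
    # Events are placed linearly: the older the event, the earlier in the calendar year.
    return years_before_present / EARTH_AGE_YEARS * SECONDS_PER_YEAR

print(EARTH_AGE_YEARS / SECONDS_PER_YEAR)   # ~145.9 real years per calendar second
print(seconds_before_midnight(12_000))      # ~82.3 s: end of the last ice age, just before midnight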
Use of the geologic calendar as a conceptual aid dates back at least to the mid 20th century, for example in Richard Carrington's 1956
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Scientists use what scale to illustrate the order in which events on earth have happened?
A. ecological succession
B. fossil record
C. geologic time scale
D. cataclysmic time scale
Answer:
|
|
sciq-11108
|
multiple_choice
|
Which is the fourth planet from the sun?
|
[
"earth",
"jupiter",
"mars",
"mars"
] |
C
|
Relavent Documents:
Document 0:::
This is a list of potentially habitable exoplanets. The list is mostly based on estimates of habitability by the Habitable Exoplanets Catalog (HEC), and data from the NASA Exoplanet Archive. The HEC is maintained by the Planetary Habitability Laboratory at the University of Puerto Rico at Arecibo. There is also a speculative list being developed of superhabitable planets.
Surface planetary habitability is thought to require orbiting at the right distance from the host star for liquid surface water to be present, in addition to various geophysical and geodynamical aspects, atmospheric density, radiation type and intensity, and the host star's plasma environment.
List
This is a list of exoplanets within the circumstellar habitable zone that are under 10 Earth masses and smaller than 2.5 Earth radii, and thus have a chance of being rocky. Note that inclusion on this list does not guarantee habitability, and in particular the larger planets are unlikely to have a rocky composition. Earth is included for comparison.
Note that mass and radius values prefixed with "~" have not been measured, but are estimated from a mass-radius relationship.
Previous candidates
Some exoplanet candidates detected by radial velocity that were originally thought to be potentially habitable were later found to most likely be artifacts of stellar activity. These include Gliese 581 d & g, Gliese 667 Ce & f, Gliese 682 b & c, Kapteyn b, and Gliese 832 c.
HD 85512 b was initially estimated to be potentially habitable, but updated models for the boundaries of the habitable zone placed the planet interior to the HZ, and it is now considered non-habitable. Kepler-69c has gone through a similar process; though initially estimated to be potentially habitable, it was quickly realized that the planet is more likely to be similar to Venus, and is thus no longer considered habitable. Several other planets, such as Gliese 180 b, also appear to be examples of planets once considered potentially habit
Document 1:::
The Somerset Space Walk is a sculpture trail model of the Solar System, located in Somerset, England. The model uses the towpath of the Bridgwater and Taunton Canal to display a model of the Sun and its planets in their proportionally correct sizes and distances apart. Unusually for a Solar System model, there are two sets of planets, so that the diameter of the orbits is represented.
Aware of the inadequacies of printed pictures of the Solar System, the inventor Pip Youngman designed the Space Walk as a way of challenging people's perceptions of space and experiencing the vastness of the Solar System.
The model is built to a scale of 1:530,000,000, meaning that one millimetre on the model equates to 530 kilometres. The Sun is sited at Higher Maunsel Lock, and one set of planets is installed in each direction along the canal towards Taunton and Bridgwater; the distance between the Sun and each model of Pluto being . For less hardy walkers, the inner planets are within of the Sun, and near to the Maunsel Canal Centre (and tea shop) at Lower Maunsel Lock, where a more detailed leaflet about the model is available.
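To see how the 1:530,000,000 scale translates real orbital distances into towpath distances, here is a minimal Python sketch; the mean Sun-to-planet distances used (about 149.6 million km for Earth and roughly 5.9 billion km for Pluto) are approximate values supplied only for illustration.

SCALE = 530_000_000  # one metre on the towpath represents 530,000,000 metres in reality

def model_distance_metres(real_distance_km):
    # Convert a real orbital distance to its scaled distance from the model Sun.
    return real_distance_km * 1000.0 / SCALE

print(round(model_distance_metres(149.6e6)))          # Earth: about 282 m from the model Sun
print(round(model_distance_metres(5.9e9) / 1000, 1))  # Pluto: about 11.1 km along the towpath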
The Space Walk was opened on 9 August 1997 by British astronomer Heather Couper. In 2007, a project team from Somerset County Council refurbished some of the models.
Background
The Walk is a joint venture between the Taunton Solar Model Group and British Waterways, with support from Somerset County Council, Taunton Deane Borough Council and the Somerset Waterways Development Trust. The Taunton Solar Model Group comprised Pip Youngman, Trevor Hill – a local physics teacher who had been awarded the title of "Institute of Physics (IOP) Physics Teacher of the Year" – and David Applegate who, during his time as Mayor of Taunton, had expressed a wish to see some kind of science initiative in the area. Youngman came up with the idea for the Space Walk, and Hill assisted by calculating the respective positions and sizes of the planets.
Funding for the projec
Document 2:::
This article is a list of notable unsolved problems in astronomy. Some of these problems are theoretical, meaning that existing theories may be incapable of explaining certain observed phenomena or experimental results. Others are experimental, meaning that experiments necessary to test proposed theory or investigate a phenomenon in greater detail have not yet been performed. Some pertain to unique events or occurrences that have not repeated themselves and whose causes remain unclear.
Planetary astronomy
Our solar system
Orbiting bodies and rotation:
Are there any non-dwarf planets beyond Neptune?
Why do extreme trans-Neptunian objects have elongated orbits?
Rotation rate of Saturn:
Why does the magnetosphere of Saturn rotate at a rate close to that at which the planet's clouds rotate?
What is the rotation rate of Saturn's deep interior?
Satellite geomorphology:
What is the origin of the chain of high mountains that closely follows the equator of Saturn's moon, Iapetus?
Are the mountains the remnant of hot and fast-rotating young Iapetus?
Are the mountains the result of material (either from the rings of Saturn or its own ring) that over time collected upon the surface?
Extra-solar
How common are Solar System-like planetary systems? Some observed planetary systems contain Super-Earths and Hot Jupiters that orbit very close to their stars. Systems with Jupiter-like planets in Jupiter-like orbits appear to be rare. There are several possibilities why Jupiter-like orbits are rare, including that data is lacking or the grand tack hypothesis.
Stellar astronomy and astrophysics
Solar cycle:
How does the Sun generate its periodically reversing large-scale magnetic field?
How do other Sol-like stars generate their magnetic fields, and what are the similarities and differences between stellar activity cycles and that of the Sun?
What caused the Maunder Minimum and other grand minima, and how does the solar cycle recover from a minimum state?
Coronal heat
Document 3:::
This is a list of most likely gravitationally rounded objects of the Solar System, which are objects that have a rounded, ellipsoidal shape due to their own gravity (but are not necessarily in hydrostatic equilibrium). Apart from the Sun itself, these objects qualify as planets according to common geophysical definitions of that term. The sizes of these objects range over three orders of magnitude in radius, from planetary-mass objects like dwarf planets and some moons to the planets and the Sun. This list does not include small Solar System bodies, but it does include a sample of possible planetary-mass objects whose shapes have yet to be determined. The Sun's orbital characteristics are listed in relation to the Galactic Center, while all other objects are listed in order of their distance from the Sun.
Star
The Sun is a G-type main-sequence star. It contains almost 99.9% of all the mass in the Solar System.
Planets
In 2006, the International Astronomical Union (IAU) defined a planet as a body in orbit around the Sun that was large enough to have achieved hydrostatic equilibrium and to have "cleared the neighbourhood around its orbit". The practical meaning of "cleared the neighborhood" is that a planet is comparatively massive enough for its gravitation to control the orbits of all objects in its vicinity. In practice, the term "hydrostatic equilibrium" is interpreted loosely. Mercury is round but not actually in hydrostatic equilibrium, but it is universally regarded as a planet nonetheless.
According to the IAU's explicit count, there are eight planets in the Solar System; four terrestrial planets (Mercury, Venus, Earth, and Mars) and four giant planets, which can be divided further into two gas giants (Jupiter and Saturn) and two ice giants (Uranus and Neptune). When excluding the Sun, the four giant planets account for more than 99% of the mass of the Solar System.
Dwarf planets
Dwarf planets are bodies orbiting the Sun that are massive and warm eno
Document 4:::
An Earth analog, also called an Earth analogue, Earth twin, or second Earth, is a planet or moon with environmental conditions similar to those found on Earth. The term Earth-like planet is also used, but this term may refer to any terrestrial planet.
The possibility is of particular interest to astrobiologists and astronomers under reasoning that the more similar a planet is to Earth, the more likely it is to be capable of sustaining complex extraterrestrial life. As such, it has long been speculated and the subject expressed in science, philosophy, science fiction and popular culture. Advocates of space colonization and space and survival have long sought an Earth analog for settlement. In the far future, humans might artificially produce an Earth analog by terraforming.
Before the scientific search for and study of extrasolar planets, the possibility was argued through philosophy and science fiction. Philosophers have suggested that the size of the universe is such that a near-identical planet must exist somewhere. The mediocrity principle suggests that planets like Earth should be common in the Universe, while the Rare Earth hypothesis suggests that they are extremely rare. The thousands of exoplanetary star systems discovered so far are profoundly different from the Solar System, supporting the Rare Earth Hypothesis.
On 4 November 2013, astronomers reported, based on Kepler space mission data, that there could be as many as 40 billion Earth-sized planets orbiting in the habitable zones of Sun-like stars and red dwarf stars within the Milky Way Galaxy. The nearest such planet could be expected to be within 12 light-years of the Earth, statistically. In September 2020, astronomers identified 24 superhabitable planets (planets better than Earth) contenders, from among more than 4000 confirmed exoplanets, based on astrophysical parameters, as well as the natural history of known life forms on the Earth.
On 11 January 2023, NASA scientists reported the
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which is the fourth planet from the sun?
A. earth
B. jupiter
C. mars
D. mars
Answer:
|
|
sciq-11664
|
multiple_choice
|
How are weather patterns formed?
|
[
"carbon dioxide",
"the moon's gravitational pull",
"pollution from planes",
"uneven heating of the atmosphere"
] |
D
|
Relavent Documents:
Document 0:::
The following outline is provided as an overview of and topical guide to the field of Meteorology.
Meteorology – the interdisciplinary scientific study of the Earth's atmosphere, with the primary focus being to understand, explain, and forecast weather events. Meteorology is applied to and employed by a wide variety of fields, including the military, energy production, transport, agriculture, and construction.
Essence of meteorology
Meteorology
Climate – the average and variations of weather in a region over long periods of time.
Meteorology – the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting (in contrast with climatology).
Weather – the set of all the phenomena in a given atmosphere at a given time.
Branches of meteorology
Microscale meteorology – the study of atmospheric phenomena about 1 km or less, smaller than mesoscale, including small and generally fleeting cloud "puffs" and other small cloud features
Mesoscale meteorology – the study of weather systems from about 5 kilometers to several hundred kilometers across, smaller than synoptic-scale systems but larger than microscale and storm-scale cumulus systems, such as sea breezes, squall lines, and mesoscale convective complexes
Synoptic scale meteorology – the study of weather systems at horizontal length scales of the order of 1000 kilometres (about 620 miles) or more
Methods in meteorology
Surface weather analysis – a special type of weather map that provides a view of weather elements over a geographical area at a specified time based on information from ground-based weather stations
Weather forecasting
Weather forecasting – the application of science and technology to predict the state of the atmosphere for a future time and a given location
Data collection
Pilot Reports
Weather maps
Weather map
Surface weather analysis
Forecasts and reporting of
Atmospheric pressure
Dew point
High-pressure area
Ice
Black ice
Frost
Low-pressure area
Precipitation
Document 1:::
This is a list of meteorology topics. The terms relate to meteorology, the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting. (see also: List of meteorological phenomena)
A
advection
aeroacoustics
aerobiology
aerography (meteorology)
aerology
air parcel (in meteorology)
air quality index (AQI)
airshed (in meteorology)
American Geophysical Union (AGU)
American Meteorological Society (AMS)
anabatic wind
anemometer
annular hurricane
anticyclone (in meteorology)
apparent wind
Atlantic Oceanographic and Meteorological Laboratory (AOML)
Atlantic hurricane season
atmometer
atmosphere
Atmospheric Model Intercomparison Project (AMIP)
Atmospheric Radiation Measurement (ARM)
(atmospheric boundary layer [ABL]) planetary boundary layer (PBL)
atmospheric chemistry
atmospheric circulation
atmospheric convection
atmospheric dispersion modeling
atmospheric electricity
atmospheric icing
atmospheric physics
atmospheric pressure
atmospheric sciences
atmospheric stratification
atmospheric thermodynamics
atmospheric window (see under Threats)
B
ball lightning
balloon (aircraft)
baroclinity
barotropity
barometer ("to measure atmospheric pressure")
berg wind
biometeorology
blizzard
bomb (meteorology)
buoyancy
Bureau of Meteorology (in Australia)
C
Canada Weather Extremes
Canadian Hurricane Centre (CHC)
Cape Verde-type hurricane
capping inversion (in meteorology) (see "severe thunderstorms" in paragraph 5)
carbon cycle
carbon fixation
carbon flux
carbon monoxide (see under Atmospheric presence)
ceiling balloon ("to determine the height of the base of clouds above ground level")
ceilometer ("to determine the height of a cloud base")
celestial coordinate system
celestial equator
celestial horizon (rational horizon)
celestial navigation (astronavigation)
celestial pole
Celsius
Center for Analysis and Prediction of Storms (CAPS) (in Oklahoma in the US)
Center for the Study o
Document 2:::
In atmospheric science, an atmospheric model is a mathematical model constructed around the full set of primitive, dynamical equations which govern atmospheric motions. It can supplement these equations with parameterizations for turbulent diffusion, radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, the kinematic effects of terrain, and convection. Most atmospheric models are numerical, i.e. they discretize equations of motion. They can predict microscale phenomena such as tornadoes and boundary layer eddies, sub-microscale turbulent flow over buildings, as well as synoptic and global flows. The horizontal domain of a model is either global, covering the entire Earth, or regional (limited-area), covering only part of the Earth. The different types of models run are thermotropic, barotropic, hydrostatic, and nonhydrostatic. Some of the model types make assumptions about the atmosphere which lengthens the time steps used and increases computational speed.
Forecasts are computed using mathematical equations for the physics and dynamics of the atmosphere. These equations are nonlinear and are impossible to solve exactly. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods. Global models often use spectral methods for the horizontal dimensions and finite-difference methods for the vertical dimension, while regional models usually use finite-difference methods in all three dimensions. For specific locations, model output statistics use climate information, output from numerical weather prediction, and current surface weather observations to develop statistical relationships which account for model bias and resolution issues.
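As a toy illustration of what a finite-difference method does (not any operational model), the sketch below advects a quantity along a one-dimensional periodic domain with the first-order upwind scheme; the wind speed, grid spacing, and time step are arbitrary illustrative values.

def upwind_advection(q, u=10.0, dx=100_000.0, dt=600.0, steps=6):
    # Courant number; the explicit upwind scheme is stable only for c <= 1.
    c = u * dt / dx
    assert 0.0 < c <= 1.0
    for _ in range(steps):
        # q[i - 1] wraps around at i = 0, giving a periodic domain.
        q = [q[i] - c * (q[i] - q[i - 1]) for i in range(len(q))]
    return q

print(upwind_advection([0.0, 0.0, 1.0, 0.0, 0.0]))  # the bump drifts downwind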
Types
The main assumption made by the thermotropic model is that while the magnitude of the thermal wind may change, its direction does not change with respect to height, and thus the baroclinicity in the atmosphere can be simulated usi
Document 3:::
Numerical weather prediction (NWP) uses mathematical models of the atmosphere and oceans to predict the weather based on current weather conditions. Though first attempted in the 1920s, it was not until the advent of computer simulation in the 1950s that numerical weather predictions produced realistic results. A number of global and regional forecast models are run in different countries worldwide, using current weather observations relayed from radiosondes, weather satellites and other observing systems as inputs.
Mathematical models based on the same physical principles can be used to generate either short-term weather forecasts or longer-term climate predictions; the latter are widely applied for understanding and projecting climate change. The improvements made to regional models have allowed significant improvements in tropical cyclone track and air quality forecasts; however, atmospheric models perform poorly at handling processes that occur in a relatively constricted area, such as wildfires.
Manipulating the vast datasets and performing the complex calculations necessary to modern numerical weather prediction requires some of the most powerful supercomputers in the world. Even with the increasing power of supercomputers, the forecast skill of numerical weather models extends to only about six days. Factors affecting the accuracy of numerical predictions include the density and quality of observations used as input to the forecasts, along with deficiencies in the numerical models themselves. Post-processing techniques such as model output statistics (MOS) have been developed to improve the handling of errors in numerical predictions.
A more fundamental problem lies in the chaotic nature of the partial differential equations that describe the atmosphere. It is impossible to solve these equations exactly, and small errors grow with time (doubling about every five days). Present understanding is that this chaotic behavior limits accurate forecasts to ab
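A back-of-the-envelope sketch of this error growth, assuming only the roughly five-day doubling time quoted above:

DOUBLING_TIME_DAYS = 5.0  # assumed doubling time of small initial-condition errors

def error_amplification(days):
    return 2.0 ** (days / DOUBLING_TIME_DAYS)

for d in (5, 10, 15):
    print(f"after {d} days the initial error has grown about {error_amplification(d):.0f}x")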
Document 4:::
Ensemble forecasting is a method used in or within numerical weather prediction. Instead of making a single forecast of the most likely weather, a set (or ensemble) of forecasts is produced. This set of forecasts aims to give an indication of the range of possible future states of the atmosphere. Ensemble forecasting is a form of Monte Carlo analysis. The multiple simulations are conducted to account for the two usual sources of uncertainty in forecast models: (1) the errors introduced by the use of imperfect initial conditions, amplified by the chaotic nature of the evolution equations of the atmosphere, which is often referred to as sensitive dependence on initial conditions; and (2) errors introduced because of imperfections in the model formulation, such as the approximate mathematical methods to solve the equations. Ideally, the verified future atmospheric state should fall within the predicted ensemble spread, and the amount of spread should be related to the uncertainty (error) of the forecast. In general, this approach can be used to make probabilistic forecasts of any dynamical system, and not just for weather prediction.
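As a toy illustration of the Monte Carlo idea behind ensembles (not any operational system), the sketch below integrates the Lorenz-63 equations, a standard chaotic test model, from twenty slightly perturbed initial states and prints how the spread of the ensemble grows with time; the perturbation size and all parameter values are arbitrary textbook choices.

import random

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz-63 system (classic parameter values).
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

random.seed(0)
n_members, n_steps = 20, 1000
# Each member starts from the same state plus a tiny random perturbation,
# mimicking imperfect knowledge of the initial conditions.
members = [(1.0 + random.gauss(0, 1e-4), 1.0, 1.0) for _ in range(n_members)]

for step in range(1, n_steps + 1):
    members = [lorenz_step(*m) for m in members]
    if step % 250 == 0:
        xs = [m[0] for m in members]
        mean = sum(xs) / n_members
        spread = (sum((v - mean) ** 2 for v in xs) / n_members) ** 0.5
        print(f"step {step}: ensemble mean x = {mean:+.2f}, spread = {spread:.3f}")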
Instances
Today ensemble predictions are commonly made at most of the major operational weather prediction facilities worldwide, including:
National Centers for Environmental Prediction (NCEP of the US)
European Centre for Medium-Range Weather Forecasts (ECMWF)
United Kingdom Met Office
Météo-France
Environment Canada
Japan Meteorological Agency
Bureau of Meteorology (Australia)
China Meteorological Administration (CMA)
Korea Meteorological Administration
CPTEC (Brazil)
Ministry of Earth Sciences (IMD, IITM & NCMRWF) (India)
Experimental ensemble forecasts are made at a number of universities, such as the University of Washington, and ensemble forecasts in the US are also generated by the US Navy and Air Force. There are various ways of viewing the data such as spaghetti plots, ensemble means or Postage Stamps where a number o
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How are weather patterns formed?
A. carbon dioxide
B. the moon's gravitational pull
C. pollution from planes
D. uneven heating of the atmosphere
Answer:
|
|
sciq-7309
|
multiple_choice
|
Mammals can feed at various levels of food chains, as herbivores, insectivores, carnivores and what else?
|
[
"nematodes",
"vegetarians",
"omnivores",
"blood eaters"
] |
C
|
Relavent Documents:
Document 0:::
Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food.
Classification of consumer types
The standard categorization
Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores; omnivores are both meat eaters and plant eaters. An extensive classification of consumer categories based on a list of feeding behaviors exists.
The Getz categorization
Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken-down plant and animal) matter. It also distinguishes consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage.
In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology. Thus a bestivore, such as a cat, preys on live animal
Document 1:::
The trophic level of an organism is the position it occupies in a food web. A food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finish with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a food "web". Ecological communities with higher biodiversity form more complex trophic paths.
The word trophic derives from the Greek τροφή (trophē) referring to food or nourishment.
History
The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman).
Overview
The three basic ways in which organisms get food are as producers, consumers, and decomposers.
Producers (autotrophs) are typically plants or algae. Plants and algae do not usually eat other organisms, but pull nutrients from the soil or the ocean and manufacture their own food using photosynthesis. For this reason, they are called primary producers. In this way, it is energy from the sun that usually powers the base of the food chain. An exception occurs in deep-sea hydrothermal ecosystems, where there is no sunlight. Here primary producers manufacture food through a process called chemosynthesis.
Consumers (heterotrophs) are species that cannot manufacture their own food and need to consume other organisms. Animals that eat primary producers (like plants) are called herbivores. Animals that eat other animals are called carnivores, and animals that eat both plants and other animals are called omnivores.
Decomposers (detritivores) break down dead plant and animal material and wastes and release it again as energy and nutrients into
Document 2:::
Feeding is the process by which organisms, typically animals, obtain food. Terminology often uses either the suffixes -vore, -vory, or -vorous from Latin vorare, meaning "to devour", or -phage, -phagy, or -phagous from Greek φαγεῖν (), meaning "to eat".
Evolutionary history
The evolution of feeding is varied with some feeding strategies evolving several times in independent lineages. In terrestrial vertebrates, the earliest forms were large amphibious piscivores 400 million years ago. While amphibians continued to feed on fish and later insects, reptiles began exploring two new food types, other tetrapods (carnivory), and later, plants (herbivory). Carnivory was a natural transition from insectivory for medium and large tetrapods, requiring minimal adaptation (in contrast, a complex set of adaptations was necessary for feeding on highly fibrous plant materials).
Evolutionary adaptations
The specialization of organisms towards specific food sources is one of the major causes of evolution of form and function, such as:
mouth parts and teeth, such as in whales, vampire bats, leeches, mosquitos, predatory animals such as felines and fishes, etc.
distinct forms of beaks in birds, such as in hawks, woodpeckers, pelicans, hummingbirds, parrots, kingfishers, etc.
specialized claws and other appendages, for apprehending or killing (including fingers in primates)
changes in body colour for facilitating camouflage, disguise, setting up traps for preys, etc.
changes in the digestive system, such as the system of stomachs of herbivores, commensalism and symbiosis
Classification
By mode of ingestion
There are many modes of feeding that animals exhibit, including:
Filter feeding: obtaining nutrients from particles suspended in water
Deposit feeding: obtaining nutrients from particles suspended in soil
Fluid feeding: obtaining nutrients by consuming other organisms' fluids
Bulk feeding: obtaining nutrients by eating all of an organism.
Ram feeding and suction feeding: in
Document 3:::
A graminivore is a herbivorous animal that feeds primarily on grass, specifically "true" grasses, plants of the family Poaceae (also known as Graminae). Graminivory is a form of grazing. These herbivorous animals have digestive systems that are adapted to digest large amounts of cellulose, which is abundant in fibrous plant matter and more difficult to break down for many other animals. As such, they have specialized enzymes to aid in digestion and in some cases symbiotic bacteria that live in their digestive track and "assist" with the digestive process through fermentation as the matter travels through the intestines.
Horses, cattle, geese, guinea pigs, hippopotamuses, capybara and giant pandas are examples of vertebrate graminivores. Some carnivorous vertebrates, such as dogs and cats, are known to eat grass occasionally. Grass consumption in dogs can be a way to rid their intestinal tract of parasites that may be threatening to the carnivore's health. Various invertebrates also have graminivorous diets. Many grasshoppers, such as individuals from the family Acrididae, have diets consisting primarily of plants from the family Poaceae. Although humans are not graminivores, we do get much of our nutrition from a type of grass called cereal, and especially from the fruit of that grass which is called grain.
Graminivores generally exhibit a preference for which species of grass they consume. For example, according to a study of North American bison feeding on shortgrass plains in north-eastern Colorado, the bison consumed a total of thirty-six different species of plant. Of those thirty-six, five grass species were favoured and consumed the most pervasively. The average consumption of these five species comprised about 80% of their diet. A few of these species include Aristida longiseta, Muhlenbergia species, and Bouteloua gracilis.
Document 4:::
An omnivore () is an animal that has the ability to eat and survive on both plant and animal matter. Obtaining energy and nutrients from plant and animal matter, omnivores digest carbohydrates, protein, fat, and fiber, and metabolize the nutrients and energy of the sources absorbed. Often, they have the ability to incorporate food sources such as algae, fungi, and bacteria into their diet.
Omnivores come from diverse backgrounds that often independently evolved sophisticated consumption capabilities. For instance, dogs evolved from primarily carnivorous organisms (Carnivora) while pigs evolved from primarily herbivorous organisms (Artiodactyla). Despite this, physical characteristics such as tooth morphology may be reliable indicators of diet in mammals, with such morphological adaptation having been observed in bears.
The variety of different animals that are classified as omnivores can be placed into further sub-categories depending on their feeding behaviors. Frugivores include cassowaries, orangutans and grey parrots; insectivores include swallows and pink fairy armadillos; granivores include large ground finches and mice.
All of these animals are omnivores, yet still fall into special niches in terms of feeding behavior and preferred foods. Being omnivores gives these animals more food security in stressful times or makes possible living in less consistent environments.
Etymology and definitions
The word omnivore derives from Latin omnis 'all' and vora, from vorare 'to eat or devour', having been coined by the French and later adopted by the English in the 1800s. Traditionally the definition for omnivory was entirely behavioral by means of simply "including both animal and vegetable tissue in the diet." In more recent times, with the advent of advanced technological capabilities in fields like gastroenterology, biologists have formulated a standardized variation of omnivore used for labeling a species' actual ability to obtain energy and nutrients from ma
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Mammals can feed at various levels of food chains, as herbivores, insectivores, carnivores and what else?
A. nematodes
B. vegetarians
C. omnivores
D. blood eaters
Answer:
|
|
sciq-9128
|
multiple_choice
|
What is the failure of replicated chromosomes during meiosis to separate known as?
|
[
"separation",
"nondisjunction",
"regression",
"pollenation"
] |
B
|
Relavent Documents:
Document 0:::
Chromosome segregation is the process in eukaryotes by which two sister chromatids formed as a consequence of DNA replication, or paired homologous chromosomes, separate from each other and migrate to opposite poles of the nucleus. This segregation process occurs during both mitosis and meiosis. Chromosome segregation also occurs in prokaryotes. However, in contrast to eukaryotic chromosome segregation, replication and segregation are not temporally separated. Instead segregation occurs progressively following replication.
Mitotic chromatid segregation
During mitosis chromosome segregation occurs routinely as a step in cell division (see mitosis diagram). As indicated in the mitosis diagram, mitosis is preceded by a round of DNA replication, so that each chromosome forms two copies called chromatids. These chromatids separate to opposite poles, a process facilitated by a protein complex referred to as cohesin. Upon proper segregation, a complete set of chromatids ends up in each of two nuclei, and when cell division is completed, each DNA copy previously referred to as a chromatid is now called a chromosome.
Meiotic chromosome and chromatid segregation
Chromosome segregation occurs at two separate stages during meiosis called anaphase I and anaphase II (see meiosis diagram). In a diploid cell there are two sets of homologous chromosomes of different parental origin (e.g. a paternal and a maternal set). During the phase of meiosis labeled “interphase s” in the meiosis diagram there is a round of DNA replication, so that each of the chromosomes initially present is now composed of two copies called chromatids. These chromosomes (paired chromatids) then pair with the homologous chromosome (also paired chromatids) present in the same nucleus (see prophase I in the meiosis diagram). The process of alignment of paired homologous chromosomes is called synapsis (see Synapsis). During synapsis, genetic recombination usually occurs. Some of the recombination even
Document 1:::
Nondisjunction is the failure of homologous chromosomes or sister chromatids to separate properly during cell division (mitosis/meiosis). There are three forms of nondisjunction: failure of a pair of homologous chromosomes to separate in meiosis I, failure of sister chromatids to separate during meiosis II, and failure of sister chromatids to separate during mitosis. Nondisjunction results in daughter cells with abnormal chromosome numbers (aneuploidy).
Calvin Bridges and Thomas Hunt Morgan are credited with discovering nondisjunction in Drosophila melanogaster sex chromosomes in the spring of 1910, while working in the Zoological Laboratory of Columbia University.
Types
In general, nondisjunction can occur in any form of cell division that involves ordered distribution of chromosomal material. Higher animals have three distinct forms of such cell divisions: Meiosis I and meiosis II are specialized forms of cell division occurring during generation of gametes (eggs and sperm) for sexual reproduction, mitosis is the form of cell division used by all other cells of the body.
Meiosis II
Ovulated eggs become arrested in metaphase II until fertilization triggers the second meiotic division. Similar to the segregation events of mitosis, the pairs of sister chromatids resulting from the separation of bivalents in meiosis I are further separated in anaphase of meiosis II. In oocytes, one sister chromatid is segregated into the second polar body, while the other stays inside the egg. During spermatogenesis, each meiotic division is symmetric such that each primary spermatocyte gives rise to 2 secondary spermatocytes after meiosis I, and eventually 4 spermatids after meiosis II. Meiosis II-nondisjunction may also result in aneuploidy syndromes, but only to a much smaller extent than do segregation failures in meiosis I.
Mitosis
Division of somatic cells through mitosis is preceded by replication of the genetic material in S phase. As a result, each chromosome consists
Document 2:::
A kinetochore (, ) is a disc-shaped protein structure associated with duplicated chromatids in eukaryotic cells where the spindle fibers attach during cell division to pull sister chromatids apart. The kinetochore assembles on the centromere and links the chromosome to microtubule polymers from the mitotic spindle during mitosis and meiosis. The term kinetochore was first used in a footnote in a 1934 Cytology book by Lester W. Sharp and commonly accepted in 1936. Sharp's footnote reads: "The convenient term kinetochore (= movement place) has been suggested to the author by J. A. Moore", likely referring to John Alexander Moore who had joined Columbia University as a freshman in 1932.
Monocentric organisms, including vertebrates, fungi, and most plants, have a single centromeric region on each chromosome which assembles a single, localized kinetochore. Holocentric organisms, such as nematodes and some plants, assemble a kinetochore along the entire length of a chromosome.
Kinetochores start, control, and supervise the striking movements of chromosomes during cell division. During mitosis, which occurs after the amount of DNA is doubled in each chromosome (while maintaining the same number of chromosomes) in S phase, two sister chromatids are held together by a centromere. Each chromatid has its own kinetochore, which face in opposite directions and attach to opposite poles of the mitotic spindle apparatus. Following the transition from metaphase to anaphase, the sister chromatids separate from each other, and the individual kinetochores on each chromatid drive their movement to the spindle poles that will define the two new daughter cells. The kinetochore is therefore essential for the chromosome segregation that is classically associated with mitosis and meiosis.
Structure of Kinetochore
The kinetochore contains two regions:
an inner kinetochore, which is tightly associated with the centromere DNA and assembled in a specialized form of chromatin that persists t
Document 3:::
Interkinesis or interphase II is a period of rest that cells of some species enter during meiosis between meiosis I and meiosis II. No DNA replication occurs during interkinesis; however, replication does occur during the interphase I stage of meiosis (see meiosis I). During interkinesis, the spindle of the first meiotic division disassembles and the microtubules reassemble into two new spindles for the second meiotic division. Interkinesis follows telophase I; however, many plants skip telophase I and interkinesis, going immediately into prophase II. Each chromosome still consists of two chromatids. The number of other organelles may also increase during this stage.
Document 4:::
Meiotic drive is a type of intragenomic conflict, whereby one or more loci within a genome will affect a manipulation of the meiotic process in such a way as to favor the transmission of one or more alleles over another, regardless of its phenotypic expression. More simply, meiotic drive is when one copy of a gene is passed on to offspring more than the expected 50% of the time. According to Buckler et al., "Meiotic drive is the subversion of meiosis so that particular genes are preferentially transmitted to the progeny. Meiotic drive generally causes the preferential segregation of small regions of the genome".
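Because meiotic drive shows up statistically as a departure from the Mendelian 50% expectation, a simple goodness-of-fit check on progeny counts illustrates how distortion is detected. The sketch below is generic and illustrative; the counts, the function name, and the use of a plain chi-square test are assumptions for the example, not details from the text above.

```python
# Sketch: testing for transmission ratio distortion (meiotic drive).
# A driving allele is inherited by more than the Mendelian-expected 50%
# of progeny; a chi-square goodness-of-fit test against a 1:1 ratio is a
# simple way to flag such a departure. Counts below are illustrative only.

def chi_square_1to1(count_allele_a: int, count_allele_b: int) -> float:
    """Chi-square statistic for a 1:1 expected segregation ratio."""
    total = count_allele_a + count_allele_b
    expected = total / 2.0
    return sum((obs - expected) ** 2 / expected
               for obs in (count_allele_a, count_allele_b))

# Hypothetical cross: 640 progeny carry the candidate driver allele, 360 do not.
chi2 = chi_square_1to1(640, 360)
# Critical value for 1 degree of freedom at alpha = 0.05 is 3.841.
print(f"chi-square = {chi2:.1f}; "
      f"{'distorted' if chi2 > 3.841 else 'consistent with 1:1'} segregation")
```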
Meiotic drive in plants
The first report of meiotic drive came from Marcus Rhoades, who in 1942 observed a violation of Mendelian segregation ratios for the R locus - a gene controlling the production of the purple pigment anthocyanin in maize kernels - in a maize line carrying abnormal chromosome 10 (Ab10). Ab10 differs from the normal chromosome 10 by the presence of a 150-base-pair heterochromatic region called a 'knob', which functions as a centromere during division (hence called a 'neocentromere') and moves to the spindle poles faster than the centromeres during meiosis I and II. The mechanism for this was later found to involve the activity of a kinesin-14 gene called Kinesin driver (Kindr). Kindr protein is a functional minus-end directed motor, displaying quicker minus-end directed motility than an endogenous kinesin-14, such as Kin11. As a result, Kindr outperforms the endogenous kinesins, pulling the 150 bp knobs to the poles faster than the centromeres and causing Ab10 to be preferentially inherited during meiosis.
Meiotic drive in animals
The unequal inheritance of gametes has been observed since the 1950s, in contrast to Gregor Mendel's First and Second Laws (the law of segregation and the law of independent assortment), which dictate that there is a random chance of each allele being passed on to offspring. Examples of selfish drive genes in ani
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the failure of replicated chromosomes during meiosis to separate known as?
A. separation
B. nondisjunction
C. regression
D. pollenation
Answer:
|
|
sciq-9756
|
multiple_choice
|
Unstable isotopes give off particles and energy as what?
|
[
"radiation",
"electricity",
"ultraviolet light",
"radioactivity"
] |
D
|
Relavent Documents:
Document 0:::
Ionizing radiation (or ionising radiation), including nuclear radiation, consists of subatomic particles or electromagnetic waves that have sufficient energy to ionize atoms or molecules by detaching electrons from them. Some particles can travel up to 99% of the speed of light, and the electromagnetic waves are on the high-energy portion of the electromagnetic spectrum.
Gamma rays, X-rays, and the higher energy ultraviolet part of the electromagnetic spectrum are ionizing radiation, whereas the lower energy ultraviolet, visible light, nearly all types of laser light, infrared, microwaves, and radio waves are non-ionizing radiation. The boundary between ionizing and non-ionizing radiation in the ultraviolet area cannot be sharply defined, as different molecules and atoms ionize at different energies. Depending on the atoms or molecules involved, the threshold energy for ionizing radiation lies between about 10 electronvolts (eV) and 33 eV.
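Since the ionizing threshold above is quoted in electronvolts, converting a photon's wavelength to energy with E = hc/lambda shows roughly where the ultraviolet band crosses it. This is a generic back-of-the-envelope sketch; the specific wavelengths chosen are illustrative and not values taken from the text.

```python
# Sketch: photon energy E = h*c / wavelength, expressed in electronvolts,
# to see roughly where the ultraviolet band crosses the ~10 eV threshold
# mentioned above. Constants are standard physical values; wavelengths are
# illustrative examples.

PLANCK_H = 6.62607015e-34       # J*s
LIGHT_C = 2.99792458e8          # m/s
EV_IN_JOULES = 1.602176634e-19  # J per eV

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a photon of the given wavelength, in eV."""
    wavelength_m = wavelength_nm * 1e-9
    return PLANCK_H * LIGHT_C / wavelength_m / EV_IN_JOULES

for label, wl in [("visible red, 700 nm", 700.0),
                  ("UV-C, 250 nm", 250.0),
                  ("extreme UV, 100 nm", 100.0),
                  ("soft X-ray, 1 nm", 1.0)]:
    print(f"{label}: {photon_energy_ev(wl):.1f} eV")
```

Only the extreme-UV and X-ray examples come out above the ~10 eV mark, which matches the statement that the ionizing/non-ionizing boundary falls within the ultraviolet band.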
Typical ionizing subatomic particles include alpha particles, beta particles, and neutrons. These are typically created by radioactive decay, and almost all are energetic enough to ionize. There are also secondary cosmic particles produced after cosmic rays interact with Earth's atmosphere, including muons, mesons, and positrons. Cosmic rays may also produce radioisotopes on Earth (for example, carbon-14), which in turn decay and emit ionizing radiation. Cosmic rays and the decay of radioactive isotopes are the primary sources of natural ionizing radiation on Earth, contributing to background radiation. Ionizing radiation is also generated artificially by X-ray tubes, particle accelerators, and nuclear fission.
Ionizing radiation is not immediately detectable by human senses, so instruments such as Geiger counters are used to detect and measure it. However, very high energy particles can produce visible effects on both organic and inorganic matter (e.g. water lighting in Cherenkov radiation) or humans (e.g. acute radiation syndrome).
Ionizing radiation is used in a wide variety of field
Document 1:::
HZE ions are the high-energy nuclei component of galactic cosmic rays (GCRs) which have an electric charge of +3 or greater – that is, they must be the nuclei of elements heavier than hydrogen or helium.
The abbreviation "HZE" comes from high (H) atomic number (Z) and energy (E). HZE ions include the nuclei of all elements heavier than hydrogen (which has a +1 charge) and helium (which has a +2 charge). Each HZE ion consists of a nucleus with no orbiting electrons, meaning that the charge on the ion is the same as the atomic number of the nucleus. Their source is not certain, but is thought likely to be supernova explosions.
Composition and abundance
HZE ions are rare compared to protons, for example, composing only 1% of GCRs versus 85% for protons. HZE ions, like other GCRs, travel near the speed of light.
In addition to the HZE ions from cosmic sources, HZE ions are produced by the Sun. During solar flares and other solar storms, HZE ions are sometimes produced in small amounts, along with the more typical protons, but their energy level is substantially smaller than HZE ions from cosmic rays.
Space radiation is composed mostly of high-energy protons, helium nuclei, and high-Z high-energy ions (HZE ions). The ionization patterns in molecules, cells, tissues, and the resulting biological harm are distinct from high-energy photon radiation: X-rays and gamma rays, which produce low-linear energy transfer (low-LET) radiation from secondary electrons.
While in space, astronauts are exposed to protons, helium nuclei, and HZE ions, as well as secondary radiation from nuclear reactions from spacecraft parts or tissue.
Prominent HZE ions include carbon (C), oxygen (O), magnesium (Mg), silicon (Si), and iron (Fe).
GCRs typically originate from outside the Solar System and within the Milky Way galaxy, but those from outside of the Milky Way consist mostly of highly energetic protons with a small component of HZE ions. GCR energy spectra pea
Document 2:::
Radiation chemistry is a subdivision of nuclear chemistry which studies the chemical effects of ionizing radiation on matter. This is quite different from radiochemistry, as no radioactivity needs to be present in the material which is being chemically changed by the radiation. An example is the conversion of water into hydrogen gas and hydrogen peroxide.
Radiation interactions with matter
As ionizing radiation moves through matter its energy is deposited through interactions with the electrons of the absorber. The result of an interaction between the radiation and the absorbing species is removal of an electron from an atom or molecular bond to form radicals and excited species. The radical species then proceed to react with each other or with other molecules in their vicinity. It is the reactions of the radical species that are responsible for the changes observed following irradiation of a chemical system.
Charged radiation species (α and β particles) interact through Coulombic forces between the charges of the electrons in the absorbing medium and the charged radiation particle. These interactions occur continuously along the path of the incident particle until the kinetic energy of the particle is sufficiently depleted. Uncharged species (γ photons, x-rays) undergo a single event per photon, totally consuming the energy of the photon and leading to the ejection of an electron from a single atom. Electrons with sufficient energy proceed to interact with the absorbing medium identically to β radiation.
An important factor that distinguishes different radiation types from one another is the linear energy transfer (LET), which is the rate at which the radiation loses energy with distance traveled through the absorber. Low LET species are usually low mass, either photons or electron mass species (β particles, positrons) and interact sparsely along their path through the absorber, leading to isolated regions of reactive radical species. High LET species are usuall
Document 3:::
Radiochemistry is the chemistry of radioactive materials, where radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (often within radiochemistry the absence of radioactivity leads to a substance being described as being inactive as the isotopes are stable). Much of radiochemistry deals with the use of radioactivity to study ordinary chemical reactions. This is very different from radiation chemistry where the radiation levels are kept too low to influence the chemistry.
Radiochemistry includes the study of both natural and man-made radioisotopes.
Main decay modes
All radioisotopes are unstable isotopes of elements that undergo nuclear decay and emit some form of radiation. The radiation emitted can be of several types, including alpha, beta, and gamma radiation, proton and neutron emission, along with neutrino and antiparticle emission decay pathways.
1. α (alpha) radiation—the emission of an alpha particle (which contains 2 protons and 2 neutrons) from an atomic nucleus. When this occurs, the atom's atomic mass will decrease by 4 units and the atomic number will decrease by 2.
2. β (beta) radiation—the transmutation of a neutron into an electron and a proton. After this happens, the electron is emitted from the nucleus into the electron cloud.
3. γ (gamma) radiation—the emission of electromagnetic energy (such as gamma rays) from the nucleus of an atom. This usually occurs during alpha or beta radioactive decay.
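The three decay modes above change the mass number A and atomic number Z in fixed ways, which can be captured in a few lines. The sketch below is a minimal illustration; the function name and the example nuclides are chosen for the example and are not taken from the text.

```python
# Sketch: mass-number (A) and atomic-number (Z) bookkeeping for the decay
# modes listed above. Alpha emission removes 2 protons + 2 neutrons;
# beta-minus decay converts a neutron to a proton; gamma emission changes
# neither A nor Z. Nuclides below are illustrative examples.

def daughter(z: int, a: int, mode: str) -> tuple[int, int]:
    """Return (Z, A) of the daughter nuclide after one decay event."""
    if mode == "alpha":
        return z - 2, a - 4
    if mode == "beta-minus":
        return z + 1, a
    if mode == "gamma":
        return z, a
    raise ValueError(f"unknown decay mode: {mode}")

print(daughter(92, 238, "alpha"))        # U-238  -> (90, 234), i.e. Th-234
print(daughter(6, 14, "beta-minus"))     # C-14   -> (7, 14),  i.e. N-14
print(daughter(27, 60, "gamma"))         # Co-60m -> (27, 60), no change in Z or A
```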
These three types of radiation can be distinguished by their difference in penetrating power.
Alpha particles, which are equivalent to helium nuclei, can be stopped quite easily by a few centimetres of air or a piece of paper. Beta particles, which are electrons, can be cut off by an aluminium sheet just a few millimetres thick. Gamma radiation, consisting of massless, chargeless high-energy photons, is the most penetrating of the three and requires an appreciable amount of heavy-metal radiation shielding (usually lead or
Document 4:::
The isotopic resonance hypothesis (IsoRes) postulates that certain isotopic compositions of chemical elements affect kinetics of chemical reactions involving molecules built of these elements. The isotopic compositions for which this effect is predicted are called resonance isotopic compositions.
Fundamentally, the IsoRes hypothesis relies on a postulate that less complex systems exhibit faster kinetics than equivalent but more complex systems. Furthermore, system's complexity is affected by its symmetry (more symmetric systems are simpler), and symmetry (in general meaning) of reactants may be affected by their isotopic composition.
The term “resonance” relates to the use of this term in nuclear physics, where peaks in the dependence of a reaction cross section upon energy are called “resonances”. Similarly, a sharp increase (or decrease) in the reaction kinetics as a function of the average isotopic mass of a certain element is called here a resonance.
History of formulation
The concept of isotopes developed from radioactivity. The pioneering work on radioactivity by Henri Becquerel, Marie Curie and Pierre Curie was awarded the Nobel Prize in Physics in 1903. Later, Frederick Soddy took radioactivity from physics to chemistry and shed light on the nature of isotopes, work that earned him the Nobel Prize in Chemistry in 1921 (awarded in 1922).
The question of stable, non-radioactive isotopes was more difficult and required the development by Francis Aston of a high-resolution mass spectrograph, which allowed the separation of different stable isotopes of one and the same element. Francis Aston was awarded the 1922 Nobel Prize in Chemistry for this achievement. With his enunciation of the whole-number rule, Aston solved a problem that had riddled chemistry for a hundred years. The understanding was that different isotopes of a given element would be chemically identical.
It was discovered in the 1930s by Harold Urey in 1932 (awarded the Nobel Pri
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Unstable isotopes give off particles and energy as what?
A. radiation
B. electricity
C. ultraviolet light
D. radioactivity
Answer:
|
|
sciq-5547
|
multiple_choice
|
Where do skeletal muscles usually attach?
|
[
"end of bones",
"to cartilage",
"to dendrites",
"to the spine"
] |
A
|
Relavent Documents:
Document 0:::
Myology is the study of the muscular system, including the study of the structure, function and diseases of muscle. The muscular system consists of skeletal muscle, which contracts to move or position parts of the body (e.g., the bones that articulate at joints), smooth and cardiac muscle that propels, expels or controls the flow of fluids and contained substance.
See also
Myotomy
Oral myology
Document 1:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
Document 2:::
The rectus femoris muscle is one of the four quadriceps muscles of the human body. The others are the vastus medialis, the vastus intermedius (deep to the rectus femoris), and the vastus lateralis. All four parts of the quadriceps muscle attach to the patella (knee cap) by the quadriceps tendon.
The rectus femoris is situated in the middle of the front of the thigh; it is fusiform in shape, and its superficial fibers are arranged in a bipenniform manner, the deep fibers running straight down to the deep aponeurosis. Its functions are to flex the thigh at the hip joint and to extend the leg at the knee joint.
Structure
It arises by two tendons: one, the anterior or straight, from the anterior inferior iliac spine; the other, the posterior or reflected, from a groove above the rim of the acetabulum.
The two unite at an acute angle and spread into an aponeurosis that is prolonged downward on the anterior surface of the muscle, and from this the muscular fibers arise.
The muscle ends in a broad and thick aponeurosis that occupies the lower two-thirds of its posterior surface, and, gradually becoming narrowed into a flattened tendon, is inserted into the base of the patella.
Nerve supply
The neurons for voluntary thigh contraction originate near the summit of the medial side of the precentral gyrus (the primary motor area of the brain). These neurons send a nerve signal that is carried by the corticospinal tract down the brainstem and spinal cord. The signal starts with the upper motor neurons carrying the signal from the precentral gyrus down through the internal capsule, through the cerebral peduncle, and into the medulla. In the medullary pyramid, the corticospinal tract decussates and becomes the lateral corticospinal tract. The nerve signal will continue down the lateral corticospinal tract until it reaches spinal nerve L4. At this point, the nerve signal will synapse from the upper motor neurons to the lower motor neurons. The signal will travel through the
Document 3:::
The lumbar trunks are formed by the union of the efferent vessels from the lateral aortic lymph nodes.
They receive the lymph from the lower limbs, from the walls and viscera of the pelvis, from the kidneys and suprarenal glands and the deep lymphatics of the greater part of the abdominal wall.
Ultimately, the lumbar trunks empty into the cisterna chyli, a dilatation at the beginning of the thoracic duct.
Document 4:::
The list below describes such skeletal movements as normally are possible in particular joints of the human body. Other animals have different degrees of movement at their respective joints; this is because of differences in positions of muscles and because structures peculiar to the bodies of humans and other species block motions unsuited to their anatomies.
Arm and shoulder
Shoulder
elbow
The major muscles involved in retraction include the rhomboid major muscle, rhomboid minor muscle and trapezius muscle, whereas the major muscles involved in protraction include the serratus anterior and pectoralis minor muscles.
Sternoclavicular and acromioclavicular joints
Elbow
Wrist and fingers
Movements of the fingers
Movements of the thumb
Neck
Spine
Lower limb
Knees
Feet
The muscles tibialis anterior and tibialis posterior invert the foot. Some sources also state that the triceps surae and extensor hallucis longus invert. Inversion occurs at the subtalar joint and transverse tarsal joint.
Eversion of the foot occurs at the subtalar joint. The muscles involved in this include Fibularis longus and fibularis brevis, which are innervated by the superficial fibular nerve. Some sources also state that the fibularis tertius everts.
Dorsiflexion of the foot: The muscles involved include those of the Anterior compartment of leg, specifically tibialis anterior muscle, extensor hallucis longus muscle, extensor digitorum longus muscle, and peroneus tertius. The range of motion for dorsiflexion indicated in the literature varies from 12.2 to 18 degrees. Foot drop is a condition, that occurs when dorsiflexion is difficult for an individual who is walking.
Plantarflexion of the foot: Primary muscles for plantar flexion are situated in the Posterior compartment of leg, namely the superficial Gastrocnemius, Soleus and Plantaris (only weak participation), and the deep muscles Flexor hallucis longus, Flexor digitorum longus and Tibialis posterior. Muscles in the Lateral co
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where do skeletal muscles usually attach?
A. end of bones
B. to cartilage
C. to dendrites
D. to the spine
Answer:
|
|
sciq-11519
|
multiple_choice
|
What process refers to the changes that occur in populations of living organisms over time?
|
[
"adaptation",
"variation",
"evolution",
"spontaneous mutation"
] |
C
|
Relavent Documents:
Document 0:::
In biology, evolution is the process of change in all forms of life over generations, and evolutionary biology is the study of how evolution occurs. Biological populations evolve through genetic changes that correspond to changes in the organisms' observable traits. Genetic changes include mutations, which are caused by damage or replication errors in organisms' DNA. As the genetic variation of a population drifts randomly over generations, natural selection gradually leads traits to become more or less common based on the relative reproductive success of organisms with those traits.
The age of the Earth is about 4.5 billion years. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago. Evolution does not attempt to explain the origin of life (covered instead by abiogenesis), but it does explain how early lifeforms evolved into the complex ecosystem that we see today. Based on the similarities between all present-day organisms, all life on Earth is assumed to have originated through common descent from a last universal ancestor from which all known species have diverged through the process of evolution.
All individuals have hereditary material in the form of genes received from their parents, which they pass on to any offspring. Among offspring there are variations of genes due to the introduction of new genes via random changes called mutations or via reshuffling of existing genes during sexual reproduction. The offspring differs from the parent in minor random ways. If those differences are helpful, the offspring is more likely to survive and reproduce. This means that more offspring in the next generation will have that helpful difference and individuals will not have equal chances of reproductive success. In this way, traits that result in organisms being better adapted to their living conditions become more common in descendant populations. These differences accumulate resulting in changes within the population. This proce
Document 1:::
Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology.
The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis.
Subfields
Evolution is the central unifying concept in biology. Biology can be divided into various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution
Document 2:::
Biological processes are those processes that are vital for an organism to live, and that shape its capacities for interacting with its environment. Biological processes are made of many chemical reactions or other events that are involved in the persistence and transformation of life forms. Metabolism and homeostasis are examples.
Biological processes within an organism can also work as bioindicators. Scientists are able to look at an individual's biological processes to monitor the effects of environmental changes.
Regulation of biological processes occurs when any process is modulated in its frequency, rate or extent. Biological processes are regulated by many means; examples include the control of gene expression, protein modification or interaction with a protein or substrate molecule.
Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature
Organization: being structurally composed of one or more cells – the basic units of life
Metabolism: transformation of energy by converting chemicals and energy into cellular components (anabolism) and decomposing organic matter (catabolism). Living things require energy to maintain internal organization (homeostasis) and to produce the other phenomena associated with life.
Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size in all of its parts, rather than simply accumulating matter.
Response to stimuli: a response can take many forms, from the contraction of a unicellular organism to external chemicals, to complex reactions involving all the senses of multicellular organisms. A response is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism), and chemotaxis.
Reproduction: the ability to produce new individual organisms, either asexually from a single parent organism or sexually from two parent organisms.
Interaction between organisms: the processes
Document 3:::
Evolution & Development is a peer-reviewed scientific journal publishing material at the interface of evolutionary and developmental biology. Within evolutionary developmental biology, it has the aim of aiding a broader synthesis of biological thought in these two areas. Its scope ranges from paleontology and population biology, to developmental and molecular biology, including mathematics and the history and philosophy of science.
It was established in 1999 by five biologists: Wallace Arthur, Sean B. Carroll, Michael Coates, Rudolf Raff, and Gregory Wray. It is published by Wiley-Blackwell on behalf of the Society for Integrative and Comparative Biology.
Document 4:::
Developmental systems theory (DST) is an overarching theoretical perspective on biological development, heredity, and evolution. It emphasizes the shared contributions of genes, environment, and epigenetic factors on developmental processes. DST, unlike conventional scientific theories, is not directly used to help make predictions for testing experimental results; instead, it is seen as a collection of philosophical, psychological, and scientific models of development and evolution. As a whole, these models argue the inadequacy of the modern evolutionary synthesis on the roles of genes and natural selection as the principal explanation of living structures. Developmental systems theory embraces a large range of positions that expand biological explanations of organismal development and hold modern evolutionary theory as a misconception of the nature of living processes.
Overview
All versions of developmental systems theory espouse the view that:
All biological processes (including both evolution and development) operate by continually assembling new structures.
Each such structure transcends the structures from which it arose and has its own systematic characteristics, information, functions and laws.
Conversely, each such structure is ultimately irreducible to any lower (or higher) level of structure, and can be described and explained only on its own terms.
Furthermore, the major processes through which life as a whole operates, including evolution, heredity and the development of particular organisms, can only be accounted for by incorporating many more layers of structure and process than the conventional concepts of ‘gene’ and ‘environment’ normally allow for.
In other words, although it does not claim that all structures are equal, development systems theory is fundamentally opposed to reductionism of all kinds. In short, developmental systems theory intends to formulate a perspective which does not presume the causal (or ontological) priority of any p
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What process refers to the changes that occur in populations of living organisms over time?
A. adaptation
B. variation
C. evolution
D. spontaneous mutation
Answer:
|
|
sciq-11615
|
multiple_choice
|
What type of cartilage contains no collagen?
|
[
"joint cartilage",
"shark cartilage",
"lamprey cartilage",
"fetal cartilage"
] |
C
|
Relavent Documents:
Document 0:::
Chondrin is a bluish-white, gelatin-like substance, a protein-carbohydrate complex that can be obtained by boiling cartilage in water.
Cartilage is a connective tissue that contains cells embedded in a matrix of chondrin. Chondrin is made up of two proteins, chondroalbuminoid and chondromucoid.
See also
Chondroitin
External links
Charles Darwin - Insectivorous Plants Page 56
Animal products
Edible thickening agents
Proteins
Document 1:::
The territorial matrix is the tissue surrounding chondrocytes (the cells that produce cartilage) in cartilage. Once mature, chondrocytes are relatively inactive and produce few new cartilage components. The territorial matrix is basophilic (it attracts basic compounds and dyes because of its anionic, acidic nature); because of its higher concentration of proteoglycans, it stains darker when viewed under a microscope. In other words, it stains metachromatically (the dyes change color upon binding) due to the presence of proteoglycans (compound molecules composed of proteins and sugars).
Document 2:::
Outline
h1.00: Cytology
h2.00: General histology
H2.00.01.0.00001: Stem cells
H2.00.02.0.00001: Epithelial tissue
H2.00.02.0.01001: Epithelial cell
H2.00.02.0.02001: Surface epithelium
H2.00.02.0.03001: Glandular epithelium
H2.00.03.0.00001: Connective and supportive tissues
H2.00.03.0.01001: Connective tissue cells
H2.00.03.0.02001: Extracellular matrix
H2.00.03.0.03001: Fibres of connective tissues
H2.00.03.1.00001: Connective tissue proper
H2.00.03.1.01001: Ligaments
H2.00.03.2.00001: Mucoid connective tissue; Gelatinous connective tissue
H2.00.03.3.00001: Reticular tissue
H2.00.03.4.00001: Adipose tissue
H2.00.03.5.00001: Cartilage tissue
H2.00.03.6.00001: Chondroid tissue
H2.00.03.7.00001: Bone tissue; Osseous tissue
H2.00.04.0.00001: Haemotolymphoid complex
H2.00.04.1.00001: Blood cells
H2.00.04.1.01001: Erythrocyte; Red blood cell
H2.00.04.1.02001: Leucocyte; White blood cell
H2.00.04.1.03001: Platelet; Thrombocyte
H2.00.04.2.00001: Plasma
H2.00.04.3.00001: Blood cell production
H2.00.04.4.00001: Postnatal sites of haematopoiesis
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
Document 3:::
Dense irregular connective tissue has fibers that are not arranged in parallel bundles as in dense regular connective tissue.
Dense irregular connective tissue consists of mostly collagen fibers. It has less ground substance than loose connective tissue. Fibroblasts are the predominant cell type, scattered sparsely across the tissue.
Function
This type of connective tissue is found mostly in the reticular layer (or deep layer) of the dermis. It is also in the sclera and in the deeper skin layers. Due to high portions of collagenous fibers, dense irregular connective tissue provides strength, making the skin resistant to tearing by stretching forces from different directions.
Dense irregular connective tissue also makes up submucosa of the digestive tract, lymph nodes, and some types of fascia. Other examples include periosteum and perichondrium of bones, and the tunica albuginea of testis. In the submucosa layer, the fiber bundles course in varying planes allowing the organ to resist excessive stretching and distension.
Document 4:::
Elastic cartilage, fibroelastic cartilage or yellow fibrocartilage is a type of cartilage present in the pinnae (auricles) of the ear, where it provides shape, as well as in the lateral region of the external auditory meatus, the medial part of the auditory canal, the Eustachian tube, the corniculate and cuneiform laryngeal cartilages, and the epiglottis. It contains elastic fiber networks and collagen type II fibers. The principal protein is elastin.
Structure
Elastic cartilage is histologically similar to hyaline cartilage but contains many yellow elastic fibers lying in a solid matrix. These fibers form bundles that appear dark under a microscope. The elastic fibers require special staining, since with haematoxylin and eosin (H&E) stain the tissue appears the same as hyaline cartilage. Verhoeff-Van Gieson stains are used (giving the elastic fibers a black color), but aldehyde fuchsin stains, Weigert's elastic stains, and orcein stains also work. These fibers give elastic cartilage great flexibility so that it is able to withstand repeated bending. As in hyaline cartilage, one or more chondrocytes lie within spaces (lacunae) between the fibres. Chondrocytes make up only 2% of the tissue's volume. The chondrocytes and extracellular matrix are contained within an outer layer called the perichondrium, a layer of dense irregular connective tissue that surrounds cartilage independently of any joint. Elastic cartilage is found in the epiglottis (part of the larynx) and the pinnae (the external ear flaps of many mammals). Elastin fibers stain dark purple/black with Verhoeff's stain.
The extracellular matrix contains elastin, fibrillin, glycoproteins, collagen types II, IX, X, and XI, and the proteoglycan aggrecan. The components of the extracellular matrix are produced by chondroblasts located within the edges of the perichondrium.
Elastic fibers within the extracellular matrix are made up of elastin proteins which co-polymerize with fibrillin forming fiber-li
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of cartilage contains no collagen?
A. joint cartilage
B. shark cartilage
C. lamprey cartilage
D. fetal cartilage
Answer:
|
|
ai2_arc-1097
|
multiple_choice
|
A teacher opens a can of food in the front of a classroom. Soon, all of the students in the classroom can smell the food. Which statement identifies a property of a gas that allows all of the students to smell the food?
|
[
"A gas has no mass.",
"A gas has a large mass.",
"A gas takes the shape of its container.",
"A gas keeps its shape when placed in a container."
] |
C
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
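As a worked check of the example above: a reversible (quasi-static) adiabatic expansion of an ideal gas obeys T * V^(gamma - 1) = constant, so the temperature falls as the volume grows, whereas a free (Joule) expansion of an ideal gas leaves the temperature unchanged, which is why the process details matter. The sketch below assumes the reversible case with a monatomic gas (gamma = 5/3); the numbers are illustrative and not from the text.

```python
# Sketch: reversible (isentropic) adiabatic expansion of an ideal gas obeys
# T * V**(gamma - 1) = constant, so doubling the volume lowers the
# temperature. gamma = 5/3 (monatomic gas) is assumed; values are illustrative.
# Note: a free (Joule) expansion, also adiabatic, would leave T unchanged.

def adiabatic_final_temperature(t1_kelvin: float, v1: float, v2: float,
                                gamma: float = 5.0 / 3.0) -> float:
    """Final temperature after a reversible adiabatic expansion from V1 to V2."""
    return t1_kelvin * (v1 / v2) ** (gamma - 1.0)

t2 = adiabatic_final_temperature(300.0, v1=1.0, v2=2.0)
print(f"300 K gas expanded reversibly to twice its volume cools to {t2:.0f} K")
# Output: about 189 K, i.e. the temperature decreases in the reversible case.
```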
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
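One common formalization, assumed here for illustration, treats a knowledge space as a family of subsets of the domain that contains the empty set and the full domain and is closed under union. The sketch below checks those properties for a made-up three-skill domain; the skill names and states are invented for the example and are not taken from the text.

```python
# Sketch: a knowledge space over a domain Q is commonly formalized as a
# family of "knowledge states" (subsets of Q) that contains the empty set
# and Q itself and is closed under union. The tiny example below checks
# those properties for an illustrative family of states.

from itertools import combinations

DOMAIN = frozenset({"a", "b", "c"})          # skills/concepts (illustrative)
STATES = {frozenset(), frozenset({"a"}), frozenset({"b"}),
          frozenset({"a", "b"}), frozenset({"a", "b", "c"})}

def is_knowledge_space(states: set, domain: frozenset) -> bool:
    """Check containment of the empty set and the domain, plus union closure."""
    if frozenset() not in states or domain not in states:
        return False
    # Closure under union: the union of any two states is again a state.
    return all(s | t in states for s, t in combinations(states, 2))

print(is_knowledge_space(STATES, DOMAIN))    # True for the family above
```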
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
Document 2:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 3:::
The volatilome (sometimes termed volatolome or volatome) contains all of the volatile metabolites as well as other volatile organic and inorganic compounds that originate from an organism, super-organism, or ecosystem. The atmosphere of a living planet could be regarded as its volatilome. While all volatile metabolites in the volatilome can be thought of as a subset of the metabolome, the volatilome also contains exogenously derived compounds that do not derive from metabolic processes (e.g. environmental contaminants), therefore the volatilome can be regarded as a distinct entity from the metabolome. The volatilome is a component of the 'aura' of molecules and microbes (the 'microbial cloud') that surrounds all organisms.
Odor profile
All volatile metabolites detectable by the human nose are termed an 'odour profile'. The association of altered odour profiles with disease states has long been documented in both eastern and western medicine, and recent advances in robotic sample introduction have increased interest in the volatilome as a source for biomarkers that can be used for non-invasive screening for disease. Volatile profiles can be collected via active or passive sampling and analysis is predominantly undertaken using gas chromatography–mass spectrometry, with a variety of direct or indirect sample introduction techniques.
See also
Electronic nose
Document 4:::
Machine olfaction is the automated simulation of the sense of smell. An emerging application in modern engineering, it involves the use of robots or other automated systems to analyze air-borne chemicals. Such an apparatus is often called an electronic nose or e-nose. The development of machine olfaction is complicated by the fact that e-nose devices to date have responded to a limited number of chemicals, whereas odors are produced by unique sets of (potentially numerous) odorant compounds. The technology, though still in the early stages of development, promises many applications, such as:
quality control in food processing, detection and diagnosis in medicine, detection of drugs, explosives and other dangerous or illegal substances, disaster response, and environmental monitoring.
One type of proposed machine olfaction technology is via gas sensor array instruments capable of detecting, identifying, and measuring volatile compounds. However, a critical element in the development of these instruments is pattern analysis, and the successful design of a pattern analysis system for machine olfaction requires a careful consideration of the various issues involved in processing multivariate data: signal-preprocessing, feature extraction, feature selection, classification, regression, clustering, and validation. Another challenge in current research on machine olfaction is the need to predict or estimate the sensor response to aroma mixtures. Some pattern recognition problems in machine olfaction such as odor classification and odor localization can be solved by using time series kernel methods.
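As a toy illustration of the classification step mentioned above, the sketch below applies a nearest-centroid rule to made-up four-sensor response vectors. It is not any particular e-nose product's pipeline; the training data, class labels, and choice of classifier are assumptions for the example.

```python
# Sketch: toy odor classification for a 4-sensor array using a
# nearest-centroid rule. Response vectors and class labels are made up;
# a real e-nose pipeline would add preprocessing, feature selection and
# validation, as described above.

import math

TRAINING = {
    "coffee":  [[0.9, 0.2, 0.5, 0.1], [0.8, 0.3, 0.6, 0.1]],
    "ethanol": [[0.2, 0.9, 0.1, 0.7], [0.3, 0.8, 0.2, 0.6]],
}

def centroid(vectors):
    """Component-wise mean of a list of equal-length response vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(sample):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    centroids = {label: centroid(vecs) for label, vecs in TRAINING.items()}
    return min(centroids,
               key=lambda label: math.dist(sample, centroids[label]))

print(classify([0.85, 0.25, 0.55, 0.1]))   # -> "coffee"
```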
Detection
There are three basic detection techniques using conductive-polymer odor sensors (polypyrrole), tin-oxide gas sensors, and quartz-crystal micro-balance sensors. They generally comprise (1) an array of sensors of some type, (2) the electronics to interrogate those sensors and produce digital signals, and (3) data processing and user interface software.
The entire s
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A teacher opens a can of food in the front of a classroom. Soon, all of the students in the classroom can smell the food. Which statement identifies a property of a gas that allows all of the students to smell the food?
A. A gas has no mass.
B. A gas has a large mass.
C. A gas takes the shape of its container.
D. A gas keeps its shape when placed in a container.
Answer:
|
|
sciq-4541
|
multiple_choice
|
What organs filter blood and form urine?
|
[
"the spleen",
"the appendix",
"the kidneys",
"the liver"
] |
C
|
Relavent Documents:
Document 0:::
The Starling principle holds that extracellular fluid movements between blood and tissues are determined by differences in hydrostatic pressure and colloid osmotic (oncotic) pressure between plasma inside microvessels and interstitial fluid outside them. The Starling Equation, proposed many years after the death of Starling, describes that relationship in mathematical form and can be applied to many biological and non-biological semipermeable membranes. The classic Starling principle and the equation that describes it have in recent years been revised and extended.
Every day around 8 litres of water (solvent) containing a variety of small molecules (solutes) leaves the blood stream of an adult human and perfuses the cells of the various body tissues. Interstitial fluid drains by afferent lymph vessels to one of the regional lymph node groups, where around 4 litres per day is reabsorbed to the blood stream. The remainder of the lymphatic fluid is rich in proteins and other large molecules and rejoins the blood stream via the thoracic duct which empties into the great veins close to the heart. Filtration from plasma to interstitial (or tissue) fluid occurs in microvascular capillaries and post-capillary venules. In most tissues the micro vessels are invested with a continuous internal surface layer that includes a fibre matrix now known as the endothelial glycocalyx whose interpolymer spaces function as a system of small pores, radius circa 5 nm. Where the endothelial glycocalyx overlies a gap in the junction molecules that bind endothelial cells together (inter endothelial cell cleft), the plasma ultrafiltrate may pass to the interstitial space, leaving larger molecules reflected back into the plasma.
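The relationship described above is conventionally written as Jv = Kf * [(Pc - Pi) - sigma * (pi_c - pi_i)], with hydrostatic pressures P and oncotic pressures pi on the capillary (c) and interstitial (i) sides. The sketch below plugs in illustrative textbook-style values, which are assumptions for the example rather than figures from the text.

```python
# Sketch: net fluid movement across a capillary wall via the classic Starling
# equation, Jv = Kf * ((Pc - Pi) - sigma * (pi_c - pi_i)).
# Pressure values below are illustrative (mmHg), not taken from the source text.

def starling_flux(kf: float, p_cap: float, p_int: float,
                  pi_cap: float, pi_int: float, sigma: float = 1.0) -> float:
    """Net filtration (positive = out of the capillary), in Kf units * mmHg."""
    hydrostatic = p_cap - p_int
    oncotic = pi_cap - pi_int
    return kf * (hydrostatic - sigma * oncotic)

# Arteriolar end of a systemic capillary (illustrative values):
jv = starling_flux(kf=1.0, p_cap=35.0, p_int=0.0, pi_cap=25.0, pi_int=3.0)
print(f"net driving pressure ~ {jv:.0f} mmHg outward" if jv > 0
      else f"net driving pressure ~ {abs(jv):.0f} mmHg inward")
```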
A small number of continuous capillaries are specialised to absorb solvent and solutes from interstitial fluid back into the blood stream through fenestrations in endothelial cells, but the volume of solvent absorbed every day is small.
Discontinuous capillaries as
Document 1:::
The excretory system is a passive biological system that removes excess, unnecessary materials from the body fluids of an organism, so as to help maintain internal chemical homeostasis and prevent damage to the body. The dual function of excretory systems is to eliminate the waste products of metabolism and to drain the body of used-up and broken-down components in liquid and gaseous form. In humans and other amniotes (mammals, birds and reptiles), most of these substances leave the body as urine and, to some degree, through exhalation; mammals also expel them through sweating.
Only the organs specifically used for the excretion are considered a part of the excretory system. In the narrow sense, the term refers to the urinary system. However, as excretion involves several functions that are only superficially related, it is not usually used in more formal classifications of anatomy or function.
As most healthy functioning organs produce metabolic and other wastes, the entire organism depends on the function of the system. The breakdown of one or more of these systems is a serious health condition, for example kidney failure.
Systems
Urinary system
The kidneys are large, bean-shaped organs which are present on each side of the vertebral column in the abdominal cavity. Humans have two kidneys and each kidney is supplied with blood from the renal artery. The kidneys remove from the blood the nitrogenous wastes such as urea, as well as salts and excess water, and excrete them in the form of urine. This is done with the help of millions of nephrons present in the kidney. The filtrated blood is carried away from the kidneys by the renal vein (or kidney vein). The urine from the kidney is collected by the ureter (or excretory tubes), one from each kidney, and is passed to the urinary bladder. The urinary bladder collects and stores the urine until urination. The urine collected in the bladder is passed into the external environment from the body through an opening called
Document 2:::
Urine is a liquid by-product of metabolism in humans and in many other animals. Urine flows from the kidneys through the ureters to the urinary bladder. Urination results in urine being excreted from the body through the urethra.
Cellular metabolism generates many by-products that are rich in nitrogen and must be cleared from the bloodstream, such as urea, uric acid, and creatinine. These by-products are expelled from the body during urination, which is the primary method for excreting water-soluble chemicals from the body. A urinalysis can detect nitrogenous wastes of the mammalian body.
Urine plays an important role in the earth's nitrogen cycle. In balanced ecosystems, urine fertilizes the soil and thus helps plants to grow. Therefore, urine can be used as a fertilizer. Some animals use it to mark their territories. Historically, aged or fermented urine (known as lant) was also used for gunpowder production, household cleaning, tanning of leather and dyeing of textiles.
Human urine and feces are collectively referred to as human waste or human excreta, and are managed via sanitation systems. Livestock urine and feces also require proper management if the livestock population density is high.
Physiology
Most animals have excretory systems for elimination of soluble toxic wastes. In humans, soluble wastes are excreted primarily by the urinary system and, to a lesser extent in terms of urea, removed by perspiration. The urinary system consists of the kidneys, ureters, urinary bladder, and urethra. The system produces urine by a process of filtration, reabsorption, and tubular secretion. The kidneys extract the soluble wastes from the bloodstream, as well as excess water, sugars, and a variety of other compounds. The resulting urine contains high concentrations of urea and other substances, including toxins. Urine flows from the kidneys through the ureter, bladder, and finally the urethra before passing from the body.
Duration
Research looking at the duration
Document 3:::
Renal pathology is a subspecialty of anatomic pathology that deals with the diagnosis and characterization of medical diseases (non-tumor) of the kidneys. In the academic setting, renal pathologists work closely with nephrologists and transplant surgeons, who typically obtain diagnostic specimens via percutaneous renal biopsy. The renal pathologist must synthesize findings from light microscopy, electron microscopy, and immunofluorescence to obtain a definitive diagnosis. Medical renal diseases may affect the glomerulus, the tubules and interstitium, the vessels, or a combination of these compartments.
External links
http://www.renalpathsoc.org/
Renal Pathology Tutorial written by J. Charles Jennette
Pathologist Guide
Anatomical pathology
Document 4:::
Drinking is the act of ingesting water or other liquids into the body through the mouth, proboscis, or elsewhere. Humans drink by swallowing, completed by peristalsis in the esophagus. The physiological processes of drinking vary widely among other animals.
Most animals drink water to maintain bodily hydration, although many can survive on the water gained from their food. Water is required for many physiological processes. Both inadequate and (less commonly) excessive water intake are associated with health problems.
Methods of drinking
In humans
When a liquid enters a human mouth, the swallowing process is completed by peristalsis which delivers the liquid through the esophagus to the stomach; much of the activity is abetted by gravity. The liquid may be poured from the hands or drinkware may be used as vessels. Drinking can also be performed by acts of inhalation, typically when imbibing hot liquids or drinking from a spoon. Infants employ a method of suction wherein the lips are pressed tight around a source, as in breastfeeding: a combination of breath and tongue movement creates a vacuum which draws in liquid.
In other land mammals
By necessity, terrestrial animals in captivity become accustomed to drinking water, but most free-roaming animals stay hydrated through the fluids and moisture in fresh food, and learn to actively seek foods with high fluid content. When conditions impel them to drink from bodies of water, the methods and motions differ greatly among species.
Cats, canines, and ruminants all lower the neck and lap in water with their powerful tongues. Cats and canines lap up water with the tongue in a spoon-like shape. Canines lap water by scooping it into their mouth with a tongue which has taken the shape of a ladle. However, with cats, only the tip of their tongue (which is smooth) touches the water, and then the cat quickly pulls its tongue back into its mouth which soon closes; this results in a column of liquid being pulled into the ca
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What organs filter blood and form urine?
A. the spleen
B. the appendix
C. the kidneys
D. the liver
Answer:
|
|
sciq-5251
|
multiple_choice
|
A type of what organism causes ergot, a disease that impacts crops directly and has more devastating effects on animals?
|
[
"bacteria",
"fungus",
"insects",
"virus"
] |
B
|
Relavent Documents:
Document 0:::
Plant disease forecasting is a management system used to predict the occurrence or change in severity of plant diseases. At the field scale, these systems are used by growers to make economic decisions about disease treatments for control. Often the systems ask the grower a series of questions about the susceptibility of the host crop, and incorporate current and forecast weather conditions to make a recommendation. Typically a recommendation is made about whether disease treatment is necessary or not. Usually treatment is a pesticide application.
Forecasting systems are based on assumptions about the pathogen's interactions with the host and environment, the disease triangle. The objective is to accurately predict when the three factors – host, environment, and pathogen – all interact in such a fashion that disease can occur and cause economic losses.
In most cases the host can be suitably defined as resistant or susceptible, and the presence of the pathogen may often be reasonably ascertained based on previous cropping history or perhaps survey data. The environment is usually the factor that controls whether disease develops or not. Environmental conditions may determine the presence of the pathogen in a particular season through their effects on processes such as overwintering. Environmental conditions also affect the ability of the pathogen to cause disease, e.g. a minimum leaf wetness duration is required for grey leaf spot of corn to occur. In these cases a disease forecasting system attempts to define when the environment will be conducive to disease development.
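A minimal, purely hypothetical rule of the kind described above might combine host susceptibility with forecast leaf-wetness duration and temperature. Every threshold in the sketch below is invented for illustration and does not correspond to any published forecasting model.

```python
# Sketch: a purely hypothetical threshold-style disease forecast of the kind
# described above. It recommends treatment only when a susceptible host
# coincides with weather conducive to infection. All thresholds are invented
# for illustration and do not correspond to any published model.

def recommend_treatment(host_susceptible: bool,
                        leaf_wetness_hours: float,
                        mean_temp_c: float) -> bool:
    """Return True if conditions warrant a protective treatment."""
    if not host_susceptible:
        return False
    conducive_wetness = leaf_wetness_hours >= 10.0        # hypothetical threshold
    conducive_temperature = 15.0 <= mean_temp_c <= 28.0   # hypothetical range
    return conducive_wetness and conducive_temperature

print(recommend_treatment(True, leaf_wetness_hours=14.0, mean_temp_c=22.0))  # True
print(recommend_treatment(True, leaf_wetness_hours=4.0, mean_temp_c=22.0))   # False
```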
Good disease forecasting systems must be reliable, simple, cost-effective and applicable to many diseases. As such they are normally only designed for diseases that are irregular enough to warrant a prediction system, rather than diseases that occur every year for which regular treatment should be employed. Forecasting systems can only be designed if there is also an understanding on the
Document 1:::
In agriculture, disease management is the practice of minimising disease in crops to increase quantity or quality of harvest yield.
Organisms that cause infectious disease in crops include fungi, oomycetes, bacteria, viruses, viroids, virus-like organisms, phytoplasmas, protozoa, nematodes and parasitic plants. Crops can also suffer from ectoparasites including insects, mites, snails, slugs, and vertebrate animals, but these are not considered diseases.
Controlling diseases can be achieved by resistance genes, fungicides, nematicides, quarantine, etc. Disease management can be a large part of farm operating costs.
See also
Corn smut
Great Irish Famine
blight
Document 2:::
Plant disease epidemiology is the study of disease in plant populations. Much like diseases of humans and other animals, plant diseases occur due to pathogens such as bacteria, viruses, fungi, oomycetes, nematodes, phytoplasmas, protozoa, and parasitic plants. Plant disease epidemiologists strive for an understanding of the cause and effects of disease and develop strategies to intervene in situations where crop losses may occur. Destructive and non-destructive methods are used to detect diseases in plants. Additionally, understanding the responses of the immune system in plants will further benefit and limit the loss of crops. Typically successful intervention will lead to a low enough level of disease to be acceptable, depending upon the value of the crop.
Plant disease epidemiology is often looked at from a multi-disciplinary approach, requiring biological, statistical, agronomic and ecological perspectives. Biology is necessary for understanding the pathogen and its life cycle. It is also necessary for understanding the physiology of the crop and how the pathogen is adversely affecting it. Agronomic practices often influence disease incidence for better or for worse. Ecological influences are numerous. Native species of plants may serve as reservoirs for pathogens that cause disease in crops. Statistical models are often applied in order to summarize and describe the complexity of plant disease epidemiology, so that disease processes can be more readily understood. For example, comparisons between patterns of disease progress for different diseases, cultivars, management strategies, or environmental settings can help in determining how plant diseases may best be managed. Policy can be influential in the occurrence of diseases, through actions such as restrictions on imports from sources where a disease occurs.
In 1963 J. E. van der Plank published "Plant Diseases: Epidemics and Control", a seminal work that created a theoretical framework for the study of
Document 3:::
A pathosystem is a subsystem of an ecosystem and is defined by the phenomenon of parasitism. A plant pathosystem is one in which the host species is a plant. The parasite is any species in which the individual spends a significant part of its lifespan inhabiting one host individual and obtaining nutrients from it. The parasite may thus be an insect, mite, nematode, parasitic Angiosperm, fungus, bacterium, mycoplasma, virus or viroid. Other consumers, however, such as mammalian and avian herbivores, which graze populations of plants, are normally considered to be outside the conceptual boundaries of the plant pathosystem.
A host has the property of resistance to a parasite. And a parasite has the property of parasitic ability on a host. Parasitism is the interaction of these two properties. The main feature of the pathosystem concept is that it concerns parasitism, and it is not concerned with the study of either the host or parasite on its own. Another feature of the pathosystem concept is that the parasitism is studied in terms of populations, at the higher levels and in ecologic aspects of the system. The pathosystem concept is also multidisciplinary. It brings together various crop science disciplines such as entomology, nematology, plant pathology, and plant breeding. It also applies to wild populations and to agricultural, horticultural, and forest crops, and to tropical, subtropical, as well as both subsistence and commercial farming.
In a wild plant pathosystem, both the host and the parasite populations exhibit genetic diversity and genetic flexibility. Conversely, in a crop pathosystem, the host population normally exhibits genetic uniformity and genetic inflexibility (i.e., clones, pure lines, hybrid varieties), and the parasite population assumes a comparable uniformity. This distinction means that a wild pathosystem can respond to selection pressures, but that a crop pathosystem does not. It also means that a system of locking (see below) can function
Document 4:::
Downy mildew refers to any of several types of oomycete microbes that are obligate parasites of plants. Downy mildews exclusively belong to the Peronosporaceae family. In commercial agriculture, they are a particular problem for growers of crucifers, grapes and vegetables that grow on vines. The prime example is Peronospora farinosa featured in NCBI-Taxonomy and HYP3. This pathogen does not produce survival structures in the northern states of the United States, and overwinters as live mildew colonies in Gulf Coast states. It progresses northward with cucurbit production each spring. Yield loss associated with downy mildew is most likely related to soft rots that occur after plant canopies collapse and sunburn occurs on fruit. Cucurbit downy mildew only affects leaves of cucurbit plants.
Symptoms
Initial symptoms include large, angular or blocky, yellow areas visible on the upper surface. They can also be distinguished by their sporadic yellow patch appearance. As lesions mature, they expand rapidly and turn brown. The under surface of infected leaves appears watersoaked. Upon closer inspection, a purple-brown mold (see arrow) becomes apparent. Small spores shaped like footballs can be observed among the mold with a 10x hand lens. As a result of numerous infectious sites, leaves might show a blighted appearance if the disease continues to spread. In disease-favorable conditions (cool nights with long dew periods), downy mildew will spread rapidly, destroying leaf tissue without affecting stems or petioles.
Treatment and management
Cultural options
Because the downy mildew pathogen does not overwinter in midwestern fields, crop rotations and tillage practices do not affect disease development. The pathogen tends to become established in late summer. Therefore, planting early season varieties may further reduce the already minor threat posed by downy mildew. When downy mildew does pose a threat, the removal and destruction of plants displaying symptoms is good pr
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A type of what organism causes ergot, a disease that impacts crops directly and has more devastating effects on animals?
A. bacteria
B. fungus
C. insects
D. virus
Answer:
|
|
sciq-2969
|
multiple_choice
|
Represented in equations by the letter "g", what pulls objects down to the earth's surface?
|
[
"energy",
"light",
"motion",
"gravity"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The gravity of Earth, denoted by , is the net acceleration that is imparted to objects due to the combined effect of gravitation (from mass distribution within Earth) and the centrifugal force (from the Earth's rotation).
It is a vector quantity, whose direction coincides with a plumb bob and whose strength or magnitude is given by the norm g = ‖g‖.
In SI units this acceleration is expressed in metres per second squared (in symbols, m/s2 or m·s−2) or equivalently in newtons per kilogram (N/kg or N·kg−1). Near Earth's surface, the acceleration due to gravity, accurate to 2 significant figures, is 9.8 m/s2. This means that, ignoring the effects of air resistance, the speed of an object falling freely will increase by about 9.8 metres per second every second. This quantity is sometimes referred to informally as little g (in contrast, the gravitational constant G is referred to as big G).
The precise strength of Earth's gravity varies with location. The agreed upon value for standard gravity is 9.80665 m/s2 by definition. This quantity is denoted variously as gn, ge (though this sometimes means the normal gravity at the equator, about 9.78 m/s2), g0, or simply g (which is also used for the variable local value).
The weight of an object on Earth's surface is the downwards force on that object, given by Newton's second law of motion, or F = ma (force equals mass times acceleration). Gravitational acceleration contributes to the total gravity acceleration, but other factors, such as the rotation of Earth, also contribute, and, therefore, affect the weight of the object. Gravity does not normally include the gravitational pull of the Moon and Sun, which are accounted for in terms of tidal effects.
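As a brief illustrative sketch (not part of the source passage), the two quantities just described, little g and the weight W = mg, can be computed directly; the mass and fall time below are arbitrary example inputs.

```python
# Illustrative sketch (not from the source passage): weight and free-fall speed
# using the standard gravity value quoted above. Mass and fall time are
# arbitrary example inputs.

STANDARD_GRAVITY = 9.80665  # m/s^2, the defined standard value of g


def weight(mass_kg: float, g: float = STANDARD_GRAVITY) -> float:
    """Downward force on an object at Earth's surface, W = m * g, in newtons."""
    return mass_kg * g


def free_fall_speed(seconds: float, g: float = STANDARD_GRAVITY) -> float:
    """Speed after falling from rest for `seconds`, ignoring air resistance: v = g * t."""
    return g * seconds


if __name__ == "__main__":
    print(f"Weight of a 70 kg person: {weight(70):.1f} N")                 # ~686.5 N
    print(f"Speed after 3 s of free fall: {free_fall_speed(3):.1f} m/s")   # ~29.4 m/s
```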
Variation in magnitude
A non-rotating perfect sphere of uniform mass density, or whose density varies solely with distance from the centre (spherical symmetry), would produce a gravitational field of uniform magnitude at all points on its surface. The Earth is rotating and is also not spherically symmetric; rather, it is slightly flatter at the poles while bulging at the Equator: an oblate spheroid.
Document 2:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject S can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about S is then a subset of Q; the set of
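The set-based model just described can be made concrete with a minimal sketch (hypothetical skills and states, not taken from the source): the domain Q is a finite set, each feasible knowledge state is a subset of Q, and one defining property commonly required of a knowledge space is that the family of states is closed under union.

```python
# Minimal sketch (hypothetical example, not from the source): a tiny knowledge
# space over a domain Q of skills, with feasible states stored as frozensets.

from itertools import combinations

Q = frozenset({"counting", "addition", "subtraction", "multiplication"})

# Hypothetical feasible states: addition requires counting; subtraction and
# multiplication each require addition.
STATES = {
    frozenset(),
    frozenset({"counting"}),
    frozenset({"counting", "addition"}),
    frozenset({"counting", "addition", "subtraction"}),
    frozenset({"counting", "addition", "multiplication"}),
    Q,
}


def closed_under_union(states) -> bool:
    """A knowledge space must contain the union of any two feasible states."""
    return all((a | b) in states for a, b in combinations(states, 2))


if __name__ == "__main__":
    print("Closed under union:", closed_under_union(STATES))  # True
```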
Document 3:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
Document 4:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Represented in equations by the letter "g", what pulls objects down to the earth's surface?
A. energy
B. light
C. motion
D. gravity
Answer:
|
|
sciq-9397
|
multiple_choice
|
In ovoviviparous fish like shark, what develops inside the mother’s body but without nourishment from the mother?
|
[
"genes",
"spores",
"eggs",
"molecules"
] |
C
|
Relevant Documents:
Document 0:::
A fish (: fish or fishes) is an aquatic, craniate, gill-bearing animal that lacks limbs with digits. Included in this definition are the living hagfish, lampreys, and cartilaginous and bony fish as well as various extinct related groups. Approximately 95% of living fish species are ray-finned fish, belonging to the class Actinopterygii, with around 99% of those being teleosts.
The earliest organisms that can be classified as fish were soft-bodied chordates that first appeared during the Cambrian period. Although they lacked a true spine, they possessed notochords which allowed them to be more agile than their invertebrate counterparts. Fish would continue to evolve through the Paleozoic era, diversifying into a wide variety of forms. Many fish of the Paleozoic developed external armor that protected them from predators. The first fish with jaws appeared in the Silurian period, after which many (such as sharks) became formidable marine predators rather than just the prey of arthropods.
Most fish are ectothermic ("cold-blooded"), allowing their body temperatures to vary as ambient temperatures change, though some of the large active swimmers like white shark and tuna can hold a higher core temperature. Fish can acoustically communicate with each other, most often in the context of feeding, aggression or courtship.
Fish are abundant in most bodies of water. They can be found in nearly all aquatic environments, from high mountain streams (e.g., char and gudgeon) to the abyssal and even hadal depths of the deepest oceans (e.g., cusk-eels and snailfish), although no species has yet been documented in the deepest 25% of the ocean. With 34,300 described species, fish exhibit greater species diversity than any other group of vertebrates.
Fish are an important resource for humans worldwide, especially as food. Commercial and subsistence fishers hunt fish in wild fisheries or farm them in ponds or in cages in the ocean (in aquaculture). They are also caught by recreational
Document 1:::
Histotrophy is a form of matrotrophy exhibited by some live-bearing sharks and rays, in which the developing embryo receives additional nutrition from its mother in the form of uterine secretions, known as histotroph (or "uterine milk"). It is one of the three major modes of elasmobranch reproduction encompassed by "aplacental viviparity", and can be contrasted with yolk-sac viviparity (in which the embryo is solely sustained by yolk) and oophagy (in which the embryo feeds on ova).
There are two categories of histotrophy:
In mucoid or limited histotrophy, the developing embryo ingests uterine mucus or histotroph as a supplement to the energy supplies provided by its yolk sac. This form of histotrophy is known to occur in the dogfish sharks (Squaliformes) and the electric rays (Torpediniformes), and may be more widespread.
In lipid histotrophy, the developing embryo is supplied with protein and lipid-enriched histotroph through specialized finger-like structures known as trophonemata. The additional nutrition provided by the enriched histotroph allows the embryo to increase in mass from the egg by several orders of magnitude by the time it is born, much greater than is possible in mucoid histotrophy. This form of histotrophy is found in stingrays and their relatives (Myliobatiformes).
Document 2:::
External fertilization is a mode of reproduction in which a male organism's sperm fertilizes a female organism's egg outside of the female's body.
It is contrasted with internal fertilization, in which sperm are introduced via insemination and then combine with an egg inside the body of a female organism. External fertilization typically occurs in water or a moist area to facilitate the movement of sperm to the egg. The release of eggs and sperm into the water is known as spawning. In motile species, spawning females often travel to a suitable location to release their eggs.
However, sessile species are less able to move to spawning locations and must release gametes locally. Among vertebrates, external fertilization is most common in amphibians and fish. Invertebrates utilizing external fertilization are mostly benthic, sessile, or both, including animals such as coral, sea anemones, and tube-dwelling polychaetes. Benthic marine plants also use external fertilization to reproduce. Environmental factors and timing are key challenges to the success of external fertilization. While in the water, the male and female must both release gametes at similar times in order to fertilize the egg. Gametes spawned into the water may also be washed away, eaten, or damaged by external factors.
Sexual selection
Sexual selection may not seem to occur during external fertilization, but there are ways it actually can. The two types of external fertilizers are nest builders and broadcast spawners. For female nest builders, the main choice is the location of where to lay her eggs. A female can choose a nest close to the male she wants to fertilize her eggs, but there is no guarantee that the preferred male will fertilize any of the eggs. Broadcast spawners have a very weak selection, due to the randomness of releasing gametes. To look into the effect of female choice on external fertilization, an in vitro sperm competition experiment was performed. The results concluded that ther
Document 3:::
Fish go through various life stages between fertilization and adulthood. The life of a fish start as spawned eggs which hatch into immotile larvae. These larval hatchlings are not yet capable of feeding themselves and carry a yolk sac which provides stored nutrition. Before the yolk sac completely disappears, the young fish must mature enough to be able to forage independently. When they have developed to the point where they are capable of feeding by themselves, the fish are called fry. When, in addition, they have developed scales and working fins, the transition to a juvenile fish is complete and it is called a fingerling, so called as they are typically about the size of human fingers. The juvenile stage lasts until the fish is fully grown, sexually mature and interacting with other adult fish.
Growth stages
Ichthyoplankton (planktonic or drifting fish) are the eggs and larvae of fish. They are usually found in the sunlit zone of the water column, less than 200 metres deep, sometimes called the epipelagic or photic zone. Ichthyoplankton are planktonic, meaning they cannot swim effectively under their own power, but must drift with ocean currents. Fish eggs cannot swim at all, and are unambiguously planktonic. Early stage larvae swim poorly, but later stage larvae swim better and cease to be planktonic as they grow into juveniles. Fish larvae are part of the zooplankton that eat smaller plankton, while fish eggs carry their own food supply. Both eggs and larvae are themselves eaten by larger animals.
According to Kendall et al. 1984 there are three main developmental stages of fish:
Egg stage: From spawning to hatching. This stage is named so, instead of being called an embryonic stage, because there are aspects, such as those to do with the egg envelope, that are not just embryonic aspects.
Larval stage: From the eggs hatching till to when fin rays are present and the growth of protective scales has started (squamation). A key event is when the notochord
Document 4:::
Fish anatomy is the study of the form or morphology of fish. It can be contrasted with fish physiology, which is the study of how the component parts of fish function together in the living fish. In practice, fish anatomy and fish physiology complement each other, the former dealing with the structure of a fish, its organs or component parts and how they are put together, such as might be observed on the dissecting table or under the microscope, and the latter dealing with how those components function together in living fish.
The anatomy of fish is often shaped by the physical characteristics of water, the medium in which fish live. Water is much denser than air, holds a relatively small amount of dissolved oxygen, and absorbs more light than air does. The body of a fish is divided into a head, trunk and tail, although the divisions between the three are not always externally visible. The skeleton, which forms the support structure inside the fish, is either made of cartilage (cartilaginous fish) or bone (bony fish). The main skeletal element is the vertebral column, composed of articulating vertebrae which are lightweight yet strong. The ribs attach to the spine and there are no limbs or limb girdles. The main external features of the fish, the fins, are composed of either bony or soft spines called rays which, with the exception of the caudal fins, have no direct connection with the spine. They are supported by the muscles which compose the main part of the trunk.
The heart has two chambers and pumps the blood through the respiratory surfaces of the gills and then around the body in a single circulatory loop. The eyes are adapted for seeing underwater and have only local vision. There is an inner ear but no external or middle ear. Low-frequency vibrations are detected by the lateral line system of sense organs that run along the length of the sides of fish, which responds to nearby movements and to changes in water pressure.
Sharks and rays are basal fish with
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In ovoviviparous fish like shark, what develops inside the mother’s body but without nourishment from the mother?
A. genes
B. spores
C. eggs
D. molecules
Answer:
|
|
sciq-7926
|
multiple_choice
|
What phase exists when all the water in a container has physical properties intermediate between those of the gaseous and liquid states?
|
[
"impregnation fluid",
"stationary fluid",
"hydrothermal fluid",
"supercritical fluid"
] |
D
|
Relevant Documents:
Document 0:::
In chemistry, thermodynamics, and other related fields, a phase transition (or phase change) is the physical process of transition between one state of a medium and another. Commonly the term is used to refer to changes among the basic states of matter: solid, liquid, and gas, and in rare cases, plasma. A phase of a thermodynamic system and the states of matter have uniform physical properties. During a phase transition of a given medium, certain properties of the medium change as a result of the change of external conditions, such as temperature or pressure. This can be a discontinuous change; for example, a liquid may become gas upon heating to its boiling point, resulting in an abrupt change in volume. The identification of the external conditions at which a transformation occurs defines the phase transition point.
Types of phase transition
States of matter
Phase transitions commonly refer to when a substance transforms between one of the four states of matter to another. At the phase transition point for a substance, for instance the boiling point, the two phases involved - liquid and vapor, have identical free energies and therefore are equally likely to exist. Below the boiling point, the liquid is the more stable state of the two, whereas above the boiling point the gaseous form is the more stable.
Common transitions between the solid, liquid, and gaseous phases of a single component, due to the effects of temperature and/or pressure are identified in the following table:
For a single component, the most stable phase at different temperatures and pressures can be shown on a phase diagram. Such a diagram usually depicts states in equilibrium. A phase transition usually occurs when the pressure or temperature changes and the system crosses from one region to another, like water turning from liquid to solid as soon as the temperature drops below the freezing point. In exception to the usual case, it is sometimes possible to change the state of a system dia
Document 1:::
A phase diagram in physical chemistry, engineering, mineralogy, and materials science is a type of chart used to show conditions (pressure, temperature, volume, etc.) at which thermodynamically distinct phases (such as solid, liquid or gaseous states) occur and coexist at equilibrium.
Overview
Common components of a phase diagram are lines of equilibrium or phase boundaries, which refer to lines that mark conditions under which multiple phases can coexist at equilibrium. Phase transitions occur along lines of equilibrium. Metastable phases are not shown in phase diagrams as, despite their common occurrence, they are not equilibrium phases.
Triple points are points on phase diagrams where lines of equilibrium intersect. Triple points mark conditions at which three different phases can coexist. For example, the water phase diagram has a triple point corresponding to the single temperature and pressure at which solid, liquid, and gaseous water can coexist in a stable equilibrium (273.16 K and a partial vapor pressure of 611.657 Pa). The pressure on a pressure-temperature diagram (such as the water phase diagram shown) is the partial pressure of the substance in question.
The solidus is the temperature below which the substance is stable in the solid state. The liquidus is the temperature above which the substance is stable in a liquid state. There may be a gap between the solidus and liquidus; within the gap, the substance consists of a mixture of crystals and liquid (like a "slurry").
Working fluids are often categorized on the basis of the shape of their phase diagram.
Types
2-dimensional diagrams
Pressure vs temperature
The simplest phase diagrams are pressure–temperature diagrams of a single simple substance, such as water. The axes correspond to the pressure and temperature. The phase diagram shows, in pressure–temperature space, the lines of equilibrium or phase boundaries between the three phases of solid, liquid, and gas.
The curves on the phase diagram show the po
Document 2:::
In thermodynamics, a critical point (or critical state) is the end point of a phase equilibrium curve. One example is the liquid–vapor critical point, the end point of the pressure–temperature curve that designates conditions under which a liquid and its vapor can coexist. At higher temperatures, the gas cannot be liquefied by pressure alone. At the critical point, defined by a critical temperature Tc and a critical pressure pc, phase boundaries vanish. Other examples include the liquid–liquid critical points in mixtures, and the ferromagnet–paramagnet transition (Curie temperature) in the absence of an external magnetic field.
Liquid–vapor critical point
Overview
For simplicity and clarity, the generic notion of critical point is best introduced by discussing a specific example, the vapor–liquid critical point. This was the first critical point to be discovered, and it is still the best known and most studied one.
The figure to the right shows the schematic P-T diagram of a pure substance (as opposed to mixtures, which have additional state variables and richer phase diagrams, discussed below). The commonly known phases solid, liquid and vapor are separated by phase boundaries, i.e. pressure–temperature combinations where two phases can coexist. At the triple point, all three phases can coexist. However, the liquid–vapor boundary terminates in an endpoint at some critical temperature Tc and critical pressure pc. This is the critical point.
The critical point of water occurs at 647.096 K (373.946 °C) and 22.064 MPa.
In the vicinity of the critical point, the physical properties of the liquid and the vapor change dramatically, with both phases becoming even more similar. For instance, liquid water under normal conditions is nearly incompressible, has a low thermal expansion coefficient, has a high dielectric constant, and is an excellent solvent for electrolytes. Near the critical point, all these properties change into the exact opposite: water becomes compressible, expandable, a poor diele
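As a small illustrative sketch (not from the source), the critical constants of water just quoted, Tc ≈ 647.096 K and pc ≈ 22.064 MPa, can be used to flag a (temperature, pressure) state point as lying in the supercritical regime.

```python
# Illustrative sketch (not from the source): classify a state of water as
# supercritical using commonly cited critical constants.

T_CRIT_K = 647.096    # critical temperature of water, kelvin
P_CRIT_MPA = 22.064   # critical pressure of water, megapascals


def is_supercritical(temperature_k: float, pressure_mpa: float) -> bool:
    """True when both temperature and pressure exceed the critical values,
    so distinct liquid and vapor phases no longer exist."""
    return temperature_k > T_CRIT_K and pressure_mpa > P_CRIT_MPA


if __name__ == "__main__":
    print(is_supercritical(700.0, 25.0))    # True  -> supercritical fluid
    print(is_supercritical(373.15, 0.101))  # False -> ordinary boiling water
```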
Document 3:::
A liquid–liquid critical point (or LLCP) is the endpoint of a liquid–liquid phase transition line (LLPT); it is a critical point where two types of local structures coexist at the exact ratio of unity. This hypothesis was first developed by Peter Poole, Francesco Sciortino, Uli Essmann and H. Eugene Stanley in Boston to obtain a quantitative understanding of the huge number of anomalies present in water.
Near a liquid–liquid critical point, there is always a competition between two alternative local structures. For instance, in supercooled water, two types of local structures have been predicted: a low-density local configuration (LD) and a high-density local configuration (HD), so above the critical pressure, the liquid is composed by a majority of HD local structure, while below the critical pressure a higher fraction of LD local configurations is present. The ratio between HD and LD configurations is determined according to the thermodynamic equilibrium of the system, which is often governed by external variables such as pressure and temperature.
The liquid–liquid critical point theory can be applied to several liquids that possess the tetrahedral symmetry. The study of liquid–liquid critical points is an active research area with hundreds of articles having been published, though only a few of these investigations have been experimental since most modern probing techniques are not fast and/or sensitive enough to study them.
Document 4:::
This is a list of gases at standard conditions, which means substances that boil or sublime at or below 25 °C (77 °F) and 1 atm pressure and are reasonably stable.
List
This list is sorted by boiling point of gases in ascending order, but can be sorted on different values. "sub" and "triple" refer to the sublimation point and the triple point, which are given in the case of a substance that sublimes at 1 atm; "dec" refers to decomposition. "~" means approximately.
Known as gas
The following list has substances known to be gases, but with an unknown boiling point.
Fluoroamine
Trifluoromethyl trifluoroethyl trioxide CF3OOOCF2CF3 boils between 10 and 20°
Bis-trifluoromethyl carbonate boils between −10 and +10° possibly +12, freezing −60°
Difluorodioxirane boils between −80 and −90°.
Difluoroaminosulfinyl fluoride F2NS(O)F is a gas but decomposes over several hours
Trifluoromethylsulfinyl chloride CF3S(O)Cl
Nitrosyl cyanide ?−20° blue-green gas 4343-68-4
Thiazyl chloride NSCl greenish yellow gas; trimerises.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What phase exists when all the water in a container has physical properties intermediate between those of the gaseous and liquid states?
A. impregnation fluid
B. stationary fluid
C. hydrothermal fluid
D. supercritical fluid
Answer:
|
|
sciq-11358
|
multiple_choice
|
What device is used to study charge?
|
[
"microscope",
"microtome",
"nannostomus",
"electroscope"
] |
D
|
Relevant Documents:
Document 0:::
The Biopac Student Lab is a proprietary teaching device and method introduced in 1995 as a digital replacement for aging chart recorders and oscilloscopes that were widely used in undergraduate teaching laboratories prior to that time. It is manufactured by BIOPAC Systems, Inc., of Goleta, California. The advent of low cost personal computers meant that older analog technologies could be replaced with powerful and less expensive computerized alternatives.
Students in undergraduate teaching labs use the BSL system to record data from their own bodies, animals or tissue preparations. The BSL system integrates hardware, software and curriculum materials including over sixty experiments that students use to study the cardiovascular system, muscles, pulmonary function, autonomic nervous system, and the brain.
History of physiology and electricity
One of the more complicated concepts for students to grasp is the fact that electricity is flowing throughout a living body at all times and that it is possible to use the signals to measure the performance and health of individual parts of the body. The Biopac Student Lab System helps to explain the concept and allows students to understand physiology.
Physiology and electricity share a common history, with some of the pioneering work in each field being done in the late 18th century by Count Alessandro Giuseppe Antonio Anastasio Volta and Luigi Galvani. Count Volta invented the battery and had a unit of electrical measurement named in his honor (the Volt). These early researchers studied "animal electricity" and were among the first to realize that applying an electrical signal to an isolated animal muscle caused it to twitch. The Biopac Student Lab uses procedures similar to Count Volta’s to demonstrate how muscles can be electrically stimulated.
Concept
The BSL system includes data acquisition hardware with built-in universal amplifiers to record and condition electrical signals from the heart, muscle, nerve, brain, eye,
Document 1:::
The Instrument Room is a room in Teylers Museum which houses a part of the museum's Cabinet of Physics: a collection of scientific instruments from the 18th and 19th centuries. The instruments in the collection were used for research as well as for educational public demonstrations. Most of them are demonstration models that illustrate various aspects of electricity, acoustics, light, magnetism, thermodynamics, and weights and measures. The rest are high-quality precision instruments that were used for research.
History of the room
Originally all of the museum's collections were housed in the Oval Room from 1784. The electricity instrument demonstrations tended to make a lot of noise and distracted the readers of the books in the gallery, and after the mineralogical cabinet was built for the center of the room, demonstrations there became more difficult and a new demonstration and lecture room was built on the north side (today the Print room). This new room shared its purpose with the art gallery but as the number of instrument cabinets increased, was felt to be too dark, leading to the creation of a separate painting gallery in 1838. The current instrument room was built as part of an 1880-1885 extension of the museum, designed to have daylight from both sides for better viewing of the experiments. It is located between the Fossil Room II and the Oval Room.
History of the collection
Though Pieter Teyler van der Hulst was a patron of the arts and sciences, he was not a member of the Natuur- en Sterrekundig Collegie, a science society in Haarlem that was founded in the Patientiestraat in 1775. The popularity of the study of science and the ideals of the Dutch enlightenment were such that after his death however, when Martin van Marum joined the young Teylers Stichting, this proved quickly to become the emphasis of the society in the years to come. Teylers Museum was not alone. The society Oefening door Wetenschappen was also started in Haarlem in 1798 and lasted u
Document 2:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 3:::
A string galvanometer is a sensitive fast-responding measuring instrument that uses a single fine filament of wire suspended in a strong magnetic field to measure small currents. In use, a strong light source is used to illuminate the fine filament, and the optical system magnifies the movement of the filament allowing it to be observed or recorded by photography.
The principle of the string galvanometer remained in use for electrocardiograms until the advent of electronic vacuum-tube amplifiers in the 1920s.
History
Submarine cable telegraph systems of the late 19th century used a galvanometer to detect pulses of electric current, which could be observed and transcribed into a message. The speed at which pulses could be detected by the galvanometer was limited by its mechanical inertia, and by the inductance of the multi-turn coil used in the instrument. Clément Ader, a French engineer, replaced the coil with a much faster wire or "string", producing the first string galvanometer.
For most telegraphic purposes it was sufficient to detect the existence of a pulse. In 1892 André Blondel described the dynamic properties of an instrument that could measure the wave shape of an electrical impulse, an oscillograph.
Augustus Waller had discovered electrical activity from the heart and produced the first electrocardiogram in 1887. But his equipment was slow. Physiologists worked to find a better instrument. In 1901, Willem Einthoven described the science background and potential utility of a string galvanometer, stating "Mr. Ader has already built an instrument with a wire stretched between the poles of a magnet. It was a telegraph receiver." Einthoven developed a sensitive form of string galvanometer that allowed photographic recording of the impulses associated with the heartbeat. He was a leader in applying the string galvanometer to physiology and medicine, leading to today's electrocardiography. Einthoven was awarded the 1924 Nobel prize in Physiology or Medicine f
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What device is used to study charge?
A. microscope
B. microtome
C. nannostomus
D. electroscope
Answer:
|
|
sciq-622
|
multiple_choice
|
What determines how strongly an atom attracts electrons to itself?
|
[
"gravity",
"electronegativity",
"enthalpy",
"ionization"
] |
B
|
Relevant Documents:
Document 0:::
This page deals with the electron affinity as a property of isolated atoms or molecules (i.e. in the gas phase). Solid state electron affinities are not listed here.
Elements
Electron affinity can be defined in two equivalent ways. First, as the energy that is released by adding an electron to an isolated gaseous atom. The second (reverse) definition is that electron affinity is the energy required to remove an electron from a singly charged gaseous negative ion. The latter can be regarded as the ionization energy of the –1 ion or the zeroth ionization energy. Either convention can be used.
Negative electron affinities can be used in those cases where electron capture requires energy, i.e. when capture can occur only if the impinging electron has a kinetic energy large enough to excite a resonance of the atom-plus-electron system. Conversely electron removal from the anion formed in this way releases energy, which is carried out by the freed electron as kinetic energy. Negative ions formed in these cases are always unstable. They may have lifetimes of the order of microseconds to milliseconds, and invariably autodetach after some time.
Molecules
The electron affinities Eea of some molecules are given in the table below, from the lightest to the heaviest. Many more have been listed by . The electron affinities of the radicals OH and SH are the most precisely known of all molecular electron affinities.
Second and third electron affinity
Bibliography
Updated values can be found in the NIST chemistry webbook for around three dozen elements and close to 400 compounds.
Specific molecules
Document 1:::
In chemistry and physics, valence electrons are electrons in the outermost shell of an atom, and that can participate in the formation of a chemical bond if the outermost shell is not closed. In a single covalent bond, a shared pair forms with both atoms in the bond each contributing one valence electron.
The presence of valence electrons can determine the element's chemical properties, such as its valence—whether it may bond with other elements and, if so, how readily and with how many. In this way, a given element's reactivity is highly dependent upon its electronic configuration. For a main-group element, a valence electron can exist only in the outermost electron shell; for a transition metal, a valence electron can also be in an inner shell.
An atom with a closed shell of valence electrons (corresponding to a noble gas configuration) tends to be chemically inert. Atoms with one or two valence electrons more than a closed shell are highly reactive due to the relatively low energy to remove the extra valence electrons to form a positive ion. An atom with one or two electrons fewer than a closed shell is reactive due to its tendency either to gain the missing valence electrons and form a negative ion, or else to share valence electrons and form a covalent bond.
Similar to a core electron, a valence electron has the ability to absorb or release energy in the form of a photon. An energy gain can trigger the electron to move (jump) to an outer shell; this is known as atomic excitation. Or the electron can even break free from its associated atom's shell; this is ionization to form a positive ion. When an electron loses energy (thereby causing a photon to be emitted), then it can move to an inner shell which is not fully occupied.
Overview
Electron configuration
The electrons that determine valence – how an atom reacts chemically – are those with the highest energy.
For a main-group element, the valence electrons are defined as those electrons residing in the e
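As a minimal sketch (not from the source), and valid only for the main-group case described above, the number of valence electrons can be read off an explicit electron configuration as the count of electrons in the shell with the highest principal quantum number; the configurations used in the example are standard textbook values.

```python
# Minimal sketch (not from the source): count valence electrons of a main-group
# element from an explicit electron configuration, taken as the electrons in the
# shell with the highest principal quantum number. Not valid for transition metals.

import re


def valence_electrons(configuration: str) -> int:
    """E.g. '1s2 2s2 2p6 3s2 3p5' (chlorine) -> 7."""
    counts: dict[int, int] = {}
    for n, _subshell, electrons in re.findall(r"(\d+)([spdf])(\d+)", configuration):
        counts[int(n)] = counts.get(int(n), 0) + int(electrons)
    outermost_shell = max(counts)
    return counts[outermost_shell]


if __name__ == "__main__":
    print(valence_electrons("1s2 2s2 2p6 3s2 3p5"))  # chlorine: 7
    print(valence_electrons("1s2 2s2 2p2"))          # carbon: 4
```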
Document 2:::
The electric dipole moment is a measure of the separation of positive and negative electrical charges within a system, that is, a measure of the system's overall polarity. The SI unit for electric dipole moment is the coulomb-meter (C⋅m). The debye (D) is another unit of measurement used in atomic physics and chemistry.
Theoretically, an electric dipole is defined by the first-order term of the multipole expansion; it consists of two equal and opposite charges that are infinitesimally close together, although real dipoles have separated charge.
Elementary definition
Often in physics the dimensions of a massive object can be ignored and it can be treated as a pointlike object, i.e. a point particle. Point particles with electric charge are referred to as point charges. Two point charges, one with charge +q and the other one with charge −q separated by a distance d, constitute an electric dipole (a simple case of an electric multipole). For this case, the electric dipole moment has a magnitude p = qd and is directed from the negative charge to the positive one. Some authors may split d in half and use s = d/2, since this quantity is the distance between either charge and the center of the dipole, leading to a factor of two in the definition.
A stronger mathematical definition is to use vector algebra, since a quantity with magnitude and direction, like the dipole moment of two point charges, can be expressed in vector form p = qd, where d is the displacement vector pointing from the negative charge to the positive charge. The electric dipole moment vector p also points from the negative charge to the positive charge. With this definition the dipole direction tends to align itself with an external electric field (and note that the electric flux lines produced by the charges of the dipole itself, which point from positive charge to negative charge, then tend to oppose the flux lines of the external field). Note that this sign convention is used in physics, while the opposite sign convention for th
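A short sketch (not part of the source) of the two expressions just given, the magnitude p = qd and the vector form with the displacement taken from the negative to the positive charge; the charge and separation values are arbitrary examples.

```python
# Illustrative sketch (not from the source): dipole moment of two point charges
# +q and -q. The displacement vector points from the negative to the positive
# charge, and the magnitude is p = q * d.

import math


def dipole_moment(q: float, pos_plus, pos_minus):
    """Return (vector, magnitude) of the dipole moment in coulomb-metres."""
    d = [a - b for a, b in zip(pos_plus, pos_minus)]   # displacement from -q to +q
    vector = [q * component for component in d]
    magnitude = q * math.dist(pos_plus, pos_minus)
    return vector, magnitude


if __name__ == "__main__":
    # Arbitrary example: 1 nC charges separated by 2 mm along the x axis.
    vec, mag = dipole_moment(1e-9, (0.002, 0.0, 0.0), (0.0, 0.0, 0.0))
    print(vec)  # approximately [2e-12, 0.0, 0.0]
    print(mag)  # approximately 2e-12 C*m
```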
Document 3:::
The electron affinity (Eea) of an atom or molecule is defined as the amount of energy released when an electron attaches to a neutral atom or molecule in the gaseous state to form an anion.
X(g) + e− → X−(g) + energy
This differs by sign from the energy change of electron capture ionization. The electron affinity is positive when energy is released on electron capture.
In solid state physics, the electron affinity for a surface is defined somewhat differently (see below).
Measurement and use of electron affinity
This property is used to measure atoms and molecules in the gaseous state only, since in a solid or liquid state their energy levels would be changed by contact with other atoms or molecules.
A list of the electron affinities was used by Robert S. Mulliken to develop an electronegativity scale for atoms, equal to the average of the electron affinity and ionization potential. Other theoretical concepts that use electron affinity include electronic chemical potential and chemical hardness. As another example, a molecule or atom that has a more positive value of electron affinity than another is often called an electron acceptor and the less positive an electron donor. Together they may undergo charge-transfer reactions.
Sign convention
To use electron affinities properly, it is essential to keep track of sign. For any reaction that releases energy, the change ΔE in total energy has a negative value and the reaction is called an exothermic process. Electron capture for almost all non-noble gas atoms involves the release of energy and thus is exothermic. The positive values that are listed in tables of Eea are amounts or magnitudes. It is the word "released" within the definition "energy released" that supplies the negative sign to ΔE. Confusion arises in mistaking Eea for a change in energy, ΔE, in which case the positive values listed in tables would be for an endo- not exo-thermic process. The relation between the two is Eea = −ΔE(attach).
However, if
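A small worked sketch (not from the source passage): the sign relation Eea = −ΔE(attach) described above, together with the Mulliken-style average of electron affinity and ionization energy mentioned earlier in this document, applied to chlorine using approximate literature values (Eea ≈ 349 kJ/mol, first ionization energy ≈ 1251 kJ/mol). The values are for illustration only.

```python
# Small worked sketch (not from the source): sign convention and a Mulliken-style
# electronegativity estimate for chlorine, using approximate literature values.

EA_CL_KJ_MOL = 349.0    # electron affinity of Cl (energy released on attachment)
IE_CL_KJ_MOL = 1251.0   # first ionization energy of Cl

# Electron capture releases energy, so the energy change of attachment is negative:
delta_e_attach = -EA_CL_KJ_MOL                       # Eea = -dE(attach)

# Mulliken's scale takes the average of electron affinity and ionization energy:
mulliken_estimate = (EA_CL_KJ_MOL + IE_CL_KJ_MOL) / 2.0

print(f"dE(attach) for Cl is about {delta_e_attach:.0f} kJ/mol")           # about -349 kJ/mol
print(f"Mulliken-style average for Cl is about {mulliken_estimate:.0f} kJ/mol")  # about 800 kJ/mol
```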
Document 4:::
In quantum chemistry, Slater's rules provide numerical values for the effective nuclear charge in a many-electron atom. Each electron is said to experience less than the actual nuclear charge, because of shielding or screening by the other electrons. For each electron in an atom, Slater's rules provide a value for the screening constant, denoted by s, S, or σ, which relates the effective and actual nuclear charges as Zeff = Z − s.
The rules were devised semi-empirically by John C. Slater and published in 1930.
Revised values of screening constants based on computations of atomic structure by the Hartree–Fock method were obtained by Enrico Clementi et al. in the 1960s.
Rules
Firstly, the electrons are arranged into a sequence of groups in order of increasing principal quantum number n, and for equal n in order of increasing azimuthal quantum number l, except that s- and p- orbitals are kept together.
[1s] [2s,2p] [3s,3p] [3d] [4s,4p] [4d] [4f] [5s, 5p] [5d] etc.
Each group is given a different shielding constant which depends upon the number and types of electrons in those groups preceding it.
The shielding constant for each group is formed as the sum of the following contributions:
An amount of 0.35 from each other electron within the same group except for the [1s] group, where the other electron contributes only 0.30.
If the group is of the [ns, np] type, an amount of 0.85 from each electron with principal quantum number (n–1), and an amount of 1.00 for each electron with principal quantum number (n–2) or less.
If the group is of the [d] or [f] type, an amount of 1.00 for each electron "closer" to the nucleus than the group. This includes both i) electrons with a smaller principal quantum number than n and ii) electrons with principal quantum number n and a smaller azimuthal quantum number l.
In tabular form, the rules are summarized as:
Example
An example provided in Slater's original paper is for the iron atom which has nuclear charge 26 and electronic configuration
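As a short sketch (not from the source), the contributions listed above can be applied to the 4s electron of iron (Z = 26, configuration 1s2 2s2 2p6 3s2 3p6 3d6 4s2); the arithmetic below is hard-coded for this single case rather than being a general implementation of the rules.

```python
# Short sketch (not from the source): Slater screening for the 4s electron of
# iron, Z = 26, configuration 1s2 2s2 2p6 3s2 3p6 3d6 4s2.

Z_IRON = 26

# Electrons are grouped as [1s][2s,2p][3s,3p][3d][4s,4p]. The 4s electron of
# interest is screened by:
same_group      = 1 * 0.35    # the other 4s electron
n_minus_1_shell = 14 * 0.85   # the 3s, 3p and 3d electrons (principal quantum number 3)
inner_shells    = 10 * 1.00   # the 1s, 2s and 2p electrons

screening = same_group + n_minus_1_shell + inner_shells   # 22.25
z_eff = Z_IRON - screening                                # 3.75

print(f"Screening constant s = {screening:.2f}")
print(f"Effective nuclear charge Zeff = {z_eff:.2f}")
```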
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What determines how strongly an atom attracts electrons to itself?
A. gravity
B. electronegativity
C. enthalpy
D. ionization
Answer:
|
|
sciq-10360
|
multiple_choice
|
It was not until the era of the ancient Greeks that we have any record of how people tried to explain the chemical changes they observed and used. At that time, natural objects were thought to consist of only four basic elements: earth, air, fire, and this?
|
[
"sky",
"soul",
"water",
"grass"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
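To make the structure concrete, here is a small Python sketch with a hypothetical three-skill domain; the skill names and feasible states are invented for illustration and are not taken from the article. It checks closure under union, the standard defining property of a knowledge space, and lists what a learner in a given state is ready to learn next (the outer fringe of that state).

```python
DOMAIN = frozenset({"counting", "addition", "multiplication"})   # hypothetical skills

STATES = {                                                       # hypothetical feasible states
    frozenset(),
    frozenset({"counting"}),
    frozenset({"counting", "addition"}),
    frozenset({"counting", "addition", "multiplication"}),
}

def is_knowledge_space(states, domain):
    """Empty state and full domain are feasible, and any union of feasible states is feasible."""
    if frozenset() not in states or domain not in states:
        return False
    return all((s | t) in states for s in states for t in states)

def outer_fringe(state, states, domain):
    """Skills q such that the learner could move from `state` to `state + {q}`."""
    return {q for q in domain - state if state | {q} in states}

print(is_knowledge_space(STATES, DOMAIN))                        # True
print(outer_fringe(frozenset({"counting"}), STATES, DOMAIN))     # {'addition'}
```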
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 3:::
In chemistry, the history of molecular theory traces the origins of the concept or idea of the existence of strong chemical bonds between two or more atoms.
A modern conceptualization of molecules began to develop in the 19th century along with experimental evidence for pure chemical elements and how individual atoms of different chemical elements such as hydrogen and oxygen can combine to form chemically stable molecules such as water molecules.
Ancient world
The modern concept of molecules can be traced back towards pre-scientific and Greek philosophers such as Leucippus and Democritus who argued that all the universe is composed of atoms and voids.
Circa 450 BC Empedocles imagined fundamental elements (fire, earth, air, and water) and "forces" of attraction and repulsion allowing the elements to interact. Prior to this, Heraclitus had claimed that fire or change was fundamental to our existence, created through the combination of opposite properties.
In the Timaeus, Plato, following Pythagoras, considered mathematical entities such as number, point, line and triangle as the fundamental building blocks or elements of this ephemeral world, and considered the four elements of fire, air, water and earth as states of substances through which the true mathematical principles or elements would pass. A fifth element, the incorruptible quintessence aether, was considered to be the fundamental building block of the heavenly bodies.
The viewpoint of Leucippus and Empedocles, along with the aether, was accepted by Aristotle and passed to medieval and renaissance Europe.
Greek atomism
The earliest views on the shapes and connectivity of atoms was that proposed by Leucippus, Democritus, and Epicurus who reasoned that the solidness of the material corresponded to the shape of the atoms involved. Thus, iron atoms are solid and strong with hooks that lock them into a solid; water atoms are smooth and slippery; salt atoms, because of their taste, are sharp
Document 4:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
It was not until the era of the ancient Greeks that we have any record of how people tried to explain the chemical changes they observed and used. At that time, natural objects were thought to consist of only four basic elements: earth, air, fire, and this?
A. sky
B. soul
C. water
D. grass
Answer:
|
|
sciq-7388
|
multiple_choice
|
There are two types of digestion, mechanical and what else?
|
[
"mineral",
"chemical",
"radiation",
"thermal"
] |
B
|
Relavent Documents:
Document 0:::
Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use.
In the human digestive system, food enters the mouth and mechanical digestion of the food starts by the action of mastication (chewing), a form of mechanical digestion, and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food; the saliva also contains mucus, which lubricates the food, and hydrogen carbonate, which provides the ideal conditions of pH (alkaline) for amylase to work, and electrolytes (Na+, K+, Cl−, HCO−3). About 30% of starch is hydrolyzed into disaccharide in the oral cavity (mouth). After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. They provide a slimy layer that acts as a shield against the damag
Document 1:::
Food and biological process engineering is a discipline concerned with applying principles of engineering to the fields of food production and distribution and biology. It is a broad field, with workers fulfilling a variety of roles ranging from design of food processing equipment to genetic modification of organisms. In some respects it is a combined field, drawing from the disciplines of food science and biological engineering to improve the earth's food supply.
Creating, processing, and storing food to support the world's population requires extensive interdisciplinary knowledge. Notably, there are many biological engineering processes within food engineering to manipulate the multitude of organisms involved in our complex food chain. Food safety in particular requires biological study to understand the microorganisms involved and how they affect humans. However, other aspects of food engineering, such as food storage and processing, also require extensive biological knowledge of both the food and the microorganisms that inhabit it. This food microbiology and biology knowledge becomes biological engineering when systems and processes are created to maintain desirable food properties and microorganisms while providing mechanisms for eliminating the unfavorable or dangerous ones.
Concepts
Many different concepts are involved in the field of food and biological process engineering. Below are listed several major ones.
Food science
The science behind food and food production involves studying how food behaves and how it can be improved. Researchers analyze longevity and composition (i.e., ingredients, vitamins, minerals, etc.) of foods, as well as how to ensure food safety.
Genetic engineering
Modern food and biological process engineering relies heavily on applications of genetic manipulation. By understanding plants and animals on the molecular level, scientists are able to engineer them with specific goals in mind.
Among the most notable applications of
Document 2:::
Food science is the basic science and applied science of food; its scope starts at overlap with agricultural science and nutritional science and leads through the scientific aspects of food safety and food processing, informing the development of food technology.
Food science brings together multiple scientific disciplines. It incorporates concepts from fields such as chemistry, physics, physiology, microbiology, and biochemistry. Food technology incorporates concepts from chemical engineering, for example.
Activities of food scientists include the development of new food products, design of processes to produce these foods, choice of packaging materials, shelf-life studies, sensory evaluation of products using survey panels or potential consumers, as well as microbiological and chemical testing. Food scientists may study more fundamental phenomena that are directly linked to the production of food products and its properties.
Definition
The Institute of Food Technologists defines food science as "the discipline in which the engineering, biological, and physical sciences are used to study the nature of foods, the causes of deterioration, the principles underlying food processing, and the improvement of foods for the consuming public". The textbook Food Science defines food science in simpler terms as "the application of basic sciences and engineering to study the physical, chemical, and biochemical nature of foods and the principles of food processing".
Disciplines
Some of the subdisciplines of food science are described below.
Food chemistry
Food chemistry is the study of chemical processes and interactions of all biological and non-biological components of foods. The biological substances include such items as meat, poultry, lettuce, beer, and milk.
It is similar to biochemistry in its main components such as carbohydrates, lipids, and protein, but it also includes areas such as water, vitamins, minerals, enzymes, food additives, flavors, and colors. This
Document 3:::
Biological assimilation is the process of absorption of vitamins, minerals, and other chemicals from food as part of the nutrition of an organism. In humans, this is always done with a chemical breakdown (enzymes and acids) and physical breakdown (oral mastication and stomach churning). A second process is the chemical alteration of substances in the bloodstream by the liver or cellular secretions. Although a few similar compounds can be absorbed in digestion, the bioavailability of many compounds is dictated by this second process, since both the liver and cellular secretions can be very specific in their metabolic action (see chirality). This second process is where the absorbed food reaches the cells via the liver.
Most foods are composed of largely indigestible components depending on the enzymes and effectiveness of an animal's digestive tract. The most well-known of these indigestible compounds is cellulose, the basic chemical polymer in the makeup of plant cell walls. Most animals, however, do not produce cellulase, the enzyme needed to digest cellulose. However, some animal species have developed symbiotic relationships with cellulase-producing bacteria (see termites and metamonads). This allows termites to use the energy-dense cellulose carbohydrate. Other such enzymes are known to significantly improve bio-assimilation of nutrients. Because of the use of bacterial derivatives, enzymatic dietary supplements now contain such enzymes as amylase, glucoamylase, protease, invertase, peptidase, lipase, lactase, phytase, and cellulase.
Examples of biological assimilation
Photosynthesis, a process whereby carbon dioxide and water are transformed into a number of organic molecules in plant cells.
Nitrogen fixation from the soil into organic molecules by symbiotic bacteria which live in the roots of certain plants, such as Leguminosae.
Magnesium supplements orotate, oxide, sulfate, citrate, and glycerate are all structurally similar. However, oxide and sulfate are not water-soluble
Document 4:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
There are two types of digestion, mechanical and what else?
A. mineral
B. chemical
C. radiation
D. thermal
Answer:
|
|
scienceQA-2069
|
multiple_choice
|
Which of these organisms contains matter that was once part of the lichen?
|
[
"grizzly bear",
"snowy owl",
"bilberry",
"brown lemming"
] |
A
|
Use the arrows to follow how matter moves through this food web. For each answer choice, try to find a path of arrows that starts from the lichen.
The only arrow pointing to the snowy owl starts from the short-tailed weasel, the only arrow pointing to the short-tailed weasel starts from the brown lemming, and the two arrows pointing to the brown lemming start from the bear sedge and the bilberry, neither of which has any arrows pointing to it. So, in this food web, matter does not move from the lichen to the snowy owl or to the brown lemming. The bilberry likewise has no arrows pointing to it, so matter does not move from the lichen to the bilberry. There is one path matter can take from the lichen to the grizzly bear: lichen->barren-ground caribou->grizzly bear. So the grizzly bear contains matter that was once part of the lichen.
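The reasoning above is a reachability check on a directed graph. The sketch below encodes only the arrows described in the explanation (the full food web contains more organisms and arrows than are listed here) and reports which answer choices can be reached from the lichen.

```python
FOOD_WEB = {                                   # arrows point from eaten to eater
    "lichen": ["barren-ground caribou"],
    "barren-ground caribou": ["grizzly bear"],
    "bear sedge": ["brown lemming"],
    "bilberry": ["brown lemming"],
    "brown lemming": ["short-tailed weasel"],
    "short-tailed weasel": ["snowy owl"],
}

def reachable_from(graph, start):
    """All organisms that matter starting at `start` can reach by following arrows."""
    seen, stack = set(), [start]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

downstream = reachable_from(FOOD_WEB, "lichen")
for choice in ["grizzly bear", "snowy owl", "bilberry", "brown lemming"]:
    print(f"{choice}: {'yes' if choice in downstream else 'no'}")
# Only the grizzly bear is reachable from the lichen, matching answer A.
```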
|
Relavent Documents:
Document 0:::
Edible lichens are lichens that have a cultural history of use as a food. Although almost all lichen are edible (with some notable poisonous exceptions like the wolf lichen, powdered sunshine lichen, and the ground lichen), not all have a cultural history of usage as an edible lichen. Often lichens are merely famine foods eaten in times of dire needs, but in some cultures lichens are a staple food or even a delicacy.
Uses
Although there are many lichen species throughout the world, only a few species of lichen are known to be both edible and provide any nutrition. Two problems often encountered with eating lichens are that they usually contain mildly toxic secondary compounds, and that lichen polysaccharides are generally indigestible to humans. Many human cultures have discovered preparation techniques to overcome these problems. Lichens are often thoroughly washed, boiled, or soaked in ash water to help remove secondary compounds.
Recent analytics within the field have identified 15 kinds of edible lichen, which have been mostly found in China. Due to its rubbery consistency, individuals within China fry, boil, and pressure-cook edible lichens. Further, edible lichens can be made into beverages such as tea.
In the past Iceland moss (Cetraria islandica) was an important human food in northern Europe and Scandinavia, and was cooked in many different ways, such as bread, porridge, pudding, soup, or salad. Bryoria fremontii was an important food in parts of North America, where it was usually pitcooked. It is even featured in a Secwepemc story. Reindeer lichen (Cladonia spp.) is a staple food of reindeer and caribou in the Arctic. Northern peoples in North America and Siberia traditionally eat the partially digested lichen after they remove it from the rumen of caribou that have been killed. It is often called 'stomach icecream'. Rock tripe (Umbilicaria spp. and Lasalia spp.) is a lichen that has frequently been used as an emergency food in North America.
One spe
Document 1:::
Buellia frigida is a species of saxicolous (rock-dwelling), crustose lichen in the family Caliciaceae. It was first described from samples collected from the British National Antarctic Expedition of 1901–1904. It is endemic to maritime and continental Antarctica, where it is common and widespread, at altitudes up to about . This resilient lichen has a characteristic appearance, typically featuring shades of grey and black divided into small polygonal patterns. The crusts can generally grow up to in diameter (smaller sizes are more common), although neighbouring individuals may coalesce to form larger crusts. One of the defining characteristics of the lichen is a textured surface with deep cracks, creating the appearance of radiating . These lobes, bordered by shallower fissures, give the lichen a unique visual texture.
In addition to its striking appearance, Buellia frigida exhibits remarkable adaptability to the harsh Antarctic climate. The lichen has an extremely slow growth rate, estimated to be less than per century. Because of its ability to not only endure but to thrive in one of the Earth's coldest, harshest environments, Buellia frigida has been used frequently as a model organism in astrobiology research. This lichen has been exposed to conditions simulating those encountered in space and on celestial bodies like Mars, including vacuum, UV radiation, and extreme dryness. B. frigida has demonstrated resilience to these space-related stressors, making it a candidate for studying how life can adapt to and potentially survive in the extreme environments found beyond Earth.
Taxonomy
The lichen was formally described as a new species in 1910 by the British botanist Otto Derbishire. The type specimen were collected in 1902 by Reginald Koettlitz from Granite Harbour in McMurdo Sound; they were found growing on tuff. This and other samples were obtained as part of the British National Antarctic Expedition of 1901–1904. The of the lichen was as follows (translat
Document 2:::
A lichenicolous fungus (from Latin -cola 'inhabitant'; akin to Latin colere 'to inhabit') is a parasitic fungus that only lives on lichen as the host. A lichenicolous fungus is not the same as the fungus that is the component of the lichen, which is known as a lichenized fungus. They are most commonly specific to a given fungus as the host, but they also include a wide range of pathogens, saprotrophs, and commensals. It is estimated there are 3000 species of lichenicolous fungi. More than 1800 species are already described among the Ascomycota and Basidiomycota. More than 95% of lichenicolous fungi described as of 2003 are ascomycetes, in 7 classes and 19 orders. Although basidiomycetes have less than 5% of lichenicolous lichen species, they represent 4 classes and 8 orders. Many lichenicolous species have yet to be assigned a phylogenetic position as of 2003.
See also
List of lichenicolous fungi of Iceland
Document 3:::
The British Lichen Society (BLS) was founded in 1958 with the objective of promoting the study and conservation of lichen. Although the society was founded in London, UK, it is also of relevance to lichens worldwide. It has been a registered charity (number 228850) since 1964.
History
At the instigation of Dougal Swinscow, the first meeting of the society was held at the British Museum on 1 February 1958; there were 24 attendees. Several positions were decided: Arthur Edward Wade was elected as the secretary, Peter Wilfred James as the editor and recorder, Joseph Peterken as the treasurer, David Smith the librarian, and Swinscow as curator and assistant editor. Another founder was Ursula Katherine Duncan.
A tenth-anniversary symposium, held jointly with the British Mycological Society, was held on 27 September 1968. In 1983, the BLS held its silver jubilee celebrations to commemorate 25 years since its founding. A one-day lichenology symposium was held at the Natural History Museum, London, covering the topics ecophysiology, ecology, and lichenology in the Southern Hemisphere.
Lichenologist Oliver Gilbert, former president of the BLS and editor of the organisation’s publications, wrote the book The Lichen Hunters in 2004; according to the blurb on the dust jacket, it is "part travelogue and part social history of the British Lichen Society from ... 1958 to the present".
Activities and publications
A series of events are held each year led by members of the society. These include field and indoor meetings and training events. In conjunction and with support from the BLS, the Field Studies Council started giving field courses on lichens in 1958, initially led by Arthur Wade and held at the Malham Tarn Field Studies Centre. These courses helped increase awareness and interest in field lichenology in the British Isles. In 1964, the BLS undertook the Society Distribution Maps Scheme, a major citizen science project led by Mark Seaward. This effort ultimately resulted
Document 4:::
Symbiosis in lichens is the mutually beneficial symbiotic relationship of green algae and/or blue-green algae (cyanobacteria) living among filaments of a fungus, forming lichen.
Living as a symbiont in a lichen appears to be a successful way for a fungus to derive essential nutrients, as about 20% of all fungal species have adopted this mode of life. The autotrophic symbionts occurring in lichens are a wide variety of simple, photosynthetic organisms commonly and traditionally known as “algae”. These symbionts include both prokaryotic and eukaryotic organisms.
Overview of lichens
"Lichens are fungi that have discovered agriculture" — Trevor Goward
A lichen is a combination of fungus and/or algae and/or cyanobacteria that has a very different form (morphology), physiology, and biochemistry than any of the constituent species growing separately. The algae or cyanobacteria benefit their fungal partner by producing organic carbon compounds through photosynthesis. In return, the fungal partner benefits the algae or cyanobacteria by protecting them from the environment by its filaments, which also gather moisture and nutrients from the environment, and (usually) provide an anchor to it.
The majority of the lichens contain eukaryotic autotrophs belonging to the Chlorophyta (green algae) or to the Xanthophyta (yellow-green algae). About 90% of all known lichens have a green alga as a symbiont. Among these, Trebouxia is the most common genus, occurring in about 20% of all lichens. The second most commonly represented green alga genus is Trentepohlia. Overall, about 100 species are known to occur as autotrophs in lichens. All the algae and cyanobacteria are believed to be able to survive separately, as well as within the lichen; that is, at present no algae or cyanobacteria are known which can only survive naturally as part of a lichen. Common algal partners are Trebouxia, Pseudotrebouxia, or Myrmecia.
The prokaryotes belong to the Cyanobacteria, which are often called
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which of these organisms contains matter that was once part of the lichen?
A. grizzly bear
B. snowy owl
C. bilberry
D. brown lemming
Answer:
|
sciq-11024
|
multiple_choice
|
Earthquakes at convergent plate boundaries mark the location of the what?
|
[
"abducting lithosphere",
"subducting lithosphere",
"speeding lithosphere",
"shorting lithosphere"
] |
B
|
Relavent Documents:
Document 0:::
Margaret Armstrong is an Australian geostatistician, mathematical geoscientist, and textbook author. She works as an associate professor in the School of Applied Mathematics at the Fundação Getúlio Vargas in Brazil, and as a research associate in the Centre for Industrial Economics of Mines ParisTech in France.
Education
Armstrong graduated from the University of Queensland in 1972, with a bachelor's degree in mathematics and a diploma of education. After working as a mathematics teacher she returned to graduate study, first with a master's degree in mathematics from Queensland in 1977, and then with Georges Matheron at the École des Mines de Paris. She completed her doctorate there in 1980.
Books
Armstrong is the author of the textbook Basic Linear Geostatistics (Springer, 1998), and co-author of the book Plurigaussian Simulations in Geosciences (Springer, 2003; 2nd ed., 2011). With Matheron, she edited Geostatistical Case Studies (Springer, 1987).
Recognition
In 1998, Armstrong was the winner of the John Cedric Griffiths Teaching Award of the International Association for Mathematical Geosciences. The award statement noted "her aptitude at the blackboard", the international demand for her short courses, and the "great clarity" of her book Basic Linear Geostatistics.
Document 1:::
The core–mantle boundary (CMB) of Earth lies between the planet's silicate mantle and its liquid iron–nickel outer core, at a depth of about 2,890 km below Earth's surface. The boundary is observed via the discontinuity in seismic wave velocities at that depth due to the differences between the acoustic impedances of the solid mantle and the molten outer core. P-wave velocities are much slower in the outer core than in the deep mantle while S-waves do not exist at all in the liquid portion of the core. Recent evidence suggests a distinct boundary layer directly above the CMB possibly made of a novel phase of the basic perovskite mineralogy of the deep mantle named post-perovskite. Seismic tomography studies have shown significant irregularities within the boundary zone and appear to be dominated by the African and Pacific Large Low-Shear-Velocity Provinces (LLSVP).
The uppermost section of the outer core is thought to be about 500–1,800 K hotter than the overlying mantle, creating a thermal boundary layer. The boundary is thought to harbor topography, much like Earth's surface, that is supported by solid-state convection within the overlying mantle. Variations in the thermal properties of the core-mantle boundary may affect how the outer core's iron-rich fluids flow, which are ultimately responsible for Earth's magnetic field.
The D″ region
The approx. 200 km thick layer of the lower mantle directly above the boundary is referred to as the D″ region ("D double-prime" or "D prime prime") and is sometimes included in discussions regarding the core–mantle boundary zone. The D″ name originates from geophysicist Keith Bullen's designations for the Earth's layers. His system was to label each layer alphabetically, A through G, with the crust as 'A' and the inner core as 'G'. In his 1942 publication of his model, the entire lower mantle was the D layer. In 1949, Bullen found his 'D' layer to actually be two different layers. The upper part of the D layer, about 1800 km thick, was r
Document 2:::
In structural geology, a suture is a joining together along a major fault zone, of separate terranes, tectonic units that have different plate tectonic, metamorphic and paleogeographic histories. The suture is often represented on the surface by an orogen or mountain range.
Overview
In plate tectonics, sutures are the remains of subduction zones, and the terranes that are joined together are interpreted as fragments of different palaeocontinents or tectonic plates.
Outcrops of sutures can vary in width from a few hundred meters to a couple of kilometers. They can be networks of mylonitic shear zones or brittle fault zones, but are usually both. Sutures are usually associated with igneous intrusions and tectonic lenses with varying kinds of lithologies from plutonic rocks to ophiolitic fragments.
An example from Great Britain is the Iapetus Suture which, though now concealed beneath younger rocks, has been determined by geophysical means to run along a line roughly parallel with the Anglo-Scottish border and represents the joint between the former continent of Laurentia to the north and the former micro-continent of Avalonia to the south. Avalonia is in fact a plain which dips steeply northwestwards through the crust, underthrusting Laurentia.
Paleontological use
When used in paleontology, suture can also refer to fossil exoskeletons, as in the suture line, a division on a trilobite between the free cheek and the fixed cheek; this suture line allowed the trilobite to perform ecdysis (the shedding of its skin).
Document 3:::
Seismic tomography is a technique for imaging the subsurface of the Earth with seismic waves produced by earthquakes or explosions. P-, S-, and surface waves can be used for tomographic models of different resolutions based on seismic wavelength, wave source distance, and the seismograph array coverage. The data received at seismometers are used to solve an inverse problem, wherein the locations of reflection and refraction of the wave paths are determined. This solution can be used to create 3D images of velocity anomalies which may be interpreted as structural, thermal, or compositional variations. Geoscientists use these images to better understand core, mantle, and plate tectonic processes.
Theory
Tomography is solved as an inverse problem. Seismic travel time data are compared to an initial Earth model and the model is modified until the best possible fit between the model predictions and observed data is found. Seismic waves would travel in straight lines if Earth was of uniform composition, but the compositional layering, tectonic structure, and thermal variations reflect and refract seismic waves. The location and magnitude of these variations can be calculated by the inversion process, although solutions to tomographic inversions are non-unique.
Seismic tomography is similar to medical x-ray computed tomography (CT scan) in that a computer processes receiver data to produce a 3D image, although CT scans use attenuation instead of traveltime difference. Seismic tomography has to deal with the analysis of curved ray paths which are reflected and refracted within the Earth, and potential uncertainty in the location of the earthquake hypocenter. CT scans use linear x-rays and a known source.
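As an illustration of the inverse problem described above, here is a minimal Python sketch of a linearized travel-time inversion. The two-cell grid, straight ray paths, and velocities are invented for illustration, and the one-step least-squares solve is a simplification of the iterative model updating and curved-ray handling used in practice.

```python
import numpy as np

# 3 rays crossing 2 cells; entries are the path length (km) of each ray in each cell
G = np.array([[10.0,  0.0],
              [ 0.0, 12.0],
              [ 7.0,  8.0]])

s_true = np.array([1 / 6.0, 1 / 8.0])   # true slowness (s/km), i.e. 6 and 8 km/s cells
t_obs = G @ s_true                       # synthetic "observed" travel times

# Fit the model to the travel times: solve t = G s in the least-squares sense
s_est, *_ = np.linalg.lstsq(G, t_obs, rcond=None)
print("estimated cell velocities (km/s):", 1 / s_est)   # ~ [6. 8.]
```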
History
Seismic tomography requires large datasets of seismograms and well-located earthquake or explosion sources. These became more widely available in the 1960s with the expansion of global seismic networks, and in the 1970s when digital seismograph data archives were
Document 4:::
The National Institute of Geophysics and Volcanology (, INGV) is a research institute for geophysics and volcanology in Italy.
INGV is funded by the Italian Ministry of Education, Universities and Research. Its main responsibilities within the Italian civil protection system are the maintenance and monitoring of the national networks for seismic and volcanic phenomena, together with outreach and educational activities for the Italian population. The institute employs around 2000 people distributed between the headquarters in Rome and the other sections in Milan, Bologna, Pisa, Naples, Catania and Palermo.
INGV is amongst the top 20 research institutions in terms of scientific publications production. It participates and coordinates several EU research projects and organizes international scientific meetings in collaboration with other institutions.
Presidents
September 29, 1999 – August 11, 2011:
August 12, 2011 – December 21, 2011:
March 21, 2012 -April 27, 2016 : (since December 21, 2011 acting president)
April 28, 2016 – present: .
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Earthquakes at convergent plate boundaries mark the location of the what?
A. abducting lithosphere
B. subducting lithosphere
C. speeding lithosphere
D. shorting lithosphere
Answer:
|
|
sciq-6884
|
multiple_choice
|
What are the narrowest blood vessels, where oxygen is transferred into body cells?
|
[
"muscles",
"capillaries",
"viens",
"arteries"
] |
B
|
Relavent Documents:
Document 0:::
Veins () are blood vessels in the circulatory system of humans and most other animals that carry blood toward the heart. Most veins carry deoxygenated blood from the tissues back to the heart; exceptions are those of the pulmonary and fetal circulations which carry oxygenated blood to the heart. In the systemic circulation arteries carry oxygenated blood away from the heart, and veins return deoxygenated blood to the heart, in the deep veins.
There are three sizes of veins, large, medium, and small. Smaller veins are called venules, and the smallest the post-capillary venules are microscopic that make up the veins of the microcirculation. Veins are often closer to the skin than arteries.
Veins have less smooth muscle and connective tissue and wider internal diameters than arteries. Because of their thinner walls and wider lumens they are able to expand and hold more blood. This greater capacity gives them the term of capacitance vessels. At any time, nearly 70% of the total volume of blood in the human body is in the veins. In medium and large sized veins the flow of blood is maintained by one-way (unidirectional) venous valves to prevent backflow. In the lower limbs this is also aided by muscle pumps, also known as venous pumps that exert pressure on intramuscular veins when they contract and drive blood back to the heart.
Structure
There are three sizes of vein, large, medium, and small. Smaller veins are called venules. The smallest veins are the post-capillary venules. Veins have a similar three-layered structure to arteries. The layers known as tunicae have a concentric arrangement that forms the wall of the vessel. The outer layer, is a thick layer of connective tissue called the tunica externa or adventitia; this layer is absent in the post-capillary venules. The middle layer, consists of bands of smooth muscle and is known as the tunica media. The inner layer, is a thin lining of endothelium known as the tunica intima. The tunica media in the veins is mu
Document 1:::
Great vessels are the large vessels that bring blood to and from the heart. These are:
Superior vena cava
Inferior vena cava
Pulmonary arteries
Pulmonary veins
Aorta
Transposition of the great vessels is a group of congenital heart defects involving an abnormal spatial arrangement of any of the great vessels.
Document 2:::
The blood circulatory system is a system of organs that includes the heart, blood vessels, and blood which is circulated throughout the entire body of a human or other vertebrate. It includes the cardiovascular system, or vascular system, that consists of the heart and blood vessels (from Greek kardia meaning heart, and from Latin vascula meaning vessels). The circulatory system has two divisions, a systemic circulation or circuit, and a pulmonary circulation or circuit. Some sources use the terms cardiovascular system and vascular system interchangeably with the circulatory system.
The network of blood vessels are the great vessels of the heart including large elastic arteries, and large veins; other arteries, smaller arterioles, capillaries that join with venules (small veins), and other veins. The circulatory system is closed in vertebrates, which means that the blood never leaves the network of blood vessels. Some invertebrates such as arthropods have an open circulatory system. Diploblasts such as sponges, and comb jellies lack a circulatory system.
Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets; it is circulated around the body carrying oxygen and nutrients to the tissues and collecting and disposing of waste materials. Circulated nutrients include proteins and minerals and other components include hemoglobin, hormones, and gases such as oxygen and carbon dioxide. These substances provide nourishment, help the immune system to fight diseases, and help maintain homeostasis by stabilizing temperature and natural pH.
In vertebrates, the lymphatic system is complementary to the circulatory system. The lymphatic system carries excess plasma (filtered from the circulatory system capillaries as interstitial fluid between cells) away from the body tissues via accessory routes that return excess fluid back to blood circulation as lymph. The lymphatic system is a subsystem that is essential for the functioning of the bloo
Document 3:::
The endothelium (: endothelia) is a single layer of squamous endothelial cells that line the interior surface of blood vessels and lymphatic vessels. The endothelium forms an interface between circulating blood or lymph in the lumen and the rest of the vessel wall. Endothelial cells form the barrier between vessels and tissue and control the flow of substances and fluid into and out of a tissue.
Endothelial cells in direct contact with blood are called vascular endothelial cells whereas those in direct contact with lymph are known as lymphatic endothelial cells. Vascular endothelial cells line the entire circulatory system, from the heart to the smallest capillaries.
These cells have unique functions that include fluid filtration, such as in the glomerulus of the kidney, blood vessel tone, hemostasis, neutrophil recruitment, and hormone trafficking. Endothelium of the interior surfaces of the heart chambers is called endocardium. An impaired function can lead to serious health issues throughout the body.
Structure
The endothelium is a thin layer of single flat (squamous) cells that line the interior surface of blood vessels and lymphatic vessels.
Endothelium is of mesodermal origin. Both blood and lymphatic capillaries are composed of a single layer of endothelial cells called a monolayer. In straight sections of a blood vessel, vascular endothelial cells typically align and elongate in the direction of fluid flow.
Terminology
The foundational model of anatomy, an index of terms used to describe anatomical structures, makes a distinction between endothelial cells and epithelial cells on the basis of which tissues they develop from, and states that the presence of vimentin rather than keratin filaments separates these from epithelial cells. Many considered the endothelium a specialized epithelial tissue.
Function
The endothelium forms an interface between circulating blood or lymph in the lumen and the rest of the vessel wall. This forms a barrier between v
Document 4:::
Vascular recruitment is the increase in the number of perfused capillaries in response to a stimulus. I.e., the more you exercise regularly, the more oxygen can reach your muscles.
Vascular recruitment may also be called capillary recruitment.
Vascular recruitment in skeletal muscle
The term «vascular recruitment» or «capillary recruitment» usually refers to the increase in the number perfused capillaries in skeletal muscle in response to a stimulus. The most important stimulus in humans is regular exercise. Vascular recruitment in skeletal muscle is thought to enhance the capillary surface area for oxygen exchange and decrease the oxygen diffusion distance.
Other stimuli are possible. Insulin can act as a stimulus for vascular recruitment in skeletal muscle. This process may also improve glucose delivery to skeletal muscle by increasing the surface area for diffusion. That insulin can act in this way has been proposed based on increases in limb blood flow and skeletal muscle blood volume which occurred after hyperinsulinemia.
The exact extent of capillary recruitment in intact skeletal muscle in response to regular exercise or insulin is unknown, because non-invasive measurement techniques are not yet extremely precise.
Being overweight or obese may negatively interfere with vascular recruitment in skeletal muscle.
Vascular recruitment in the lung
Vascular recruitment in the lung (i.e., in the pulmonary microcirculation) may be noteworthy to healthcare professionals in emergency medicine, because it may increase evidence of lung injury, and increase pulmonary capillary protein leak.
Vascular recruitment in the brain
Vascular recruitment in the brain is thought to lead to new capillaries and increase the cerebral blood flow.
Controversy
The existence of vascular recruitment in response to a stimulus has been disputed by some researchers. However, most researchers accept that vascular recruitment exists.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are the narrowest blood vessels, where oxygen is transferred into body cells?
A. muscles
B. capillaries
C. veins
D. arteries
Answer:
|
|
sciq-466
|
multiple_choice
|
What happens to the temperature of matter as light is absorbed?
|
[
"it stays the same",
"it increases",
"it triples",
"it drops"
] |
B
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 2:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include:
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering: Predicting and analysing the performance of machines
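As a concrete illustration of the heat-transfer idea above, the following sketch estimates how much absorbed radiant energy warms a body using Q = m * c * dT; every numerical value is assumed purely for illustration.

```python
absorbed_power = 50.0    # radiant power absorbed by the object, in watts (assumed)
exposure_time = 120.0    # seconds of illumination (assumed)
mass = 0.5               # kg of material (assumed)
specific_heat = 900.0    # J/(kg*K), roughly that of aluminium (assumed)

energy_absorbed = absorbed_power * exposure_time               # Q, in joules
temperature_rise = energy_absorbed / (mass * specific_heat)    # dT = Q / (m * c)

print(f"Temperature rise ~ {temperature_rise:.1f} K")          # ~13.3 K: absorbing light warms matter
```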
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 3:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 4:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What happens to the temperature of matter as light is absorbed?
A. it stays the same
B. it increases
C. it triples
D. it drops
Answer:
|
|
sciq-2698
|
multiple_choice
|
What is used to measure electric current?
|
[
"chronometer",
"galvanometer",
"anemometer",
"atomizer"
] |
B
|
Relavent Documents:
Document 0:::
Electrical measurements are the methods, devices and calculations used to measure electrical quantities. Measurement of electrical quantities may be done to measure electrical parameters of a system. Using transducers, physical properties such as temperature, pressure, flow, force, and many others can be converted into electrical signals, which can then be conveniently measured and recorded. High-precision laboratory measurements of electrical quantities are used in experiments to determine fundamental physical properties such as the charge of the electron or the speed of light, and in the definition of the units for electrical measurements, with precision in some cases on the order of a few parts per million. Less precise measurements are required every day in industrial practice. Electrical measurements are a branch of the science of metrology.
Measurable independent and semi-independent electrical quantities comprise:
Voltage
Electric current
Electrical resistance and electrical conductance
Electrical reactance and susceptance
Magnetic flux
Electrical charge by the means of electrometer
Partial discharge measurement
Magnetic field by the means of Hall sensor
Electric field
Electrical power by the means of electricity meter
S-matrix by the means of network analyzer (electrical)
Electrical power spectrum by the means of spectrum analyzer
Measurable dependent electrical quantities comprise:
Inductance
Capacitance
Electrical impedance defined as vector sum of electrical resistance and electrical reactance
Electrical admittance, the reciprocal of electrical impedance
Phase between current and voltage and related power factor
Electrical spectral density
Electrical phase noise
Electrical amplitude noise
Transconductance
Transimpedance
Electrical power gain
Voltage gain
Current gain
Frequency
Propagation delay
Document 1:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 2:::
A string galvanometer is a sensitive fast-responding measuring instrument that uses a single fine filament of wire suspended in a strong magnetic field to measure small currents. In use, a strong light source is used to illuminate the fine filament, and the optical system magnifies the movement of the filament allowing it to be observed or recorded by photography.
The principle of the string galvanometer remained in use for electrocardiograms until the advent of electronic vacuum-tube amplifiers in the 1920s.
History
Submarine cable telegraph systems of the late 19th century used a galvanometer to detect pulses of electric current, which could be observed and transcribed into a message. The speed at which pulses could be detected by the galvanometer was limited by its mechanical inertia, and by the inductance of the multi-turn coil used in the instrument. Clément Ader, a French engineer, replaced the coil with a much faster wire or "string", producing the first string galvanometer.
For most telegraphic purposes it was sufficient to detect the existence of a pulse. In 1892 André Blondel described the dynamic properties of an instrument that could measure the wave shape of an electrical impulse, an oscillograph.
Augustus Waller had discovered electrical activity from the heart and produced the first electrocardiogram in 1887, but his equipment was slow, and physiologists worked to find a better instrument. In 1901, Willem Einthoven described the scientific background and potential utility of a string galvanometer, stating "Mr. Ader has already built an instrument with a wire stretched between the poles of a magnet. It was a telegraph receiver." Einthoven developed a sensitive form of string galvanometer that allowed photographic recording of the impulses associated with the heart beat. He was a leader in applying the string galvanometer to physiology and medicine, leading to today's electrocardiography. Einthoven was awarded the 1924 Nobel prize in Physiology or Medicine f
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
Ayrton shunt or universal shunt is a high-resistance shunt used in galvanometers to increase their range without changing the damping.
The circuit is named after its inventor William E. Ayrton. Multirange ammeters that use this technique are more accurate than those using a make-before-break switch. It also eliminates the possibility of having a meter without a shunt, which is a serious concern with make-before-break switches.
The selector switch changes the amount of resistance in parallel with Rm (the meter resistance). The voltage drop across parallel branches is always equal. When all resistances are placed in parallel with Rm, the maximum sensitivity of the ammeter is reached.
Ayrton shunt is rarely used for currents above 10 amperes.
The multiplying powers for the three ranges are m1 = I1/Im, m2 = I2/Im and m3 = I3/Im, where I1, I2 and I3 are the selected current ranges and Im is the meter current.
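As a small illustration of the relation above, the sketch below applies the usual current-divider analysis of an Ayrton shunt, in which the multiplying power m = I/Im works out to (Rshunt + Rm)/Rtap, with Rtap the shunt section between the selected tap and the common terminal. Both this relation and the component values are illustrative assumptions, not figures given in the text.

```python
# Hedged sketch: tap resistances of an Ayrton (universal) shunt from the
# assumed current-divider relation m = I/Im = (R_shunt_total + R_meter) / R_tap.

def ayrton_tap_resistances(r_meter, r_shunt_total, i_meter_fullscale, current_ranges):
    """Return the shunt section R_tap needed for each desired current range."""
    taps = {}
    for i_range in current_ranges:
        m = i_range / i_meter_fullscale              # multiplying power, m = I/Im
        taps[i_range] = (r_shunt_total + r_meter) / m
    return taps

if __name__ == "__main__":
    R_M = 50.0     # galvanometer resistance in ohms (assumed)
    R_SH = 5.0     # total shunt resistance in ohms (assumed)
    I_FS = 1e-3    # meter full-scale current, 1 mA (assumed)
    for i_range, r_tap in ayrton_tap_resistances(R_M, R_SH, I_FS, [0.05, 0.5, 5.0]).items():
        print(f"range {i_range:>5} A -> tap section {r_tap:.3f} ohm")
```

Under this assumed relation the lowest usable range is bounded by Rtap ≤ Rshunt, i.e. m ≥ 1 + Rm/Rshunt, which is one reason the meter resistance matters in the design.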
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is used to measure electric current?
A. chronometer
B. galvanometer
C. anemometer
D. atomizer
Answer:
|
|
sciq-2493
|
multiple_choice
|
What body system gets rid of waste?
|
[
"digestive system",
"nervous system",
"Muscular system",
"excretory system"
] |
D
|
Relavent Documents:
Document 0:::
The excretory system is a passive biological system that removes excess, unnecessary materials from the body fluids of an organism, so as to help maintain internal chemical homeostasis and prevent damage to the body. The dual function of excretory systems is to eliminate the waste products of metabolism and to drain the body of used-up and broken-down components in a liquid and gaseous state. In humans and other amniotes (mammals, birds and reptiles) most of these substances leave the body as urine and, to some degree, exhalation; mammals also expel them through sweating.
Only the organs specifically used for the excretion are considered a part of the excretory system. In the narrow sense, the term refers to the urinary system. However, as excretion involves several functions that are only superficially related, it is not usually used in more formal classifications of anatomy or function.
As most healthy functioning organs produce metabolic and other wastes, the entire organism depends on the function of the system. Failure of one or more of these systems is a serious health condition, for example kidney failure.
Systems
Urinary system
The kidneys are large, bean-shaped organs which are present on each side of the vertebral column in the abdominal cavity. Humans have two kidneys and each kidney is supplied with blood from the renal artery. The kidneys remove from the blood the nitrogenous wastes such as urea, as well as salts and excess water, and excrete them in the form of urine. This is done with the help of millions of nephrons present in the kidney. The filtrated blood is carried away from the kidneys by the renal vein (or kidney vein). The urine from the kidney is collected by the ureter (or excretory tubes), one from each kidney, and is passed to the urinary bladder. The urinary bladder collects and stores the urine until urination. The urine collected in the bladder is passed into the external environment from the body through an opening called
Document 1:::
Urine is a liquid by-product of metabolism in humans and in many other animals. Urine flows from the kidneys through the ureters to the urinary bladder. Urination results in urine being excreted from the body through the urethra.
Cellular metabolism generates many by-products that are rich in nitrogen and must be cleared from the bloodstream, such as urea, uric acid, and creatinine. These by-products are expelled from the body during urination, which is the primary method for excreting water-soluble chemicals from the body. A urinalysis can detect nitrogenous wastes of the mammalian body.
Urine plays an important role in the earth's nitrogen cycle. In balanced ecosystems, urine fertilizes the soil and thus helps plants to grow. Therefore, urine can be used as a fertilizer. Some animals use it to mark their territories. Historically, aged or fermented urine (known as lant) was also used for gunpowder production, household cleaning, tanning of leather and dyeing of textiles.
Human urine and feces are collectively referred to as human waste or human excreta, and are managed via sanitation systems. Livestock urine and feces also require proper management if the livestock population density is high.
Physiology
Most animals have excretory systems for elimination of soluble toxic wastes. In humans, soluble wastes are excreted primarily by the urinary system and, to a lesser extent in terms of urea, removed by perspiration. The urinary system consists of the kidneys, ureters, urinary bladder, and urethra. The system produces urine by a process of filtration, reabsorption, and tubular secretion. The kidneys extract the soluble wastes from the bloodstream, as well as excess water, sugars, and a variety of other compounds. The resulting urine contains high concentrations of urea and other substances, including toxins. Urine flows from the kidneys through the ureter, bladder, and finally the urethra before passing from the body.
Duration
Research looking at the duration
Document 2:::
The organs of Bojanus or Bojanus organs are excretory glands that serve the function of kidneys in some of the molluscs. In other words, these are metanephridia that are found in some molluscs, for example in the bivalves. Some other molluscs have another type of organ for excretion called Keber's organ.
The Bojanus organ is named after Ludwig Heinrich Bojanus, who first described it. The excretory system of a bivalve consists of a pair of kidneys called the organs of Bojanus. These are situated one on each side of the body below the pericardium. Each kidney consists of two parts: (1) a glandular part and (2) a thin-walled ciliated urinary bladder.
Document 3:::
The enteric nervous system (ENS) or intrinsic nervous system is one of the main divisions of the autonomic nervous system (ANS) and consists of a mesh-like system of neurons that governs the function of the gastrointestinal tract. It is capable of acting independently of the sympathetic and parasympathetic nervous systems, although it may be influenced by them. The ENS is nicknamed the "second brain". It is derived from neural crest cells.
The enteric nervous system is capable of operating independently of the brain and spinal cord, but does rely on innervation from the vagus nerve and prevertebral ganglia in healthy subjects. However, studies have shown that the system is operable with a severed vagus nerve. The neurons of the enteric nervous system control the motor functions of the system, in addition to the secretion of gastrointestinal enzymes. These neurons communicate through many neurotransmitters similar to the CNS, including acetylcholine, dopamine, and serotonin. The large presence of serotonin and dopamine in the gut are key areas of research for neurogastroenterologists.
Structure
The enteric nervous system in humans consists of some 500 million neurons (including the various types of Dogiel cells), 0.5% of the number of neurons in the brain, five times as many as the one hundred million neurons in the human spinal cord, and about as many as in the whole nervous system of a cat. The enteric nervous system is embedded in the lining of the gastrointestinal system, beginning in the esophagus and extending down to the anus.
The neurons of the ENS are collected into two types of ganglia: myenteric (Auerbach's) and submucosal (Meissner's) plexuses. Myenteric plexuses are located between the inner and outer layers of the muscularis externa, while submucosal plexuses are located in the submucosa.
Auerbach's plexus
Auerbach's plexus, also known as the myenteric plexus, is a collection of fibers and postganglionic autonomic cell bodies that lie betwe
Document 4:::
A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and is determined by different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism.
Organ and tissue systems
These specific systems are widely studied in human anatomy and are also present in many other animals.
Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm.
Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus.
Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels.
Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine.
Integumentary system: skin, hair, fat, and nails.
Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons.
Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands.
Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. Its functions include immune responses and the development of antibodies.
Immune system: protects the organism from
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What body system gets rid of waste?
A. digestive system
B. nervous system
C. Muscular system
D. excretory system
Answer:
|
|
sciq-9434
|
multiple_choice
|
What would you call a relationship where the bacteria benefit and and the other organism is harmed?
|
[
"pathology",
"symbiotic",
"parasitism",
"fungi"
] |
C
|
Relavent Documents:
Document 0:::
Symbiotic bacteria are bacteria living in symbiosis with another organism or each other. For example, rhizobia living in root nodules of legumes provide nitrogen fixing activity for these plants.
Types of symbiosis
Types of symbiotic relationships are mutualism, commensalism, parasitism, and amensalism.
Endosymbiosis
Endosymbionts live inside other organisms, whether that be in their bodies or cells. The theory of endosymbiosis, also known as symbiogenesis, provides an explanation for the evolution of eukaryotic organisms. According to the theory of endosymbiosis for the origin of eukaryotic cells, scientists believe that eukaryotes originated from the relationship between two or more prokaryotic cells approximately 2.7 billion years ago. It is suggested that specifically the ancestors of mitochondria and chloroplasts entered into an endosymbiotic relationship with another prokaryotic cell, eventually evolving into the eukaryotic cells that people are familiar with today.
Ectosymbiosis
Ectosymbiosis is defined as a symbiotic relationship in which one organism lives on the outside surface of a different organism. For instance, barnacles on whales are an example of an ectosymbiotic relationship where the whale provides the barnacle with a home, a ride, and access to food. The whale is not harmed, but it also does not receive any benefits, so this is also an example of commensalism. An example of ectosymbiotic bacteria is Cutibacterium acnes. These bacteria are involved in a symbiotic relationship with humans, on whose skin they live. Cutibacterium acnes can cause acne when the skin becomes too oily, but they also reduce the skin's susceptibility to skin diseases caused by oxidative stress.
Symbiotic relationships
Certain plants establish a symbiotic relationship with bacteria, enabling them to produce nodules that facilitate the conversion of atmospheric nitrogen to ammonia. In this connection, cytokinins have been found to play a role in the development of root fixing n
Document 1:::
Bacteriology is the branch and specialty of biology that studies the morphology, ecology, genetics and biochemistry of bacteria as well as many other aspects related to them. This subdivision of microbiology involves the identification, classification, and characterization of bacterial species. Because of the similarity of thinking and working with microorganisms other than bacteria, such as protozoa, fungi, and viruses, there has been a tendency for the field of bacteriology to extend into microbiology. The terms were formerly often used interchangeably. However, bacteriology can be classified as a distinct science.
Overview
Definition
Bacteriology is the study of bacteria and their relation to medicine. Bacteriology evolved from physicians needing to apply the germ theory to address concerns about disease spreading in hospitals in the 19th century. The identification and characterization of bacteria associated with diseases led to advances in pathogenic bacteriology. Koch's postulates played a role in identifying the relationships between bacteria and specific diseases. Since then, bacteriology has played a role in successful advances in science such as bacterial vaccines like diphtheria toxoid and tetanus toxoid. Bacteriology can be studied and applied in many sub-fields relating to agriculture, marine biology, water pollution, bacterial genetics, veterinary medicine, biotechnology and others.
Bacteriologists
A bacteriologist is a microbiologist or other trained professional in bacteriology. Bacteriologists are interested in studying and learning about bacteria, as well as using their skills in clinical settings. This includes investigating properties of bacteria such as morphology, ecology, genetics and biochemistry, phylogenetics, genomics and many other areas related to bacteria like disease diagnostic testing. They can also work as medical scientists, veterinary scientists, or diagnostic technicians in locations like clinics, blood banks, hospitals
Document 2:::
Microorganisms engage in a wide variety of social interactions, including cooperation. A cooperative behavior is one that benefits an individual (the recipient) other than the one performing the behavior (the actor). This article outlines the various forms of cooperative interactions (mutualism and altruism) seen in microbial systems, as well as the benefits that might have driven the evolution of these complex behaviors.
Introduction
Microorganisms, or microbes, span all three domains of life – bacteria, archaea, and many unicellular eukaryotes including some fungi and protists. Typically defined as unicellular life forms that can only be observed with a microscope, microorganisms were the first cellular life forms, and were critical for creating the conditions for the evolution of more complex multicellular forms.
Although microbes are too small to see with the naked eye, they represent the overwhelming majority of biological diversity, and thus serve as an excellent system to study evolutionary questions. One such topic that scientists have examined in microbes is the evolution of social behaviors, including cooperation. A cooperative interaction benefits a recipient, and is selected for on that basis. In microbial systems, cells belonging to the same taxa have been documented partaking in cooperative interactions to perform a wide range of complex multicellular behaviors such as dispersal, foraging, construction of biofilms, reproduction, chemical warfare, and signaling. This article will outline the various forms of cooperative interactions seen in microbial systems, as well as the benefits that might have driven the evolution of these complex behaviors.
History
From an evolutionary point of view, a behavior is social if it has fitness consequences for both the individual that performs that behavior (the actor) and another individual (the recipient). Hamilton first categorized social behaviors according to whether the consequences they entail for the actor
Document 3:::
The hologenome theory of evolution recasts the individual animal or plant (and other multicellular organisms) as a community or a "holobiont" – the host plus all of its symbiotic microbes. Consequently, the collective genomes of the holobiont form a "hologenome". Holobionts and hologenomes are structural entities that replace misnomers in the context of host-microbiota symbioses such as superorganism (i.e., an integrated social unit composed of conspecifics), organ, and metagenome. Variation in the hologenome may encode phenotypic plasticity of the holobiont and can be subject to evolutionary changes caused by selection and drift, if portions of the hologenome are transmitted between generations with reasonable fidelity. One of the important outcomes of recasting the individual as a holobiont subject to evolutionary forces is that genetic variation in the hologenome can be brought about by changes in the host genome and also by changes in the microbiome, including new acquisitions of microbes, horizontal gene transfers, and changes in microbial abundance within hosts. Although there is a rich literature on binary host–microbe symbioses, the hologenome concept distinguishes itself by including the vast symbiotic complexity inherent in many multicellular hosts. For recent literature on holobionts and hologenomes published in an open access platform, see the following reference.
Origin
Lynn Margulis coined the term holobiont in her 1991 book Symbiosis as a Source of Evolutionary Innovation: Speciation and Morphogenesis (MIT Press), though this was not in the context of diverse populations of microbes. The term holobiont is derived from the Ancient Greek ὅλος (hólos, "whole"), and the word biont for a unit of life.
In September 1994, Richard Jefferson coined the term hologenome when he introduced the hologenome theory of evolution at a presentation at Cold Spring Harbor Laboratory. At the CSH Symposium and earlier, the unsettling number and diversity of microbes that
Document 4:::
The branches of microbiology can be classified into pure and applied sciences. Microbiology can also be classified based on taxonomy, as in the cases of bacteriology, mycology, protozoology, and phycology. There is considerable overlap between the specific branches of microbiology with each other and with other disciplines, and certain aspects of these branches can extend beyond the traditional scope of microbiology.
In general, the field of microbiology can be divided into the more fundamental branch (pure microbiology) and applied microbiology (biotechnology). In the more fundamental field, the organisms are studied as the subject itself on a deeper (theoretical) level.
Applied microbiology refers to the fields where the micro-organisms are applied in certain processes such as brewing or fermentation. The organisms themselves are often not studied as such, but are applied to sustain certain processes.
Pure microbiology
Bacteriology: the study of bacteria
Mycology: the study of fungi
Protozoology: the study of protozoa
Phycology/algology: the study of algae
Parasitology: the study of parasites
Immunology: the study of the immune system
Virology: the study of viruses
Nematology: the study of nematodes
Microbial cytology: the study of microscopic and submicroscopic details of microorganisms
Microbial physiology: the study of how the microbial cell functions biochemically. Includes the study of microbial growth, microbial metabolism and microbial cell structure
Microbial pathogenesis: the study of pathogens which happen to be microbes
Microbial ecology: the relationship between microorganisms and their environment
Microbial genetics: the study of how genes are organized and regulated in microbes in relation to their cellular functions. Closely related to the field of molecular biology
Cellular microbiology: a discipline bridging microbiology and cell biology
Evolutionary microbiology: the study of the evolution of microbes. This field can be subdivided into:
Micr
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What would you call a relationship where the bacteria benefit and and the other organism is harmed?
A. pathology
B. symbiotic
C. parasitism
D. fungi
Answer:
|
|
sciq-8315
|
multiple_choice
|
Mountain ranges, plateaus, and plains are features of what large landforms?
|
[
"planets",
"States",
"continents",
"Countries"
] |
C
|
Relavent Documents:
Document 0:::
Land cover is the physical material at the surface of Earth. Land covers include grass, asphalt, trees, bare ground, water, etc. "Earth cover" is the expression used by ecologist Frederick Edward Clements; its closest modern equivalent is vegetation. The expression continues to be used by the United States Bureau of Land Management.
There are two primary methods for capturing information on land cover: field survey, and analysis of remotely sensed imagery. Land change models can be built from these types of data to assess changes in land cover over time.
One of the major land cover issues (as with all natural resource inventories) is that every survey defines similarly named categories in different ways. For instance, there are many definitions of "forest"—sometimes within the same organisation—that may or may not incorporate a number of different forest features (e.g., stand height, canopy cover, strip width, inclusion of grasses, and rates of growth for timber production). Areas without trees may be classified as forest cover "if the intention is to re-plant" (UK and Ireland), while areas with many trees may not be labelled as forest "if the trees are not growing fast enough" (Norway and Finland).
Distinction from "land use"
"Land cover" is distinct from "land use", despite the two terms often being used interchangeably. Land use is a description of how people utilize the land and of socio-economic activity. Urban and agricultural land uses are two of the most commonly known land use classes. At any one point or place, there may be multiple and alternate land uses, the specification of which may have a political dimension. The origins of the "land cover/land use" couplet and the implications of their confusion are discussed in Fisher et al. (2005).
Types
The following table gives land cover statistics from the Food and Agriculture Organization (FAO), with 14 classes.
Mapping
Land cover change detection using remote sensing and geospatial data provides baselin
Document 1:::
Mountain research or montology, traditionally also known as orology (from Greek oros ὄρος for 'mountain' and logos λόγος), is a field of research that regionally concentrates on the Earth's surface's part covered by mountain environments.
Mountain areas
Different approaches have been developed to define mountainous areas. While some use an altitudinal difference of 300 m inside an area to define that zone as mountainous, others consider differences from 1000 m or more, depending on the areas' latitude. Additionally, some include steepness to define mountain regions, hence excluding high plateaus (e.g. the Andean Altiplano or the Tibetan Plateau), zones often seen to be mountainous. A more pragmatic but useful definition has been proposed by the Italian Statistics Office ISTAT, which classifies municipalities as mountainous
if at least 80% of their territory is situated above ≥ 600 m above sea level, and/or
if they have an altitudinal difference of 600 m (or more) within their administrative boundaries.
The United Nations Environmental Programme has produced a map of mountain areas worldwide using a combination of criteria (a rough code sketch of these criteria follows the list below), including regions with
elevations from 300 to 1000 m and local elevation range > 300 m;
elevations from 1000 to 1500 m and slope ≥ 5° or local elevation range > 300 m;
elevations from 1500 to 2500 m and slope ≥ 2°;
elevations of 2500 m or more.
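A rough sketch of these criteria as a point-wise test, assuming elevation, slope, and local elevation range are available per location; how the class boundaries are treated (inclusive vs. exclusive) is an assumption the list above does not spell out.

```python
# Hedged sketch: UNEP-style mountain test from the four criteria listed above.

def is_mountain(elevation_m: float, slope_deg: float, local_relief_m: float) -> bool:
    """True if a location satisfies any of the listed mountain criteria."""
    if elevation_m >= 2500:
        return True
    if 1500 <= elevation_m < 2500:
        return slope_deg >= 2
    if 1000 <= elevation_m < 1500:
        return slope_deg >= 5 or local_relief_m > 300
    if 300 <= elevation_m < 1000:
        return local_relief_m > 300
    return False

# A 1200 m site with gentle slopes but 400 m of local relief qualifies;
# an 800 m site with only 150 m of local relief does not.
print(is_mountain(1200, slope_deg=1.0, local_relief_m=400))   # True
print(is_mountain(800, slope_deg=10.0, local_relief_m=150))   # False
```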
Focus
Broader definition
In a broader sense, mountain research is considered any research in mountain regions: for instance disciplinary studies on Himalayan plants, Andean rocks, Alpine cities, or Carpathian people. It is comparable to research that concentrates on the Arctic and Antarctic (polar research) or coasts (coastal research).
Narrower definition
In a narrower sense, mountain research focuses on mountain regions, their description and the explanation of the human-environment interaction in (positive) and the sustainable development of (normative) these areas. So-defined mountain rese
Document 2:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
Document 3:::
Vegetation classification is the process of classifying and mapping the vegetation over an area of the earth's surface. Vegetation classification is often performed by state based agencies as part of land use, resource and environmental management. Many different methods of vegetation classification have been used. In general, there has been a shift from structural classification used by forestry for the mapping of timber resources, to floristic community mapping for biodiversity management. Whereas older forestry-based schemes considered factors such as height, species and density of the woody canopy, floristic community mapping shifts the emphasis onto ecological factors such as climate, soil type and floristic associations. Classification mapping is usually now done using geographic information systems (GIS) software.
Classification schemes
The following are some important classification schemes.
Köppen (1884)
Although this scheme is in fact a climate classification, it has a deep relationship with vegetation studies:
Class A
Tropical rainforest (Af)
Tropical monsoon (Am)
Tropical savanna (Aw, As)
Class B
Desert (BWh, BWk)
Semi-arid (BSh, BSk)
Class C
Humid subtropical (Cfa, Cwa)
Oceanic (Cfb, Cwb, Cfc, Cwc)
Mediterranean (Csa, Csb, Csc)
Class D
Humid continental (Dfa, Dwa, Dfb, Dwb, Dsa, Dsb)
Subarctic (Dfc, Dwc, Dfd, Dwd, Dsc, Dsd)
Class E
Tundra (ET)
Ice cap (EF)
Alpine (ET, EF)
Wagner & von Sydow (1888)
Wagner & von Sydow (1888) scheme: Vegetationsgürtel (vegetation belts):
Tundren (tundra)
Hochgebirgsflora (mountain flora)
Vegetationsarme Gebiete (Wüsten) (vegetation poor areas [deserts])
der gemässigten zone (the temperate zone)
Grasland (prairie)
Vorherrschend Nadelwald (mainly coniferous forest)
Wald (Laub und Nadelwald) und Kulturland (forest [deciduous and coniferous forest] and cultivated land)
in tropischen und subtropischen Gebieten (in tropical and subtropical areas)
Grasland (prairie)
Wald und Kulturland (forest and cul
Document 4:::
There are 62 named Ecological Systems found in Montana. These systems are described in the Montana Field Guides – Ecological Systems of Montana.
About
An ecosystem is a biological environment consisting of all the organisms living in a particular area, as well as all the nonliving, physical components of the environment with which the organisms interact, such as air, soil, water and sunlight. It is all the organisms in a given area, along with the nonliving (abiotic) factors with which they interact; a biological community and its physical environment. As stated in an article from Montana State University in their Institute on Ecosystems; "An ecosystem can be small, such as the area under a pine tree or a single hot spring in Yellowstone National Park, or it can be large, such as the Rocky Mountains, the rainforest or the Antarctic Ocean." The Montana Fish, Wildlife and Parks (FWP) have shared their views on Montana's Main Ecosystems as montane forest, intermountain grasslands, plains grasslands and shrub grasslands. The Montana Agricultural Experiment Station (MAES) categorized Montana's ecosystems based on the different rangelands. They have recognized 22 different ecosystems whereas the Montana Natural Heritage Program named 62 ecosystems for the entire state.
Forest and Woodland Systems
Northern Rocky Mountain Mesic Montane Mixed Conifer Forest
Rocky Mountain Subalpine Mesic Spruce-Fir Forest and Woodland
Northwestern Great Plains - Black Hills Ponderosa Pine Woodland and Savanna
Northern Rocky Mountain Dry-Mesic Montane Mixed Conifer Forest
Rocky Mountain Foothill Limber Pine - Juniper Woodland
Northern Rocky Mountain Foothill Conifer Wooded Steppe
Rocky Mountain Lodgepole Pine Forest
Middle Rocky Mountain Montane Douglas-Fir Forest and Woodland
Northern Rocky Mountain Ponderosa Pine Woodland and Savanna
Rocky Mountain Poor Site Lodgepole Pine Forest
Rocky Mountain Subalpine Dry-Mesic Spruce-Fir Forest and Woodland
Northern Rocky Mountain Subalpin
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Mountain ranges, plateaus, and plains are features of what large landforms?
A. planets
B. States
C. continents
D. Countries
Answer:
|
|
sciq-7864
|
multiple_choice
|
What begins when an oogonium with the diploid number of chromosomes undergoes mitosis?
|
[
"gametogenesis",
"oogenesis",
"morphogenesis",
"germination"
] |
B
|
Relavent Documents:
Document 0:::
Microgametogenesis is the process in plant reproduction where a microgametophyte develops in a pollen grain to the three-celled stage of its development. In flowering plants it occurs with a microspore mother cell inside the anther of the plant.
When the microgametophyte is first formed inside the pollen grain four sets of fertile cells called sporogenous cells are apparent. These cells are surrounded by a wall of sterile cells called the tapetum, which supplies food to the cell and eventually becomes the cell wall for the pollen grain. These sets of sporogenous cells eventually develop into diploid microspore mother cells. These microspore mother cells, also called microsporocytes, then undergo meiosis and become four microspore haploid cells. These new microspore cells then undergo mitosis and form a tube cell and a generative cell. The generative cell then undergoes mitosis one more time to form two male gametes, also called sperm.
See also
Gametogenesis
Document 1:::
An immature ovum is a cell that goes through the process of oogenesis to become an ovum. It can be an oogonium, an oocyte, or an ootid. An oocyte, in turn, can be either primary or secondary, depending on how far it has come in its process of meiosis.
Oogonium
Oogonia are the cells that turn into primary oocytes in oogenesis. They are diploid, i.e. they contain both sets of homologous chromosomes.
Oogonia are created in early embryonic life. All of them have turned into primary oocytes by late fetal age.
Primary oocyte
The primary oocyte is defined by its process of ootidogenesis, which is meiosis. It has duplicated its DNA, so that each chromosome has two chromatids, i.e. 92 chromatids all in all (4C).
When meiosis I is completed, one secondary oocyte and one polar body are created.
Primary oocytes have been created in late fetal life. This is the stage where immature ova spend most of their lifetime, more specifically in diplotene of prophase I of meiosis. The halt is called dictyate. Most degenerate by atresia, but a few go through ovulation, and that's the trigger to the next step. Thus, an immature ovum can spend up to ~55 years as a primary oocyte (the last ovulation before menopause).
Secondary oocyte
The secondary oocyte is the cell that is formed by meiosis I in oogenesis. Thus, it has only one of each pair of homologous chromosomes. In other words, it is haploid. However, each chromosome still has two chromatids, making a total of 46 chromatids (1N but 2C). The secondary oocyte continues the second stage of meiosis (meiosis II), and the daughter cells are one ootid and one polar body.
Secondary oocytes are the immature ovum shortly after ovulation, to fertilization, where it turns into an ootid. Thus, the time as a secondary oocyte is measured in days.
Document 2:::
Megagametogenesis is the process of maturation of the female gametophyte, or megagametophyte, in plants. During the process of megagametogenesis, the megaspore, which arises from megasporogenesis, develops into the embryo sac, which is where the female gamete is housed. These megaspores then develop into the haploid female gametophytes. This occurs within the ovule, which is housed inside the ovary.
The Process
Prior to megagametogenesis, a megaspore mother cell within the developing ovule undergoes meiosis during a process called megasporogenesis. Next, three out of four megaspores disintegrate, leaving only the megaspore that will undergo megagametogenesis. The following steps are shown in Figure 1, and detailed below.
The remaining megaspore undergoes a round of mitosis. This results in a structure with two nuclei, also called a binucleate embryo sac.
The two nuclei migrate to opposite sides of the embryo sac.
Each haploid nucleus then undergoes two rounds of mitosis which creates 4 haploid nuclei on each end of the embryo sac.
One nucleus from each set of 4 migrates to the center of the embryo sac. These form the binucleate endosperm mother cell. This leaves three remaining nuclei at the micropylar end and three remaining nuclei at the antipodal end. The nuclei at the micropylar end develop into an egg cell and two synergid cells, which lie near the micropyle, an opening that allows the pollen tube to enter the structure. The nuclei at the antipodal end are simply known as the antipodal cells. These cells are involved with nourishing the embryo, but often undergo programmed cell death before fertilization occurs.
Cell plates form around the antipodal nuclei, egg cell, and synergid cells.
Variations
Plants exhibit three main types of megagametogenesis. The number of haploid nuclei in the functional megaspore that is involved in megagametogenesis is the main difference between these three types.
Monosporic
The most common type of megagametogenesis, monosporic megagametogenesis, is outlined a
Document 3:::
Gametogenesis is a biological process by which diploid or haploid precursor cells undergo cell division and differentiation to form mature haploid gametes. Depending on the biological life cycle of the organism, gametogenesis occurs by meiotic division of diploid gametocytes into various gametes, or by mitosis. For example, plants produce gametes through mitosis in gametophytes. The gametophytes grow from haploid spores after sporic meiosis. The existence of a multicellular, haploid phase in the life cycle between meiosis and gametogenesis is also referred to as alternation of generations.
In other words, gametogenesis is the biological process in which haploid or diploid precursor cells divide to create mature haploid gametes. Depending on an organism's biological life cycle, it can take place either through mitosis or through the meiotic division of diploid gametocytes into gametes. For instance, gametophytes in plants undergo mitosis to produce gametes. Males and females of a species have different forms of gametogenesis.
In animals
Animals produce gametes directly through meiosis from diploid mother cells in organs called gonads (testes in males and ovaries in females). In mammalian germ cell development, primordial germ cells differentiate from pluripotent cells during early development and later give rise to the sexually dimorphic gametes. Males and females of a species that reproduce sexually have different forms of gametogenesis:
spermatogenesis (male): Immature germ cells are produced in a man's testes. To mature into sperm, these immature germ cells, or spermatogonia, go through spermatogenesis beginning in adolescence. Spermatogonia are diploid cells that become larger as they divide through mitosis, becoming primary spermatocytes. These diploid cells undergo meiotic division to create secondary spermatocytes, which undergo a second meiotic division to produce immature sperm, or spermatids. These spermatids undergo spermiogenesis in order to develop into sperm. LH, FSH, GnRH
Document 4:::
Alternation of generations (also known as metagenesis or heterogenesis) is the predominant type of life cycle in plants and algae. In plants both phases are multicellular: the haploid sexual phase – the gametophyte – alternates with a diploid asexual phase – the sporophyte.
A mature sporophyte produces haploid spores by meiosis, a process which reduces the number of chromosomes to half, from two sets to one. The resulting haploid spores germinate and grow into multicellular haploid gametophytes. At maturity, a gametophyte produces gametes by mitosis, the normal process of cell division in eukaryotes, which maintains the original number of chromosomes. Two haploid gametes (originating from different organisms of the same species or from the same organism) fuse to produce a diploid zygote, which divides repeatedly by mitosis, developing into a multicellular diploid sporophyte. This cycle, from gametophyte to sporophyte (or equally from sporophyte to gametophyte), is the way in which all land plants and most algae undergo sexual reproduction.
The relationship between the sporophyte and gametophyte phases varies among different groups of plants. In the majority of algae, the sporophyte and gametophyte are separate independent organisms, which may or may not have a similar appearance. In liverworts, mosses and hornworts, the sporophyte is less well developed than the gametophyte and is largely dependent on it. Although moss and hornwort sporophytes can photosynthesise, they require additional photosynthate from the gametophyte to sustain growth and spore development and depend on it for supply of water, mineral nutrients and nitrogen. By contrast, in all modern vascular plants the gametophyte is less well developed than the sporophyte, although their Devonian ancestors had gametophytes and sporophytes of approximately equivalent complexity. In ferns the gametophyte is a small flattened autotrophic prothallus on which the young sporophyte is briefly dependent for its n
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What begins when an oogonium with the diploid number of chromosomes undergoes mitosis?
A. gametogenesis
B. oogenesis
C. morphogenesis
D. germination
Answer:
|
|
sciq-11100
|
multiple_choice
|
Changing the frequency of sound waves, will change the _________ of the sound of a musical instrument?
|
[
"direction",
"velocity",
"pitch",
"distance"
] |
C
|
Relavent Documents:
Document 0:::
Particle displacement or displacement amplitude is a measurement of distance of the movement of a sound particle from its equilibrium position in a medium as it transmits a sound wave.
The SI unit of particle displacement is the metre (m). In most cases this is a longitudinal wave of pressure (such as sound), but it can also be a transverse wave, such as the vibration of a taut string. In the case of a sound wave travelling through air, the particle displacement is evident in the oscillations of air molecules with, and against, the direction in which the sound wave is travelling.
A particle of the medium undergoes displacement according to the particle velocity of the sound wave traveling through the medium, while the sound wave itself moves at the speed of sound, equal to about 343 m/s in air at 20 °C.
Mathematical definition
Particle displacement, denoted δ, is given by the time integral of the particle velocity,
δ = ∫ v dt,
where v is the particle velocity.
Progressive sine waves
The particle displacement of a progressive sine wave is given by
δ(r, t) = δ_m cos(k · r − ω t + φ_δ),
where
δ_m is the amplitude of the particle displacement;
φ_δ is the phase shift of the particle displacement;
k is the angular wavevector;
ω is the angular frequency.
It follows that the particle velocity and the sound pressure along the direction of propagation of the sound wave x are given by
v(r, t) = v_m cos(k · r − ω t + φ_v),
p(r, t) = p_m cos(k · r − ω t + φ_p),
where
v_m is the amplitude of the particle velocity;
φ_v is the phase shift of the particle velocity;
p_m is the amplitude of the acoustic pressure;
φ_p is the phase shift of the acoustic pressure.
Taking the Laplace transforms of v and p with respect to time yields
Since , the amplitude of the specific acoustic impedance is given by
Consequently, the amplitude of the particle displacement is related to those of the particle velocity and the sound pressure by
δ_m = v_m / ω = p_m / (ω ρ c),
where ρ is the density of the medium and c is the speed of sound.
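A minimal numeric sketch of that last relation, assuming a plane progressive wave in air at roughly 20 °C (ρ ≈ 1.2 kg/m³, c ≈ 343 m/s); the chosen tone, 1 kHz at 1 Pa (about 94 dB SPL), is an illustrative assumption.

```python
# Hedged sketch: particle displacement amplitude from sound pressure amplitude,
# using delta_m = p_m / (rho * c * omega) for a plane progressive sine wave.
import math

RHO_AIR = 1.2    # density of air, kg/m^3 (assumed, ~20 degC)
C_AIR = 343.0    # speed of sound in air, m/s (~20 degC)

def displacement_amplitude(p_m, frequency_hz, rho=RHO_AIR, c=C_AIR):
    """Particle displacement amplitude in metres for pressure amplitude p_m in pascals."""
    omega = 2 * math.pi * frequency_hz
    v_m = p_m / (rho * c)     # particle velocity amplitude, from p_m = rho*c*v_m
    return v_m / omega        # displacement amplitude, from v_m = omega*delta_m

# A 1 kHz tone at 1 Pa displaces air particles by well under a micrometre.
print(f"{displacement_amplitude(1.0, 1000.0):.3e} m")
```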
See also
Sound
Sound particle
Particle velocity
Particle acceleration
Document 1:::
Whenever a wave forms through a medium/object (organ pipe) with a closed/open end, there is a chance of error in the formation of the wave, i.e. it may not actually start from the opening of the object but instead before the opening, thus resulting in an error when studying it theoretically. Hence an end correction is sometimes required to appropriately study its properties. The end correction depends on the radius of the object.
An acoustic pipe, such as an organ pipe, marimba, or flute resonates at a specific pitch or frequency. Longer pipes resonate at lower frequencies, producing lower-pitched sounds. The details of acoustic resonance are taught in many elementary physics classes. In an ideal tube, the wavelength of the sound produced is directly proportional to the length of the tube. A tube which is open at one end and closed at the other produces sound with a wavelength equal to four times the length of the tube. A tube which is open at both ends produces sound whose wavelength is just twice the length of the tube. Thus, when a Boomwhacker with two open ends is capped at one end, the pitch produced by the tube goes down by one octave.
The analysis above applies only to an ideal tube, of zero diameter. When designing an organ or Boomwhacker, the diameter of the tube must be taken into account. In acoustics, end correction is a short distance applied or added to the actual length of a resonance pipe, in order to calculate the precise resonant frequency of the pipe. The pitch of a real tube is lower than the pitch predicted by the simple theory. A finite diameter pipe appears to be acoustically somewhat longer than its physical length.
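A small sketch of the ideal-tube relations above (wavelength 2L for an open–open pipe, 4L for an open–closed pipe), with an optional end correction added per open end; the correction of roughly 0.6 times the tube radius per open end is a commonly quoted assumption, not a value given in the text.

```python
# Hedged sketch: fundamental frequency of a pipe from the ideal-tube wavelength
# relations, with an assumed end correction of ~0.6*radius per open end.
C_SOUND = 343.0   # speed of sound in air, m/s (approx., 20 degC)

def fundamental_hz(length_m, radius_m, open_both_ends=True,
                   end_correction_per_open_end=0.6):
    n_open = 2 if open_both_ends else 1
    effective_length = length_m + n_open * end_correction_per_open_end * radius_m
    wavelength = 2 * effective_length if open_both_ends else 4 * effective_length
    return C_SOUND / wavelength

length, radius = 0.60, 0.02   # a 60 cm tube of 2 cm radius (illustrative)
print(fundamental_hz(length, radius, open_both_ends=True))    # open at both ends
print(fundamental_hz(length, radius, open_both_ends=False))   # capped at one end: roughly an octave lower
```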
A theoretical basis for computation of the end correction is the radiation acoustic impedance of a circular piston. This impedance represents the ratio of acoustic pressure at the piston, divided by the flow rate induced by it. The air speed is typically assumed to be uniform across the tube end. This is a good approximation,
Document 2:::
A sine wave, sinusoidal wave, or sinusoid (symbol: ∿) is a periodic wave whose waveform (shape) is the trigonometric sine function.
In mechanics, as a linear motion over time, this is simple harmonic motion; as rotation, it corresponds to uniform circular motion.
Sine waves occur often in physics, including wind waves, sound waves, and light waves.
In engineering, signal processing, and mathematics, Fourier analysis decomposes general functions into a sum of sine waves of various frequencies.
When any two sine waves of the same frequency (but arbitrary phase) are linearly combined, the result is another sine wave of the same frequency; this property is unique among periodic waves. Conversely, if some phase is chosen as a zero reference, a sine wave of arbitrary phase can be written as the linear combination of two sine waves with phases of zero and a quarter cycle, the sine and cosine components, respectively. (In this context it can be helpful to call waves of arbitrary phase sinusoids, to avoid confusion.)
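A quick numerical check of that claim, using nothing beyond standard trigonometric identities: two sinusoids of the same frequency but arbitrary amplitudes and phases are summed, and the sum matches a single sinusoid of the same frequency at every sampled time.

```python
# Hedged sketch: the sum of two same-frequency sine waves is again a sine wave
# of that frequency, with amplitude/phase found from the sine/cosine components.
import math

f = 440.0             # common frequency in Hz (illustrative)
a1, p1 = 1.0, 0.3     # amplitude and phase of wave 1 (arbitrary)
a2, p2 = 0.7, 2.1     # amplitude and phase of wave 2 (arbitrary)

s = a1 * math.cos(p1) + a2 * math.cos(p2)   # coefficient of sin(2*pi*f*t)
c = a1 * math.sin(p1) + a2 * math.sin(p2)   # coefficient of cos(2*pi*f*t)
a_sum, p_sum = math.hypot(s, c), math.atan2(c, s)

for k in range(5):                           # spot-check a few times within one period
    t = k / (7 * f)
    lhs = a1 * math.sin(2*math.pi*f*t + p1) + a2 * math.sin(2*math.pi*f*t + p2)
    rhs = a_sum * math.sin(2*math.pi*f*t + p_sum)
    assert abs(lhs - rhs) < 1e-12
print(f"sum is one sinusoid at {f} Hz: amplitude {a_sum:.4f}, phase {p_sum:.4f} rad")
```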
Audio example
A sine wave represents a single frequency with no harmonics and is considered an acoustically pure tone. Adding sine waves of different frequencies results in a different waveform. Presence of higher harmonics in addition to the fundamental causes variation in the timbre, which is the reason why the same musical pitch played on different instruments sounds different.
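A short illustration of the point about harmonics and timbre (the fundamental and harmonic amplitudes below are illustrative assumptions): adding harmonics at integer multiples of a fundamental changes the waveform's shape, but the waveform still repeats with the fundamental's period, so the perceived pitch stays at the fundamental frequency.

```python
# Hedged sketch: a 220 Hz fundamental plus harmonics still repeats every 1/220 s,
# so the pitch stays at 220 Hz while the timbre (waveform shape) changes.
import math

F0 = 220.0                               # fundamental frequency (illustrative)
HARMONICS = {1: 1.0, 2: 0.5, 3: 0.25}    # harmonic number -> relative amplitude

def waveform(t):
    return sum(a * math.sin(2 * math.pi * n * F0 * t) for n, a in HARMONICS.items())

period = 1.0 / F0
for k in range(4):                       # spot-check that the waveform is F0-periodic
    t = 0.123e-3 + k * 0.31e-3
    assert abs(waveform(t) - waveform(t + period)) < 1e-9
print("waveform repeats with period 1/F0 -> same pitch, different timbre")
```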
Sine wave as a function of time
Sine waves that are only a function of time can be represented by the form:
y(t) = A sin(2πft + φ) = A sin(ωt + φ)
where:
A, amplitude, the peak deviation of the function from zero.
f, ordinary frequency, the number of oscillations (cycles) that occur each second of time.
ω = 2πf, angular frequency, the rate of change of the function argument in units of radians per second.
φ, phase, specifies (in radians) where in its cycle the oscillation is at t = 0.
When φ is non-zero, the entire waveform appears to be shifted backwards in time by the amount φ/ω seconds. A negative value represents a delay
Document 3:::
Musical acoustics or music acoustics is a multidisciplinary field that combines knowledge from physics, psychophysics, organology (classification of the instruments), physiology, music theory, ethnomusicology, signal processing and instrument building, among other disciplines. As a branch of acoustics, it is concerned with researching and describing the physics of music – how sounds are employed to make music. Examples of areas of study are the function of musical instruments, the human voice (the physics of speech and singing), computer analysis of melody, and in the clinical use of music in music therapy.
The pioneer of music acoustics was Hermann von Helmholtz, a German polymath of the 19th century who was an influential physician, physicist, physiologist, musician, mathematician and philosopher. His book On the Sensations of Tone as a Physiological Basis for the Theory of Music is a revolutionary compendium of several studies and approaches that provided a completely new perspective to music theory, musical performance, music psychology and the physical behaviour of musical instruments.
Methods and fields of study
The physics of musical instruments
Frequency range of music
Fourier analysis
Computer analysis of musical structure
Synthesis of musical sounds
Music cognition, based on physics (also known as psychoacoustics)
Physical aspects
Whenever two different pitches are played at the same time, their sound waves interact with each other – the highs and lows in the air pressure reinforce each other to produce a different sound wave. Any repeating sound wave that is not a sine wave can be modeled by many different sine waves of the appropriate frequencies and amplitudes (a frequency spectrum). In humans the hearing apparatus (composed of the ears and brain) can usually isolate these tones and hear them distinctly. When two or more tones are played at once, a variation of air pressure at the ear "contains" the pitches of each, and the ear and/or brain isolat
Document 4:::
In a compressible sound transmission medium - mainly air - air particles get an accelerated motion: the particle acceleration or sound acceleration with the symbol a in metre/second2. In acoustics or physics, acceleration (symbol: a) is defined as the rate of change (or time derivative) of velocity. It is thus a vector quantity with dimension length/time2. In SI units, this is m/s2.
To accelerate an object (air particle) is to change its velocity over a period. Acceleration is defined technically as "the rate of change of velocity of an object with respect to time" and is given by the equation

a = dv/dt

where
a is the acceleration vector
v is the velocity vector expressed in m/s
t is time expressed in seconds.
This equation gives a the units of m/(s·s), or m/s2 (read as "metres per second per second", or "metres per second squared").
An alternative equation is:

ā = (v − u) / Δt

where
ā is the average acceleration (m/s2)
u is the initial velocity (m/s)
v is the final velocity (m/s)
Δt is the time interval (s)
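As a quick worked sketch of the average-acceleration formula just given (the velocities and time interval below are arbitrary illustrative numbers):

def average_acceleration(u, v, dt):
    """Average acceleration in m/s^2 from initial velocity u to final velocity v (m/s) over dt seconds."""
    return (v - u) / dt

# An object accelerating from rest to 30 m/s in 6 s:
print(average_acceleration(0.0, 30.0, 6.0))   # 5.0 m/s^2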
Transverse acceleration (perpendicular to velocity) causes change in direction. If it is constant in magnitude and changing in direction with the velocity, we get a circular motion. For this centripetal acceleration we have

a = v²/r = ω²r

where v is the speed, ω the angular velocity and r the radius of the circular path.
One common unit of acceleration is g-force, one g being the acceleration caused by the gravity of Earth.
In classical mechanics, acceleration is related to force and mass (assumed to be constant) by way of Newton's second law:
Equations in terms of other measurements
The particle acceleration a (in m/s2) of the air particles in a plane sound wave is:

a = δ·ω² = v·ω

where δ is the particle displacement, v the particle velocity and ω the angular frequency.
See also
Sound
Sound particle
Particle displacement
Particle velocity
External links
Relationships of acoustic quantities associated with a plane progressive acoustic sound wave - pdf
Acoustics
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Changing the frequency of sound waves will change the _________ of the sound of a musical instrument?
A. direction
B. velocity
C. pitch
D. distance
Answer:
|
|
sciq-9895
|
multiple_choice
|
What "apparatus" is responsible for sorting, modifying, and shipping off the products that come from the rough endoplasmic reticulum?
|
[
"receptor apparatus",
"golgi apparatus",
"plasma apparatus",
"secretion apparatus"
] |
B
|
Relavent Documents:
Document 0:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than those of the later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
Document 1:::
Paramural bodies are membranous or vesicular structures located between the cell walls and cell membranes of plant and fungal cells. When these are continuous with the cell wall, they are termed lomasomes, while they are referred to as plasmalemmasomes if associated with the plasmalemma.
Function
While their function has not yet been studied in great detail, it has been speculated that due to the morphological similarity of paramural bodies to the exosomes produced by mammalian cells, they may perform similar functions such as membrane vesicle trafficking between cells. Current evidence suggests that, like exosomes, paramural bodies are derived from multivesicular bodies.
See also
Exosome
Endosome
Golgi apparatus
Document 2:::
The endoplasmic reticulum membrane protein complex (EMC) is a putative endoplasmic reticulum-resident membrane protein (co-)chaperone. The EMC is evolutionarily conserved in eukaryotes (animals, plants, and fungi), and its initial appearance might reach back to the last eukaryotic common ancestor (LECA). Many aspects of mEMC biology and molecular function remain to be studied.
Composition and structure
The EMC consists of up to 10 subunits (EMC1 - EMC4, MMGT1, EMC6 - EMC10), of which only two (EMC8/9) are homologous proteins. Seven out of ten (EMC1, EMC3, EMC4, MMGT1, EMC6, EMC7, EMC10) subunits are predicted to contain at least one transmembrane domain (TMD), whereas EMC2, EMC8 and EMC9 do not contain any predicted transmembrane domains and are therefore likely to interact with the rest of the EMC on the cytosolic face of the endoplasmic reticulum (ER). EMC proteins are thought to be present in the mature complex in a 1:1 stoichiometry.
Subunit primary structure
The majority of EMC proteins (EMC1/3/4/MMGT1/6/7/10) contain at least one predicted TMD. EMC1, EMC7 and EMC10 contain an N-terminal signal sequence.
EMC1
EMC1, also known as KIAA0090, contains a single TMD (aa 959-979) and Pyrroloquinoline quinone (PQQ)-like repeats (aa 21-252), which could form a β-propeller domain. The TMD is part of a larger domain (DUF1620). The functions of the PQQ and DUF1620 domains in EMC1 remain to be determined.
EMC2
EMC2 (TTC35) harbours three tetratricopeptide repeats (TPR1/2/3). TPRs have been shown to mediate protein-protein interactions and can be found in a large variety of proteins of diverse function. The function of TPRs in EMC2 is unknown.
EMC8 and EMC9
Document 3:::
Endoplasm generally refers to the inner (often granulated), dense part of a cell's cytoplasm. This is opposed to the ectoplasm which is the outer (non-granulated) layer of the cytoplasm, which is typically watery and immediately adjacent to the plasma membrane. The nucleus is separated from the endoplasm by the nuclear envelope. The different makeups/viscosities of the endoplasm and ectoplasm contribute to the amoeba's locomotion through the formation of a pseudopod. However, other types of cells have cytoplasm divided into endo- and ectoplasm. The endoplasm, along with its granules, contains water, nucleic acids, amino acids, carbohydrates, inorganic ions, lipids, enzymes, and other molecular compounds. It is the site of most cellular processes as it houses the organelles that make up the endomembrane system, as well as those that stand alone. The endoplasm is necessary for most metabolic activities, including cell division.
The endoplasm, like the cytoplasm, is far from static. It is in a constant state of flux through intracellular transport, as vesicles are shuttled between organelles and to/from the plasma membrane. Materials are regularly both degraded and synthesized within the endoplasm based on the needs of the cell and/or organism. Some components of the cytoskeleton run throughout the endoplasm, though most are concentrated in the ectoplasm - towards the cell's edges, closer to the plasma membrane. The endoplasm's granules are suspended in cytosol.
Granules
The term granule refers to a small particle within the endoplasm, typically the secretory vesicles. The granule is the defining characteristic of the endoplasm, as they are typically not present within the ectoplasm. These offshoots of the endomembrane system are enclosed by a phospholipid bilayer and can fuse with other organelles as well as the plasma membrane. Their membrane is only semipermeable and allows them to house substances that could be harmful to the cell if they were allowed to flow fre
Document 4:::
In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from same type cells to act together in a function. Tissues of different types combine to form an organ which has a specific function. The intestinal wall for example is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system.
An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs.
The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body.
Animals
Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What "apparatus" is responsible for sorting, modifying, and shipping off the products that come from the rough endoplasmic reticulum?
A. receptor apparatus
B. golgi apparatus
C. plasma apparatus
D. secretion apparatus
Answer:
|
|
sciq-7858
|
multiple_choice
|
What uses oxygen gas to break apart the carbon-hydrogen bonds in glucose and release their energy?
|
[
"classical respiration",
"energetic respiration",
"cellular respiration",
"electromagnetic respiration"
] |
C
|
Relavent Documents:
Document 0:::
Cellular respiration is the process by which biological fuels are oxidized in the presence of an inorganic electron acceptor, such as oxygen, to drive the bulk production of adenosine triphosphate (ATP), which contains energy. Cellular respiration may be described as a set of metabolic reactions and processes that take place in the cells of organisms to convert chemical energy from nutrients into ATP, and then release waste products.
Cellular respiration is a vital process that happens in the cells of living organisms, including humans, plants, and animals. It's how cells produce energy to power all the activities necessary for life.
The reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, producing large amounts of energy (ATP). Respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. The overall reaction occurs in a series of biochemical steps, some of which are redox reactions. Although cellular respiration is technically a combustion reaction, it is an unusual one because of the slow, controlled release of energy from the series of reactions.
Nutrients that are commonly used by animal and plant cells in respiration include sugar, amino acids and fatty acids, and the most common oxidizing agent is molecular oxygen (O2). The chemical energy stored in ATP (the bond of its third phosphate group to the rest of the molecule can be broken allowing more stable products to form, thereby releasing energy for use by the cell) can then be used to drive processes requiring energy, including biosynthesis, locomotion or transportation of molecules across cell membranes.
Aerobic respiration
Aerobic respiration requires oxygen (O2) in order to create ATP. Although carbohydrates, fats and proteins are consumed as reactants, aerobic respiration is the preferred method of pyruvate production in glycolysis, and requires that pyruvate be transported into the mitochondria in order to be fully oxidized by the citric acid cycle.
Document 1:::
Digestion is the breakdown of carbohydrates to yield an energy-rich compound called ATP. The production of ATP is achieved through the oxidation of glucose molecules. In oxidation, the electrons are stripped from a glucose molecule to reduce NAD+ and FAD. NAD+ and FAD possess a high energy potential to drive the production of ATP in the electron transport chain. ATP production occurs in the mitochondria of the cell. There are two methods of producing ATP: aerobic and anaerobic.
In aerobic respiration, oxygen is required. Using oxygen increases ATP production from 4 ATP molecules to about 30 ATP molecules.
In anaerobic respiration, oxygen is not required. When oxygen is absent, the generation of ATP continues through fermentation. There are two types of fermentation: alcohol fermentation and lactic acid fermentation.
There are several different types of carbohydrates: polysaccharides (e.g., starch, amylopectin, glycogen, cellulose), monosaccharides (e.g., glucose, galactose, fructose, ribose) and the disaccharides (e.g., sucrose, maltose, lactose).
Glucose reacts with oxygen in the following reaction, C6H12O6 + 6O2 → 6CO2 + 6H2O. Carbon dioxide and water are waste products, and the overall reaction is exothermic.
The reaction of glucose with oxygen releasing energy in the form of molecules of ATP is therefore one of the most important biochemical pathways found in living organisms.
Glycolysis
Glycolysis, which means “sugar splitting,” is the initial process in the cellular respiration pathway. Glycolysis can be either an aerobic or anaerobic process. When oxygen is present, glycolysis continues along the aerobic respiration pathway. If oxygen is not present, then ATP production is restricted to anaerobic respiration. The location where glycolysis, aerobic or anaerobic, occurs is in the cytosol of the cell. In glycolysis, a six-carbon glucose molecule is split into two three-carbon molecules called pyruvate. These carbon molecules are oxidized into NADH and AT
Document 2:::
Ethanol fermentation, also called alcoholic fermentation, is a biological process which converts sugars such as glucose, fructose, and sucrose into cellular energy, producing ethanol and carbon dioxide as by-products. Because yeasts perform this conversion in the absence of oxygen, alcoholic fermentation is considered an anaerobic process. It also takes place in some species of fish (including goldfish and carp) where (along with lactic acid fermentation) it provides energy when oxygen is scarce.
Ethanol fermentation is the basis for alcoholic beverages, ethanol fuel and bread dough rising.
Biochemical process of fermentation of sucrose
The chemical equations below summarize the fermentation of sucrose (C12H22O11) into ethanol (C2H5OH). Alcoholic fermentation converts one mole of glucose into two moles of ethanol and two moles of carbon dioxide, producing two moles of ATP in the process.
C6H12O6 → 2 C2H5OH + 2 CO2
Sucrose is a sugar composed of a glucose linked to a fructose. In the first step of alcoholic fermentation, the enzyme invertase cleaves the glycosidic linkage between the glucose and fructose molecules.
C12H22O11 + H2O + invertase → 2 C6H12O6
Next, each glucose molecule is broken down into two pyruvate molecules in a process known as glycolysis. Glycolysis is summarized by the equation:
C6H12O6 + 2 ADP + 2 Pi + 2 NAD+ → 2 CH3COCOO− + 2 ATP + 2 NADH + 2 H2O + 2 H+
CH3COCOO− is pyruvate, and Pi is inorganic phosphate. Finally, pyruvate is converted to ethanol and CO2 in two steps, regenerating oxidized NAD+ needed for glycolysis:
1. CH3COCOO− + H+ → CH3CHO + CO2
catalyzed by pyruvate decarboxylase
2. CH3CHO + NADH + H+ → C2H5OH + NAD+
This reaction is catalyzed by alcohol dehydrogenase (ADH1 in baker's yeast).
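To make the stoichiometry above concrete, here is a minimal Python sketch (standard molar masses; the starting mass of glucose is arbitrary) that converts a given mass of glucose into the expected masses of ethanol and carbon dioxide and the moles of ATP:

# Molar masses in g/mol
M_GLUCOSE = 180.16   # C6H12O6
M_ETHANOL = 46.07    # C2H5OH
M_CO2 = 44.01        # CO2

def ferment_glucose(grams_glucose):
    """Return (g ethanol, g CO2, mol ATP) for complete alcoholic fermentation of glucose."""
    mol_glucose = grams_glucose / M_GLUCOSE
    grams_ethanol = 2 * mol_glucose * M_ETHANOL   # 2 mol ethanol per mol glucose
    grams_co2 = 2 * mol_glucose * M_CO2           # 2 mol CO2 per mol glucose
    mol_atp = 2 * mol_glucose                     # 2 mol ATP per mol glucose
    return grams_ethanol, grams_co2, mol_atp

print(ferment_glucose(100.0))   # roughly (51.1 g ethanol, 48.9 g CO2, 1.11 mol ATP)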
Document 3:::
Cellular waste products are formed as a by-product of cellular respiration, a series of processes and reactions that generate energy for the cell, in the form of ATP. One example of cellular respiration creating cellular waste products are aerobic respiration and anaerobic respiration.
Each pathway generates different waste products.
Aerobic respiration
When in the presence of oxygen, cells use aerobic respiration to obtain energy from glucose molecules.
Simplified Theoretical Reaction: C6H12O6 (aq) + 6O2 (g) → 6CO2 (g) + 6H2O (l) + ~ 30ATP
Cells undergoing aerobic respiration produce 6 molecules of carbon dioxide, 6 molecules of water, and up to 30 molecules of ATP (adenosine triphosphate), which is directly used to produce energy, from each molecule of glucose in the presence of surplus oxygen.
In aerobic respiration, oxygen serves as the recipient of electrons from the electron transport chain. Aerobic respiration is thus very efficient because oxygen is a strong oxidant.
Aerobic respiration proceeds in a series of steps, which also increases efficiency - since glucose is broken down gradually and ATP is produced as needed, less energy is wasted as heat. This strategy results in the waste products H2O and CO2 being formed in different amounts at different phases of respiration. CO2 is formed in Pyruvate decarboxylation, H2O is formed in oxidative phosphorylation, and both are formed in the citric acid cycle.
The simple nature of the final products also indicates the efficiency of this method of respiration. All of the energy stored in the carbon-carbon bonds of glucose is released, leaving CO2 and H2O. Although there is energy stored in the bonds of these molecules, this energy is not easily accessible by the cell. All usable energy is efficiently extracted.
Anaerobic respiration
Anaerobic respiration is done by aerobic organisms when there is not sufficient oxygen in a cell to undergo aerobic respiration as well as by cells called anaerobes that
Document 4:::
The term amphibolic is used to describe a biochemical pathway that involves both catabolism and anabolism. Catabolism is a degradative phase of metabolism in which large molecules are converted into smaller and simpler molecules, a process that involves two types of reactions. First, hydrolysis reactions, in which molecules are broken apart into smaller molecules to release energy. Examples of catabolic reactions are digestion and cellular respiration, where sugars and fats are broken down for energy. Breaking down a protein into amino acids, or a triglyceride into fatty acids, or a disaccharide into monosaccharides are all hydrolysis or catabolic reactions. Second, oxidation reactions, which involve the removal of hydrogens and electrons from an organic molecule. Anabolism is the biosynthesis phase of metabolism in which smaller, simpler precursors are converted to large and complex molecules of the cell. Anabolism has two classes of reactions. The first are dehydration synthesis reactions; these involve the joining of smaller molecules together to form larger, more complex molecules. These include the formation of carbohydrates, proteins, lipids and nucleic acids. The second are reduction reactions, in which hydrogens and electrons are added to a molecule. Whenever that is done, molecules gain energy.
The term amphibolic was proposed by B. Davis in 1961 to emphasise the dual metabolic role of such pathways. These pathways are considered to be central metabolic pathways which provide, from catabolic sequences, the intermediates which form the substrate of the metabolic processes.
Reactions exist as amphibolic pathway
All the reactions associated with synthesis of biomolecule converge into the following pathway, viz., glycolysis, the Krebs cycle and the electron transport chain, exist as an amphibolic pathway, meaning that they can function anabolically as well as catabolically.
Other important amphibolic pathways are the Embden-Meyerhof pathway, the pentos
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What uses oxygen gas to break apart the carbon-hydrogen bonds in glucose and release their energy?
A. classical respiration
B. energetic respiration
C. cellular respiration
D. electromagnetic respiration
Answer:
|
|
sciq-1148
|
multiple_choice
|
During what type of reaction do chemical changes take place?
|
[
"chemical",
"nuclear",
"biological",
"toxic"
] |
A
|
Relavent Documents:
Document 0:::
Physical biochemistry is a branch of biochemistry that deals with the theory, techniques, and methodology used to study the physical chemistry of biomolecules.
It also deals with the mathematical approaches for the analysis of biochemical reaction and the modelling of biological systems. It provides insight into the structure of macromolecules, and how chemical structure influences the physical properties of a biological substance.
It involves the use of physics, physical chemistry principles, and methodology to study biological systems. It employs various physical chemistry techniques such as chromatography, spectroscopy, Electrophoresis, X-ray crystallography, electron microscopy, and hydrodynamics.
See also
Physical chemistry
Document 1:::
An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may be in fact a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes.
In a unimolecular elementary reaction, a molecule A dissociates or isomerises to form the product(s):

A → products

At constant temperature, the rate of such a reaction is proportional to the concentration of the species A:

rate = k[A]

In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, A and B, react together to form the product(s):

A + B → products

The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species A and B:

rate = k[A][B]
The rate expression for an elementary bimolecular reaction is sometimes referred to as the Law of Mass Action as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction.
This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids equivalent results have been obtained from simple probabilistic arguments.
According to collision theory the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis-Menten approximations.
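As a minimal numerical sketch of the elementary rate laws above (the rate constant and concentrations are arbitrary illustrative values, not taken from the excerpt):

def unimolecular_rate(k, conc_a):
    """Rate = k[A] for an elementary unimolecular step A -> products."""
    return k * conc_a

def bimolecular_rate(k, conc_a, conc_b):
    """Rate = k[A][B] for an elementary bimolecular step A + B -> products."""
    return k * conc_a * conc_b

k = 0.5                                   # illustrative rate constant
print(unimolecular_rate(k, 0.2))          # 0.1
print(bimolecular_rate(k, 0.2, 0.4))      # 0.04
# Doubling [A] doubles the bimolecular rate, consistent with the law of mass action.
print(bimolecular_rate(k, 0.4, 0.4))      # 0.08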
Notes
Chemical kinetics
Phy
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
Analysis (: analyses) is the process of breaking a complex topic or substance into smaller parts in order to gain a better understanding of it. The technique has been applied in the study of mathematics and logic since before Aristotle (384–322 B.C.), though analysis as a formal concept is a relatively recent development.
The word comes from the Ancient Greek (analysis, "a breaking-up" or "an untying;" from ana- "up, throughout" and lysis "a loosening"). From it also comes the word's plural, analyses.
As a formal concept, the method has variously been ascribed to Alhazen, René Descartes (Discourse on the Method), and Galileo Galilei. It has also been ascribed to Isaac Newton, in the form of a practical method of physical discovery (which he did not name).
The converse of analysis is synthesis: putting the pieces back together again in a new or different whole.
Applications
Science
The field of chemistry uses analysis in three ways: to identify the components of a particular chemical compound (qualitative analysis), to identify the proportions of components in a mixture (quantitative analysis), and to break down chemical processes and examine chemical reactions between elements of matter. For an example of its use, analysis of the concentration of elements is important in managing a nuclear reactor, so nuclear scientists will analyze neutron activation to develop discrete measurements within vast samples. A matrix can have a considerable effect on the way a chemical analysis is conducted and the quality of its results. Analysis can be done manually or with a device.
Types of Analysis:
A) Qualitative Analysis: It is concerned with which components are in a given sample or compound.
Example: Precipitation reaction
B) Quantitative Analysis: It is to determine the quantity of individual component present in a given sample or compound.
Example: To find concentration by uv-spectrophotometer.
Isotopes
Chemists can use isotope analysis to assist analysts with i
Document 4:::
Activation, in chemistry and biology, is the process whereby something is prepared or excited for a subsequent reaction.
Chemistry
In chemistry, "activation" refers to the reversible transition of a molecule into a nearly identical chemical or physical state, with the defining characteristic being that this resultant state exhibits an increased propensity to undergo a specified chemical reaction. Thus, activation is conceptually the opposite of protection, in which the resulting state exhibits a decreased propensity to undergo a certain reaction.
The energy of activation specifies the amount of free energy the reactants must possess (in addition to their rest energy) in order to initiate their conversion into corresponding products—that is, in order to reach the transition state for the reaction. The energy needed for activation can be quite small, and often it is provided by the natural random thermal fluctuations of the molecules themselves (i.e. without any external sources of energy).
The branch of chemistry that deals with this topic is called chemical kinetics.
Biology
Biochemistry
In biochemistry, activation, specifically called bioactivation, is where enzymes or other biologically active molecules acquire the ability to perform their biological function, such as inactive proenzymes being converted into active enzymes that are able to catalyze their substrates' reactions into products. Bioactivation may also refer to the process where inactive prodrugs are converted into their active metabolites, or the toxication of protoxins into actual toxins.
An enzyme may be reversibly or irreversibly bioactivated. A major mechanism of irreversible bioactivation is where a piece of a protein is cut off by cleavage, producing an enzyme that will then stay active. A major mechanism of reversible bioactivation is substrate presentation where an enzyme translocates near its substrate. Another reversible reaction is where a cofactor binds to an enzyme, which then rem
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
During what type of reaction do chemical changes take place?
A. chemical
B. nuclear
C. biological
D. toxic
Answer:
|
|
sciq-3426
|
multiple_choice
|
Objects in motion that return to the same position after a fixed period of time are said to be in what type of motion?
|
[
"curving",
"harmonic",
"circular",
"dynamic"
] |
B
|
Relavent Documents:
Document 0:::
Linear motion, also called rectilinear motion, is one-dimensional motion along a straight line, and can therefore be described mathematically using only one spatial dimension. The linear motion can be of two types: uniform linear motion, with constant velocity (zero acceleration); and non-uniform linear motion, with variable velocity (non-zero acceleration). The motion of a particle (a point-like object) along a line can be described by its position , which varies with (time). An example of linear motion is an athlete running a 100-meter dash along a straight track.
Linear motion is the most basic of all motion. According to Newton's first law of motion, objects that do not experience any net force will continue to move in a straight line with a constant velocity until they are subjected to a net force. Under everyday circumstances, external forces such as gravity and friction can cause an object to change the direction of its motion, so that its motion cannot be described as linear.
One may compare linear motion to general motion. In general motion, a particle's position and velocity are described by vectors, which have a magnitude and direction. In linear motion, the directions of all the vectors describing the system are equal and constant which means the objects move along the same axis and do not change direction. The analysis of such systems may therefore be simplified by neglecting the direction components of the vectors involved and dealing only with the magnitude.
Background
Displacement
The motion in which all the particles of a body move through the same distance in the same time is called translatory motion. There are two types of translatory motions: rectilinear motion; curvilinear motion. Since linear motion is a motion in a single dimension, the distance traveled by an object in a particular direction is the same as displacement. The SI unit of displacement is the metre. If x1 is the initial position of an object and x2 is the final position, then mathematically the displacement is given by: Δx = x2 − x1
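A minimal sketch of the displacement relation just stated (coordinates in metres; the values are illustrative):

def displacement(x_initial, x_final):
    """Displacement along a straight line, in metres."""
    return x_final - x_initial

# A sprinter moving from the 10 m mark to the 100 m mark:
dx = displacement(10.0, 100.0)
print(dx)                 # 90.0 m
# Average velocity over 9 s of that motion:
print(dx / 9.0)           # 10.0 m/s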
Document 1:::
In physics, circular motion is a movement of an object along the circumference of a circle or rotation along a circular arc. It can be uniform, with a constant rate of rotation and constant tangential speed, or non-uniform with a changing rate of rotation. The rotation around a fixed axis of a three-dimensional body involves the circular motion of its parts. The equations of motion describe the movement of the center of mass of a body, which remains at a constant distance from the axis of rotation. In circular motion, the distance between the body and a fixed point on its surface remains the same, i.e., the body is assumed rigid.
Examples of circular motion include: special satellite orbits around the Earth (circular orbits), a ceiling fan's blades rotating around a hub, a stone that is tied to a rope and is being swung in circles, a car turning through a curve in a race track, an electron moving perpendicular to a uniform magnetic field, and a gear turning inside a mechanism.
Since the object's velocity vector is constantly changing direction, the moving object is undergoing acceleration by a centripetal force in the direction of the center of rotation. Without this acceleration, the object would move in a straight line, according to Newton's laws of motion.
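As a minimal numerical sketch of the centripetal acceleration implied by the paragraph above, using a = v²/r (the speed and radius below are illustrative):

def centripetal_acceleration(speed, radius):
    """Centripetal acceleration (m/s^2) for uniform circular motion."""
    return speed ** 2 / radius

# A car taking a 50 m radius curve at 20 m/s:
a = centripetal_acceleration(20.0, 50.0)
print(a)            # 8.0 m/s^2
print(a / 9.81)     # about 0.82 g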
Uniform circular motion
In physics, uniform circular motion describes the motion of a body traversing a circular path at a constant speed. Since the body describes circular motion, its distance from the axis of rotation remains constant at all times. Though the body's speed is constant, its velocity is not constant: velocity, a vector quantity, depends on both the body's speed and its direction of travel. This changing velocity indicates the presence of an acceleration; this centripetal acceleration is of constant magnitude and directed at all times toward the axis of rotation. This acceleration is, in turn, produced by a centripetal force which is also constant in magnitude and directed toward the axis of
Document 2:::
The motion of an object moving in a curved path is called curvilinear motion.
Example: a stone thrown into the air at an angle.
Curvilinear motion describes the motion of a moving particle that conforms to a known or fixed curve. The study of such motion involves the use of two co-ordinate systems, the first being planar motion and the latter being cylindrical motion.
Planar motion
In planar motion, the velocity and acceleration components of the particle are always tangential and normal to the fixed curve. The velocity is always tangential to the curve and the acceleration can be broken up into both a tangential and normal component.
Cylindrical components
With cylindrical co-ordinates which are described as î and j, the motion is best described in polar form with components that resemble polar vectors. As with planar motion, the velocity is always tangential to the curve, but in this form acceleration consist of different intermediate components that can now run along the radius and its normal vector. This type of co-ordinate system is best used when the motion is restricted to the plane upon which it travels.
Document 3:::
In physics, a number of noted theories of the motion of objects have developed. Among the best known are:
Classical mechanics
Newton's laws of motion
Euler's laws of motion
Cauchy's equations of motion
Kepler's laws of planetary motion
General relativity
Special relativity
Quantum mechanics
Motion (physics)
Document 4:::
Dynamics is the branch of classical mechanics that is concerned with the study of forces and their effects on motion. Isaac Newton was the first to formulate the fundamental physical laws that govern dynamics in classical non-relativistic physics, especially his second law of motion.
Principles
Generally speaking, researchers involved in dynamics study how a physical system might develop or alter over time and study the causes of those changes. In addition, Newton established the fundamental physical laws which govern dynamics in physics. By studying his system of mechanics, dynamics can be understood. In particular, dynamics is mostly related to Newton's second law of motion. However, all three laws of motion are taken into account because these are interrelated in any given observation or experiment.
Linear and rotational dynamics
The study of dynamics falls under two categories: linear and rotational. Linear dynamics pertains to objects moving in a line and involves such quantities as force, mass/inertia, displacement (in units of distance), velocity (distance per unit time), acceleration (distance per unit of time squared) and momentum (mass times unit of velocity). Rotational dynamics pertains to objects that are rotating or moving in a curved path and involves such quantities as torque, moment of inertia/rotational inertia, angular displacement (in radians or less often, degrees), angular velocity (radians per unit time), angular acceleration (radians per unit of time squared) and angular momentum (moment of inertia times unit of angular velocity). Very often, objects exhibit linear and rotational motion.
For classical electromagnetism, Maxwell's equations describe the kinematics. The dynamics of classical systems involving both mechanics and electromagnetism are described by the combination of Newton's laws, Maxwell's equations, and the Lorentz force.
Force
From Newton, force can be defined as an exertion or pressure which can cause an object to ac
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Objects in motion that return to the same position after a fixed period of time are said to be in what type of motion?
A. curving
B. harmonic
C. circular
D. dynamic
Answer:
|
|
sciq-9705
|
multiple_choice
|
Which gases trap heat in the atmosphere?
|
[
"fluorine and nitrogen",
"greenhouse",
"methane and helium",
"ozone"
] |
B
|
Relavent Documents:
Document 0:::
In atmospheric science, equivalent temperature is the temperature of air in a parcel from which all the water vapor has been extracted by an adiabatic process.
Air contains water vapor that has been evaporated into it from liquid sources (lakes, sea, etc.). The energy needed to do that has been taken from the air. Taking a volume of air at temperature T and mixing ratio r, drying it by condensation will restore energy to the airmass. This will depend on the latent heat release as:

Te = T + (Lv / cpd) · r

where:
Lv: latent heat of evaporation (2400 kJ/kg at 25°C to 2600 kJ/kg at −40°C)
cpd: specific heat at constant pressure for air (≈ 1004 J/(kg·K))
Tables exist for exact values of the last two coefficients.
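A minimal sketch of the relation above (the temperature and mixing ratio are illustrative; Lv and cpd use the representative values quoted in the excerpt):

def equivalent_temperature(T, r, Lv=2.4e6, cpd=1004.0):
    """Equivalent temperature Te = T + (Lv/cpd)*r, with T in kelvin and r in kg/kg."""
    return T + (Lv / cpd) * r

# Air at 300 K holding 10 g of water vapor per kg of dry air:
print(equivalent_temperature(300.0, 0.010))   # about 323.9 K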
See also
Wet-bulb temperature
Potential temperature
Atmospheric thermodynamics
Equivalent potential temperature
Bibliography
M Robitzsch, Aequivalenttemperatur und Aequivalentthemometer, Meteorologische Zeitschrift, 1928, pp. 313-315.
M K Yau and R.R. Rogers, Short Course in Cloud Physics, Third Edition, published by Butterworth-Heinemann, January 1, 1989, 304 pages.
J.V. Iribarne and W.L. Godson, Atmospheric Thermodynamics, published by D. Reidel Publishing Company, Dordrecht, Holland, 1973, 222 pages
Atmospheric thermodynamics
Atmospheric temperature
Meteorological quantities
Document 1:::
This is a list of gases at standard conditions, which means substances that boil or sublime at or below and 1 atm pressure and are reasonably stable.
List
This list is sorted by boiling point of gases in ascending order, but can be sorted on different values. "sub" and "triple" refer to the sublimation point and the triple point, which are given in the case of a substance that sublimes at 1 atm; "dec" refers to decomposition. "~" means approximately.
Known as gas
The following list has substances known to be gases, but with an unknown boiling point.
Fluoroamine
Trifluoromethyl trifluoroethyl trioxide CF3OOOCF2CF3 boils between 10 and 20°
Bis-trifluoromethyl carbonate boils between −10 and +10° possibly +12, freezing −60°
Difluorodioxirane boils between −80 and −90°.
Difluoroaminosulfinyl fluoride F2NS(O)F is a gas but decomposes over several hours
Trifluoromethylsulfinyl chloride CF3S(O)Cl
Nitrosyl cyanide ?−20° blue-green gas 4343-68-4
Thiazyl chloride NSCl greenish yellow gas; trimerises.
Document 2:::
The temperatures of a planet's surface and atmosphere are governed by a delicate balancing of their energy flows. The idealized greenhouse model is based on the fact that certain gases in the Earth's atmosphere, including carbon dioxide and water vapour, are transparent to the high-frequency solar radiation, but are much more opaque to the lower frequency infrared radiation leaving Earth's surface. Thus heat is easily let in, but is partially trapped by these gases as it tries to leave. Rather than get hotter and hotter, Kirchhoff's law of thermal radiation says that the gases of the atmosphere also have to re-emit the infrared energy that they absorb, and they do so, also at long infrared wavelengths, both upwards into space as well as downwards back towards the Earth's surface. In the long-term, the planet's thermal inertia is surmounted and a new thermal equilibrium is reached when all energy arriving on the planet is leaving again at the same rate. In this steady-state model, the greenhouse gases cause the surface of the planet to be warmer than it would be without them, in order for a balanced amount of heat energy to finally be radiated out into space from the top of the atmosphere.
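A minimal sketch of that steady-state balance for a single-layer version of the model, assuming the atmospheric layer is transparent to sunlight and fully absorbs the surface's infrared (a common textbook simplification; the parameter values and variable names below are illustrative, not taken from the excerpt). The absorbed solar flux is (1 − albedo)·S/4, the emission temperature Te follows from σTe⁴ equal to that flux, and the surface temperature is then Ts = 2^(1/4)·Te:

SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4

def idealized_greenhouse(solar_constant=1361.0, albedo=0.30):
    """Return (Te, Ts) in kelvin for a one-layer idealized greenhouse model."""
    absorbed = (1.0 - albedo) * solar_constant / 4.0       # W/m^2 averaged over the sphere
    Te = (absorbed / SIGMA) ** 0.25                        # effective emission temperature
    Ts = 2.0 ** 0.25 * Te                                  # surface temperature with one opaque IR layer
    return Te, Ts

print(idealized_greenhouse())    # roughly (255 K, 303 K)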
Essential features of this model were first published by Svante Arrhenius in 1896. It has since become a common introductory "textbook model" of the radiative heat transfer physics underlying Earth's energy balance and the greenhouse effect. The planet is idealized by the model as being functionally "layered" with regard to a sequence of simplified energy flows, but dimensionless (i.e. a zero-dimensional model) in terms of its mathematical space. The layers include a surface with constant temperature Ts and an atmospheric layer with constant temperature Ta. For diagrammatic clarity, a gap can be depicted between the atmosphere and the surface. Alternatively, Ts could be interpreted as a temperature representative of the surface and the lower atmosphere, and Ta could be interpreted as a temperature representative of the upper atmosphere.
Document 3:::
Endothermic gas is a gas that inhibits or reverses oxidation on the surfaces it is in contact with. This gas is the product of incomplete combustion in a controlled environment. An example mixture is hydrogen gas (H2), nitrogen gas (N2), and carbon monoxide (CO). The hydrogen and carbon monoxide are reducing agents, so they work together to shield surfaces from oxidation.
Endothermic gas is often used as a carrier gas for gas carburizing and carbonitriding. An endothermic gas generator could be used to supply heat to form an endothermic reaction.
Synthesised in the catalytic retort(s) of endothermic generators, the gas in the endothermic atmosphere is combined with an additive gas including natural gas, propane (C3H8) or air, and is then used to improve the surface chemistry of the work positioned in the furnace.
Purposes
There are two common purposes of the atmospheres in the heat treating industry:
Protect the processed material from surface reactions (chemically inert)
Allow surface of processed material to change (chemically reactive)
Principal components of an endothermic gas generator
Principal components of endothermic gas generators:
Heating chamber for supplying heat by electric heating elements or combustion,
Vertical cylindrical retorts,
Tiny, porous ceramic pieces that are saturated with nickel, which acts as a catalyst for the reaction,
Cooling heat exchanger in order to cool the products of the reaction as quickly as possible so that it reaches a particular temperature which stops any further reaction,
Control system which will help maintain the consistency of the temperature of the reaction which will help adjust the gas ratio, providing the wanted dew point.
Chemical composition
Chemistry of endothermic gas generators:
N2 (nitrogen) → 45.1% (volume)
CO (carbon monoxide) → 19.6% (volume)
CO2 (carbon dioxide) → 0.4% (volume)
H2 (hydrogen) → 34.6% (volume)
CH4 (methane) → 0.3% (volume)
Dew point → +20/+50
Gas ratio → 2.6:1
Applications
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which gases trap heat in the atmosphere?
A. fluorine and nitrogen
B. greenhouse
C. methane and helium
D. ozone
Answer:
|
|
sciq-6710
|
multiple_choice
|
What is released during an enthalpy reaction?
|
[
"heat",
"sound",
"gold",
"precipitation"
] |
A
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include :
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering : Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid Mechanics the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 3:::
The heating value (or energy value or calorific value) of a substance, usually a fuel or food (see food energy), is the amount of heat released during the combustion of a specified amount of it.
The calorific value is the total energy released as heat when a substance undergoes complete combustion with oxygen under standard conditions. The chemical reaction is typically a hydrocarbon or other organic molecule reacting with oxygen to form carbon dioxide and water and release heat. It may be expressed with the quantities:
energy/mole of fuel
energy/mass of fuel
energy/volume of the fuel
There are two kinds of enthalpy of combustion, called high(er) and low(er) heat(ing) value, depending on how much the products are allowed to cool and whether compounds like are allowed to condense.
The high heat values are conventionally measured with a bomb calorimeter. Low heat values are calculated from high heat value test data. They may also be calculated as the difference between the heat of formation ΔH of the products and reactants (though this approach is somewhat artificial since most heats of formation are typically calculated from measured heats of combustion).
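As a minimal sketch of how a lower heating value can be derived from a higher heating value (methane is used as an illustrative fuel; the HHV of about 890 kJ/mol, the 2 mol of product water per mol of fuel, and the ~44 kJ/mol vaporization enthalpy of water are representative figures, not taken from the excerpt):

def lower_heating_value(hhv_kj_per_mol, mol_h2o_per_mol_fuel, h_vap_water=44.0):
    """LHV = HHV minus the heat needed to keep the product water as vapor (kJ per mol of fuel)."""
    return hhv_kj_per_mol - mol_h2o_per_mol_fuel * h_vap_water

# Methane: CH4 + 2 O2 -> CO2 + 2 H2O
print(lower_heating_value(890.0, 2))    # about 802 kJ/mol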
By convention, the (higher) heat of combustion is defined to be the heat released for the complete combustion of a compound in its standard state to form stable products in their standard states: hydrogen is converted to water (in its liquid state), carbon is converted to carbon dioxide gas, and nitrogen is converted to nitrogen gas. That is, the heat of combustion, ΔH°comb, is the heat of reaction of the following process:
CcHhNnOo (std.) + (c + h/4 − o/2) O2 (g) → c CO2 (g) + (h/2) H2O (l) + (n/2) N2 (g)
Chlorine and sulfur are not quite standardized; they are usually assumed to convert to hydrogen chloride gas and SO2 or SO3 gas, respectively, or to dilute aqueous hydrochloric and sulfuric acids, respectively, when the combustion is conducted in a bomb calorimeter containing some quantity of water.
Ways of determination
Gross and net
Z
Document 4:::
In thermodynamics, enthalpy–entropy compensation is a specific example of the compensation effect. The compensation effect refers to the behavior of a series of closely related chemical reactions (e.g., reactants in different solvents or reactants differing only in a single substituent), which exhibit a linear relationship between one of the following kinetic or thermodynamic parameters for describing the reactions:
Between the logarithm of the pre-exponential factors (or prefactors) and the activation energies:

ln Ai = a + Ei / (R·b)

where the series of closely related reactions is indicated by the index i, Ai are the preexponential factors, Ei are the activation energies, R is the gas constant, and a and b are constants.
Between enthalpies and entropies of activation (enthalpy–entropy compensation):

ΔH‡i = α + β·ΔS‡i

where ΔH‡i are the enthalpies of activation and ΔS‡i are the entropies of activation.
Between the enthalpy and entropy changes of a series of similar reactions (enthalpy–entropy compensation):

ΔHi = α + β·ΔSi

where ΔHi are the enthalpy changes and ΔSi are the entropy changes.
When the activation energy is varied in the first instance, we may observe a related change in pre-exponential factors. An increase in Ai tends to compensate for an increase in Ei, which is why we call this phenomenon a compensation effect. Similarly, for the second and third instances, in accordance with the Gibbs free energy equation, with which we derive the listed equations, ΔH scales proportionately with ΔS. The enthalpy and entropy compensate for each other because of their opposite algebraic signs in the Gibbs equation.
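A short sketch, added here for clarity, of why the linear enthalpy–entropy relation above implies compensation of the free energy; it uses only the Gibbs equation, and reading β as an isokinetic temperature is a standard interpretation rather than a statement from the excerpt:

\[
\Delta G_i \;=\; \Delta H_i - T\,\Delta S_i
\;=\; (\alpha + \beta\,\Delta S_i) - T\,\Delta S_i
\;=\; \alpha + (\beta - T)\,\Delta S_i ,
\]

so at the temperature T = β every reaction in the series has the same free-energy change, ΔGi = α.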
A correlation between enthalpy and entropy has been observed for a wide variety of reactions. The correlation is significant because, for linear free-energy relationships (LFERs) to hold, one of three conditions for the relationship between enthalpy and entropy for a series of reactions must be met, with the most common encountered scenario being that which describes enthalpy–entropy compensation. The empirical relations above were noticed by
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is released during an enthalpy reaction?
A. heat
B. sound
C. gold
D. precipitation
Answer:
|
|
sciq-4335
|
multiple_choice
|
What kind of muscle is responsible for making the human heart beat?
|
[
"deltoid",
"teres minor",
"respiratory muscle",
"cardiac muscle"
] |
D
|
Relevant Documents:
Document 0:::
Cardiac muscle (also called heart muscle or myocardium) is one of three types of vertebrate muscle tissues, with the other two being skeletal muscle and smooth muscle. It is an involuntary, striated muscle that constitutes the main tissue of the wall of the heart. The cardiac muscle (myocardium) forms a thick middle layer between the outer layer of the heart wall (the pericardium) and the inner layer (the endocardium), with blood supplied via the coronary circulation. It is composed of individual cardiac muscle cells joined by intercalated discs, and encased by collagen fibers and other substances that form the extracellular matrix.
Cardiac muscle contracts in a similar manner to skeletal muscle, although with some important differences. Electrical stimulation in the form of a cardiac action potential triggers the release of calcium from the cell's internal calcium store, the sarcoplasmic reticulum. The rise in calcium causes the cell's myofilaments to slide past each other in a process called excitation-contraction coupling.
Diseases of the heart muscle known as cardiomyopathies are of major importance. These include ischemic conditions caused by a restricted blood supply to the muscle such as angina, and myocardial infarction.
Structure
Gross anatomy
Cardiac muscle tissue or myocardium forms the bulk of the heart. The heart wall is a three-layered structure with a thick layer of myocardium sandwiched between the inner endocardium and the outer epicardium (also known as the visceral pericardium). The inner endocardium lines the cardiac chambers, covers the cardiac valves, and joins with the endothelium that lines the blood vessels that connect to the heart. On the outer aspect of the myocardium is the epicardium which forms part of the pericardial sac that surrounds, protects, and lubricates the heart.
Within the myocardium, there are several sheets of cardiac muscle cells or cardiomyocytes. The sheets of muscle that wrap around the left ventricle clos
Document 1:::
Cardiophysics is an interdisciplinary science that stands at the junction of cardiology and medical physics, with researchers using the methods of, and theories from, physics to study the cardiovascular system at different levels of its organisation, from the molecular scale to whole organisms. Having formed historically as part of systems biology, cardiophysics is designed to reveal connections between the physical mechanisms underlying the organization of the cardiovascular system and the biological features of its functioning.
Zbigniew R. Struzik appears to be the first author to have used the term in a scientific publication, in 2004.
The term cardiovascular physics is also used interchangeably.
See also
Medical physics
Important publications in medical physics
Biomedicine
Biomedical engineering
Physiome
Nanomedicine
Document 2:::
The Cardiac Electrophysiology Society (CES) is an international society of basic and clinical scientists and physicians interested in cardiac electrophysiology and arrhythmias. The society was founded by George Burch in 1949, and its current president is Jonathan C. Makielski, M.D.
Document 3:::
The Frank–Starling law of the heart (also known as Starling's law and the Frank–Starling mechanism) represents the relationship between stroke volume and end diastolic volume. The law states that the stroke volume of the heart increases in response to an increase in the volume of blood in the ventricles, before contraction (the end diastolic volume), when all other factors remain constant. As a larger volume of blood flows into the ventricle, the blood stretches cardiac muscle, leading to an increase in the force of contraction. The Frank-Starling mechanism allows the cardiac output to be synchronized with the venous return, arterial blood supply and humoral length, without depending upon external regulation to make alterations. The physiological importance of the mechanism lies mainly in maintaining left and right ventricular output equality.
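The qualitative shape of this relationship can be sketched numerically; the short Python snippet below is a toy saturating model with hypothetical parameters that I am adding for illustration, not a physiological formula taken from this excerpt.

```python
import numpy as np

def toy_frank_starling(edv_ml, v0=10.0, sv_max=140.0, k=40.0):
    """Toy Frank-Starling curve: stroke volume rises with end-diastolic
    volume (EDV) above an unstressed volume v0 and saturates near sv_max.
    All parameters are hypothetical placeholders in millilitres."""
    filling = np.maximum(edv_ml - v0, 0.0)
    return sv_max * filling / (k + filling)

for edv in (60, 90, 120, 150):
    print(f"EDV {edv:3d} ml -> stroke volume ~ {toy_frank_starling(edv):5.1f} ml")
```

Larger filling volumes produce larger stroke volumes with diminishing returns, mirroring the length-tension saturation described in the Physiology section below.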
Physiology
The Frank-Starling mechanism occurs as the result of the length-tension relationship observed in striated muscle, including for example skeletal muscles, arthropod muscle and cardiac (heart) muscle. As striated muscle is stretched, active tension is created by altering the overlap of thick and thin filaments. The greatest isometric active tension is developed when a muscle is at its optimal length. In most relaxed skeletal muscle fibers, passive elastic properties maintain the muscle fibers length near optimal, as determined usually by the fixed distance between the attachment points of tendons to the bones (or the exoskeleton of arthropods) at either end of the muscle. In contrast, the relaxed sarcomere length of cardiac muscle cells, in a resting ventricle, is lower than the optimal length for contraction. There is no bone to fix sarcomere length in the heart (of any animal) so sarcomere length is very variable and depends directly upon blood filling and thereby expanding the heart chambers. In the human heart, maximal force is generated with an initial sarcomere length of 2.2 micrometers, a length which is rare
Document 4:::
Magnetocardiography (MCG) is a technique to measure the magnetic fields produced by electrical currents in the heart using extremely sensitive devices such as the superconducting quantum interference device (SQUID). If the magnetic field is measured using a multichannel device, a map of the magnetic field is obtained over the chest; from such a map, using mathematical algorithms that take into account the conductivity structure of the torso, it is possible to locate the source of the activity. For example, sources of abnormal rhythms or arrhythmia may be located using MCG.
History
The first MCG measurements were made by Baule and McFee using two large coils placed over the chest, connected in opposition to cancel out the relatively large magnetic background. Heart signals were indeed seen, but were very noisy. The next development was by David Cohen, who used a magnetically shielded room to reduce the background, and a smaller coil with better electronics; the heart signals were now less noisy, allowing a magnetic map to be made, verifying the magnetic properties and source of the signal. However, the use of an inherently noisy coil detector discouraged widespread interest in the MCG. The turning point came with the development of the sensitive detector called the SQUID (superconducting quantum interference device) by James Zimmerman. The combination of this detector and Cohen's new shielded room at MIT allowed the MCG signal to be seen as clearly as the conventional electrocardiogram, and the publication of this result by Cohen et al. marked the real beginning of magnetocardiography (as well as biomagnetism generally).
Magnetocardiography is used in various laboratories and clinics around the world, both for research on the normal human heart, and for clinical diagnosis.
Clinical implementation
MCG technology has been implemented in hospitals in Germany. The MCG system, CS MAG II of Biomagnetik Park GmbH, was installed at Coburg Hospital in 2013. The CS-MAG III
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What kind of muscle is responsible for making the human heart beat?
A. deltoid
B. teres minor
C. respiratory muscle
D. cardiac muscle
Answer:
|
|
ai2_arc-18
|
multiple_choice
|
A toothpaste commercial states that a brand of toothpaste has a higher concentration of fluoride than any other toothpaste available. The commercial is most likely inferring that the advertised toothpaste
|
[
"has a pleasant flavor.",
"is recommended by dentists.",
"promotes good dental hygiene.",
"is the most expensive brand sold."
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 2:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
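To make the feasibility idea concrete, here is a small Python sketch; the skill names and states are hypothetical, and the check covers the closure properties commonly used to define a knowledge space (it contains the empty state and the full domain, and the union of feasible states is again feasible).

```python
from itertools import combinations

# Hypothetical domain of skills and a family of feasible knowledge states.
DOMAIN = frozenset({"count", "add", "subtract", "multiply"})
STATES = {
    frozenset(),
    frozenset({"count"}),
    frozenset({"count", "add"}),
    frozenset({"count", "add", "subtract"}),
    frozenset({"count", "add", "multiply"}),
    DOMAIN,
}

def is_knowledge_space(states, domain):
    """Return True if the family contains the empty state and the whole
    domain, and the union of any two feasible states is again feasible."""
    if frozenset() not in states or domain not in states:
        return False
    return all(a | b in states for a, b in combinations(states, 2))

print(is_knowledge_space(STATES, DOMAIN))  # True for this toy family
```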
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
Document 3:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 4:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A toothpaste commercial states that a brand of toothpaste has a higher concentration of fluoride than any other toothpaste available. The commercial is most likely inferring that the advertised toothpaste
A. has a pleasant flavor.
B. is recommended by dentists.
C. promotes good dental hygiene.
D. is the most expensive brand sold.
Answer:
|
|
sciq-6659
|
multiple_choice
|
What are long living plasma cells called?
|
[
"memory cells",
"brain Cells",
"device cells",
"context cells"
] |
A
|
Relevant Documents:
Document 0:::
Permanent cells are cells that are incapable of regeneration. These cells are considered to be terminally differentiated and non-proliferative in postnatal life. This includes neurons, heart cells, skeletal muscle cells and red blood cells. Although these cells are considered permanent in that they neither reproduce nor transform into other cells, this does not mean that the body cannot create new versions of these cells. For instance, structures in the bone marrow produce new red blood cells constantly, while skeletal muscle damage can be repaired by underlying satellite cells, which fuse to become a new skeletal muscle cell.
Disease and virology studies can use permanent cells to maintain cell count and accurately quantify the effects of vaccines. Some embryology studies also use permanent cells to avoid harvesting embryonic cells from pregnant animals; since the cells are permanent, they may be harvested at a later age when an animal is fully developed.
See also
Labile cells, which multiply constantly throughout life
Stable cells, which only multiply when receiving external stimulus to do so
Document 1:::
This lecture, named in memory of Keith R. Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year.
Lecturers
Source: ASCB
See also
List of biology awards
Document 2:::
Stem cell markers are genes and their protein products used by scientists to isolate and identify stem cells. Stem cells can also be identified by functional assays. Below is a list of genes/protein products that can be used to identify various types of stem cells, or functional assays that do the same. The initial version of the list below was obtained by mining the PubMed database as described in
Stem cell marker names
Document 3:::
Hematopoietic stem cells (HSCs) have high regenerative potentials and are capable of differentiating into all blood and immune system cells. Despite this impressive potential, HSCs have limited potential to produce more multipotent stem cells. This limited self-renewal potential is protected through maintenance of a quiescent state in HSCs. Stem cells maintained in this quiescent state are known as long term HSCs (LT-HSCs). During quiescence, HSCs maintain a low level of metabolic activity and do not divide. LT-HSCs can be signaled to proliferate, producing either myeloid or lymphoid progenitors. Production of these progenitors does not come without a cost: When grown under laboratory conditions that induce proliferation, HSCs lose their ability to divide and produce new progenitors. Therefore, understanding the pathways that maintain proliferative or quiescent states in HSCs could reveal novel pathways to improve existing therapeutics involving HSCs.
Background
All adult stem cells can undergo two types of division: symmetric and asymmetric. When a cell undergoes symmetric division, it can either produce two differentiated cells or two new stem cells. When a cell undergoes asymmetric division, it produces one stem and one differentiated cell. Production of new stem cells is necessary to maintain this population within the body. Like all cells, hematopoietic stem cells undergo metabolic shifts to meet their bioenergetic needs throughout development. These metabolic shifts play an important role in signaling, generating biomass, and protecting the cell from damage. Metabolic shifts also guide development in HSCs and are one key factor in determining if an HSC will remain quiescent, symmetrically divide, or asymmetrically divide. As mentioned above, quiescent cells maintain a low level of oxidative phosphorylation and primarily rely on glycolysis to generate energy. Fatty acid beta-oxidation has been shown to influence fate decisions in HSCs. In contrast, proliferat
Document 4:::
Adult stem cells are undifferentiated cells, found throughout the body after development, that multiply by cell division to replenish dying cells and regenerate damaged tissues. Also known as somatic stem cells (from Greek σωματικóς, meaning of the body), they can be found in juvenile, adult animals, and humans, unlike embryonic stem cells.
Scientific interest in adult stem cells is centered around two main characteristics. The first of which is their ability to divide or self-renew indefinitely, and the second their ability to generate all the cell types of the organ from which they originate, potentially regenerating the entire organ from a few cells. Unlike embryonic stem cells, the use of human adult stem cells in research and therapy is not considered to be controversial, as they are derived from adult tissue samples rather than human embryos designated for scientific research. The main functions of adult stem cells are to replace cells that are at risk of possibly dying as a result of disease or injury and to maintain a state of homeostasis within the cell. There are three main methods to determine if the adult stem cell is capable of becoming a specialized cell. The adult stem cell can be labeled in vivo and tracked, it can be isolated and then transplanted back into the organism, and it can be isolated in vivo and manipulated with growth hormones. They have mainly been studied in humans and model organisms such as mice and rats.
Structure
Defining properties
A stem cell possesses two properties:
Self-renewal is the ability to go through numerous cycles of cell division while still maintaining its undifferentiated state. Stem cells can replicate several times and can result in the formation of two stem cells, one stem cell more differentiated than the other, or two differentiated cells.
Multipotency or multidifferentiative potential is the ability to generate progeny of several distinct cell types, (for example glial cells and neurons) as opposed to u
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are long living plasma cells called?
A. memory cells
B. brain Cells
C. device cells
D. context cells
Answer:
|
|
sciq-11177
|
multiple_choice
|
What is the outer layer of the adrenal gland called?
|
[
"zona reticularis",
"adrenal skin",
"adrenal cortex",
"medulla"
] |
C
|
Relevant Documents:
Document 0:::
A central or intermediate group of three or four large glands is imbedded in the adipose tissue near the base of the axilla.
Its afferent lymphatic vessels are the efferent vessels of all the preceding groups of axillary glands; its efferents pass to the subclavicular group.
Additional images
Document 1:::
The posterior surfaces of the ciliary processes are covered by a bilaminar layer of black pigment cells, which is continued forward from the retina, and is named the pars ciliaris retinae.
Document 2:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
Document 3:::
The outer nuclear layer (or layer of outer granules or external nuclear layer), is one of the layers of the vertebrate retina, the light-detecting portion of the eye. Like the inner nuclear layer, the outer nuclear layer contains several strata of oval nuclear bodies; they are of two kinds, viz.: rod and cone granules, so named on account of their being respectively connected with the rods and cones of the next layer.
Rod granules
The spherical rod granules are much more numerous, and are placed at different levels throughout the layer.
Their nuclei present a peculiar cross-striped appearance, and prolonged from either extremity of each cell is a fine process; the outer process is continuous with a single rod of the layer of rods and cones; the inner ends in the outer plexiform layer in an enlarged extremity, and is imbedded in the tuft into which the outer processes of the rod bipolar cells break up.
In its course it presents numerous varicosities.
Cone granules
The stem-like cone granules, fewer in number than the rod granules, are placed close to the membrana limitans externa, through which they are continuous with the cones of the layer of rods and cones.
They do not present any cross-striation, but contain a pyriform nucleus, which almost completely fills the cell.
From the inner extremity of the granule a thick process passes into the outer plexiform layer, and there expands into a pyramidal enlargement or foot plate, from which are given off numerous fine fibrils, that come in contact with the outer processes of the cone bipolars.
Document 4:::
The glomus body is not to be confused with the glomus cell which is a kind of chemoreceptor found in the carotid bodies and aortic bodies.
A glomus body (or glomus organ) is a component of the dermis layer of the skin, involved in body temperature regulation. The glomus body is a small arteriovenous anastomosis surrounded by a capsule of connective tissue. Glomus bodies (glomera) are most numerous in the fingers and toes. The role of the glomus body is to shunt blood away (heat transfer) from the skin surface when exposed to cold temperature, thus preventing heat loss, and allowing maximum blood flow to the skin in warm weather to allow heat to dissipate. The glomus body has high sympathetic tone and potentiation leads to near complete vasoconstriction.
Endothelial cells form a single, continuous layer that lines all vascular segments. Junctional complexes keep the endothelial cells together in arteries but are less numerous in veins. The organization of the endothelial cell layer in capillaries can vary greatly, depending on the location.
The arteriovenous shunt of the glomus body is a normal anatomic shunt as opposed to an abnormal arteriovenous fistula. A metarteriole is another type.
See also
Glomus tumor
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the outer layer of the adrenal gland called?
A. zona reticularis
B. adrenal skin
C. adrenal cortex
D. medulla
Answer:
|
|
sciq-7667
|
multiple_choice
|
What type of feedback intensifies a change in the body’s physiological condition rather than reversing it?
|
[
"negative feedback",
"susceptible feedback",
"positive feedback",
"neutral feedback"
] |
C
|
Relevant Documents:
Document 0:::
Negative feedback (or balancing feedback) occurs when some function of the output of a system, process, or mechanism is fed back in a manner that tends to reduce the fluctuations in the output, whether caused by changes in the input or by other disturbances. A classic example of negative feedback is a heating system thermostat — when the temperature gets high enough, the heater is turned OFF. When the temperature gets too cold, the heat is turned back ON. In each case the "feedback" generated by the thermostat "negates" the trend.
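A minimal sketch of the thermostat loop just described, with hypothetical temperatures and a deliberately crude room model added purely for illustration:

```python
def thermostat_step(temp_c, heater_on, setpoint_c=20.0, band_c=0.5):
    """Bang-bang negative feedback: switch the heater OFF above the setpoint
    band and ON below it, so the control action always opposes the trend."""
    if temp_c > setpoint_c + band_c:
        return False
    if temp_c < setpoint_c - band_c:
        return True
    return heater_on  # inside the band, keep the current state (hysteresis)

# Crude room model: the heater adds heat, the room always leaks some outside.
temp, heater = 17.0, False
for minute in range(30):
    heater = thermostat_step(temp, heater)
    temp += (0.4 if heater else 0.0) - 0.1  # hypothetical gain/loss per minute
    print(f"t={minute:2d} min  temp={temp:5.2f} C  heater={'ON' if heater else 'OFF'}")
```

In this toy run the simulated temperature settles into a narrow band around the setpoint instead of diverging, which is the stabilizing behaviour negative feedback is meant to provide.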
The opposite tendency — called positive feedback — is when a trend is positively reinforced, creating amplification, such as the squealing "feedback" loop that can occur when a mic is brought too close to a speaker which is amplifying the very sounds the mic is picking up, or the runaway heating and ultimate meltdown of a nuclear reactor.
Whereas positive feedback tends to lead to instability via exponential growth, oscillation or chaotic behavior, negative feedback generally promotes stability. Negative feedback tends to promote a settling to equilibrium, and reduces the effects of perturbations. Negative feedback loops in which just the right amount of correction is applied with optimum timing, can be very stable, accurate, and responsive.
Negative feedback is widely used in mechanical and electronic engineering, and also within living organisms, and can be seen in many other fields from chemistry and economics to physical systems such as the climate. General negative feedback systems are studied in control systems engineering.
Negative feedback loops also play an integral role in maintaining the atmospheric balance in various systems on Earth. One such feedback system is the interaction between solar radiation, cloud cover, and planet temperature.
General description
In many physical and biological systems, qualitatively different influences can oppose each other. For example, in biochemistry, one set of chemicals drives the syst
Document 1:::
Positive feedback (exacerbating feedback, self-reinforcing feedback) is a process that occurs in a feedback loop which exacerbates the effects of a small disturbance. That is, the effects of a perturbation on a system include an increase in the magnitude of the perturbation. That is, A produces more of B which in turn produces more of A. In contrast, a system in which the results of a change act to reduce or counteract it has negative feedback. Both concepts play an important role in science and engineering, including biology, chemistry, and cybernetics.
Mathematically, positive feedback is defined as a positive loop gain around a closed loop of cause and effect.
That is, positive feedback is in phase with the input, in the sense that it adds to make the input larger.
Positive feedback tends to cause system instability. When the loop gain is positive and above 1, there will typically be exponential growth, increasing oscillations, chaotic behavior or other divergences from equilibrium. System parameters will typically accelerate towards extreme values, which may damage or destroy the system, or may end with the system latched into a new stable state. Positive feedback may be controlled by signals in the system being filtered, damped, or limited, or it can be cancelled or reduced by adding negative feedback.
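To see the instability threshold quantitatively (a standard textbook relation added here as a hedged aside, not a claim about this article's own notation): iterating a loop with gain G, so that x_{n+1} = G · x_n, gives x_n = G^n · x_0, which grows exponentially whenever G > 1. Likewise, for an amplifier with forward gain A and an in-phase feedback fraction β, the closed-loop gain is A_cl = A / (1 − A β), which diverges as the loop gain A β approaches 1.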
Positive feedback is used in digital electronics to force voltages away from intermediate voltages into '0' and '1' states. On the other hand, thermal runaway is a type of positive feedback that can destroy semiconductor junctions. Positive feedback in chemical reactions can increase the rate of reactions, and in some cases can lead to explosions. Positive feedback in mechanical design causes tipping-point, or 'over-centre', mechanisms to snap into position, for example in switches and locking pliers. Out of control, it can cause bridges to collapse. Positive feedback in economic systems can cause boom-then-bust cycles. A familiar example of positive feedback
Document 2:::
The terms closed system and open system have long been defined in the widely (and long before any sort of amplifier was invented) established subject of thermodynamics, in terms that have nothing to do with the concepts of feedback and feedforward. The terms 'feedforward' and 'feedback' arose first in the 1920s in the theory of amplifier design, more recently than the thermodynamic terms. Negative feedback was eventually patented by H.S Black in 1934. In thermodynamics, an open system is one that can take in and give out ponderable matter. In thermodynamics, a closed system is one that cannot take in or give out ponderable matter, but may be able to take in or give out radiation and heat and work or any form of energy. In thermodynamics, a closed system can be further restricted, by being 'isolated': an isolated system cannot take in nor give out either ponderable matter or any form of energy. It does not make sense to try to use these well established terms to try to distinguish the presence or absence of feedback in a control system.
The theory of control systems leaves room for systems with both feedforward pathways and feedback elements or pathways. The terms 'feedforward' and 'feedback' refer to elements or paths within a system, not to a system as a whole. THE input to the system comes from outside it, as energy from the signal source by way of some possibly leaky or noisy path. Part of the output of a system can be compounded, with the intermediacy of a feedback path, in some way such as addition or subtraction, with a signal derived from the system input, to form a 'return balance signal' that is input to a PART of the system to form a feedback loop within the system. (It is not correct to say that part of the output of a system can be used as THE input to the system.)
There can be feedforward paths within the system in parallel with one or more of the feedback loops of the system so that the system output is a compound of the outputs of the feedback loops
Document 3:::
Cue reactivity is a type of learned response which is observed in individuals with an addiction and involves significant physiological and subjective reactions to presentations of drug-related stimuli (i.e., drug cues).
In investigations of these reactions in people with substance use disorders, changes in self-reported drug craving, physiological responses, and drug use are monitored as they are exposed to drug-related cues (e.g., cigarettes, bottles of alcohol, drug paraphernalia) or drug-neutral cues (e.g., pencils, glasses of water, a set of car keys).
Scientific theory
Cue reactivity is considered a risk factor for recovering addicts to relapse. There are two general types of cues: discrete which includes the substance itself and contextual which includes environments in which the substance is found. For example, for an alcoholic an alcoholic beverage would be a discrete cue and a bar would be a contextual cue. There are many different reactions to cues including withdrawal-like responses, opponent process responses, and substance-like responses.
A meta-analysis of 41 cue reactivity studies with people that have an alcohol, heroin, or cocaine addiction strongly supports the finding that people who have addictions have significant cue-specific reactions to drug-related stimuli. In general, these individuals, regardless of drug of abuse, report robust increases in craving and exhibit modest changes in autonomic responses, such as increases in heart rate and skin conductance and decreases in skin temperature, when exposed to drug-related versus neutral stimuli. Surprisingly, despite their obvious clinical relevance, drug use or drug-seeking behaviors are seldom measured in cue reactivity studies. However, when drug-use measures are used in cue reactivity studies the typical finding is a modest increase in drug-seeking or drug-use behavior.
Development
Clinical implications
Since people with substance use disorders are highly reactive to environmental cues pre
Document 4:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of feedback intensifies a change in the body’s physiological condition rather than reversing it?
A. negative feedback
B. susceptible feedback
C. positive feedback
D. neutral feedback
Answer:
|
|
sciq-5855
|
multiple_choice
|
The surfaces of the bones at the joints are covered with a smooth layer of cartilage, which reduces what force between the bones when they move?
|
[
"expulsion",
"friction",
"vibration",
"gravity"
] |
B
|
Relevant Documents:
Document 0:::
The American Society of Biomechanics (ASB) is a scholarly society that focuses on biomechanics across a variety of academic fields. It was founded in 1977 by a group of scientists and clinicians. The ASB holds an annual conference as an arena to disseminate and learn about the most recent progress in the field, to distribute awards to recognize excellent work, and to engage in public outreach to expand the impact of its members.
Conferences
The society hosts an annual conference that takes place in North America (usually USA). These conferences are periodically joint conferences held in conjunction with the International Society of Biomechanics (ISB), the North American Congress on Biomechanics (NACOB), and the World Congress of Biomechanics (WCB). The annual conference, when not partnered with another conference, receives around 700 to 800 abstract submissions per year, with attendees in approximately the same numbers. The first conference was held in 1977.
Often, work presented at these conferences achieves media attention due to the ‘public interest’ nature of the findings or that new devices are introduced there. Examples include:
the effect of tablet reading on cervical spine posture;
the squeak of the basketball shoe;
‘underwear’ to address back-pain;
recovery after exercise;
exoskeleton boots for joint pain during exercise;
how flamingos stand on one leg.
National Biomechanics Day
The ASB is instrumental in promoting National Biomechanics Day (NBD), which has received international recognition.
In New Zealand, Massey University attracted NZ$48,000 of national funding
through the Unlocking Curious Minds programme to promote National Biomechanics Day, with the aim to engage 1,100 students from lower-decile schools in an experiential learning day focused on the science of biomechanics.
It was first held in 2016 on April 7, and consisted of ‘open house’ visits from middle and high school students to biomechanics research and teaching laboratories a
Document 1:::
Canadian Society for Biomechanics / Société canadienne de biomécanique (CSB/SCB) was formed in 1973. The CSB is an Affiliated Society with the International Society of Biomechanics (ISB).
The purpose of the Society is to foster research and the interchange of information on the biomechanics of human physical activity.
Biomechanics research is being performed more and more by people from diverse disciplinary and professional backgrounds. CSB/SCB is attempting to enhance interdisciplinary communication and thereby improve the quality of biomechanics research and facilitate application of findings by bringing together therapists, physicians, engineers, sport researchers, ergonomists, and others who are using the same pool of basic biomechanics techniques but studying different human movement problems.
External links
Canadian Society for Biomechanics Official Site
International Society of Biomechanics Official Site
Canadian Society for Biomechanics podcast
Biomechanics
Professional associations based in Canada
Document 2:::
John Rasmussen is a professor of biomechanics at Aalborg University. His research is aimed both at solid mechanics, biomechanics, biomedical engineering and sports engineering.
Education and research
John Rasmussen was educated at Aalborg University, where he graduated as MA in 1986, and three years later received his PhD in computer-aided engineering. In addition to his academic work, John Rasmussen acted as chief executive officer at AnyBody Technology A/S from 2001 to 2008 and subsequently became the CTO of the same company. Furthermore, Rasmussen publishes research on a personal blog.
Rasmussen's research has influenced the theoretical field of structural optimisation in the late 1980s by applying the finite element method as the engine of his research, thus enabling optimisation of practical structures rather than conducting academic examples.
In the late 1990s, he formed the AnyBody Research Project at Aalborg University, which he is still a leading part of. One of the goals of his research is to develop methods for analysing the biomechanics of the human body involving bones, joints, muscles and tendons.
His research has contributed to the treatment of osteoarthritis, general disability and the optimisation of sports performances. Furthermore, John Rasmussen has initiated a new branch of interdisciplinary biomechanics in cooperation with Aalborg University’s Laboratory for Stem Cell Research in the aetiology of pressure ulcers, which has led to new results in tissue engineering.
Document 3:::
Soft tissue is all the tissue in the body that is not hardened by the processes of ossification or calcification such as bones and teeth. Soft tissue connects, surrounds or supports internal organs and bones, and includes muscle, tendons, ligaments, fat, fibrous tissue, lymph and blood vessels, fasciae, and synovial membranes.
It is sometimes defined by what it is not – such as "nonepithelial, extraskeletal mesenchyme exclusive of the reticuloendothelial system and glia".
Composition
The characteristic substances inside the extracellular matrix of soft tissue are the collagen, elastin and ground substance. Normally the soft tissue is very hydrated because of the ground substance. The fibroblasts are the most common cell responsible for the production of soft tissues' fibers and ground substance. Variations of fibroblasts, like chondroblasts, may also produce these substances.
Mechanical characteristics
At small strains, elastin confers stiffness to the tissue and stores most of the strain energy. The collagen fibers are comparatively inextensible and are usually loose (wavy, crimped). With increasing tissue deformation the collagen is gradually stretched in the direction of deformation. When taut, these fibers produce a strong growth in tissue stiffness. The composite behavior is analogous to a nylon stocking, whose rubber band does the role of elastin as the nylon does the role of collagen. In soft tissues, the collagen limits the deformation and protects the tissues from injury.
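One common way to capture this progressive stiffening quantitatively is an exponential (Fung-type) stress-strain law; the Python sketch below uses hypothetical coefficients purely for illustration and is not fitted to any tissue discussed in this excerpt.

```python
import math

C, K = 0.05, 12.0  # hypothetical material coefficients (arbitrary units)

def fung_stress(strain):
    """Exponential (Fung-type) stress-strain law often used for soft tissue:
    stress = C * (exp(K * strain) - 1).  At small strain the response is
    compliant (elastin-dominated); at larger strain the tangent stiffness
    C * K * exp(K * strain) grows rapidly as collagen fibres are recruited."""
    return C * (math.exp(K * strain) - 1.0)

for strain in (0.02, 0.05, 0.10, 0.20):
    tangent = C * K * math.exp(K * strain)  # analytic derivative of the law
    print(f"strain {strain:4.2f}: stress ~ {fung_stress(strain):6.3f}, "
          f"tangent stiffness ~ {tangent:6.2f} (arbitrary units)")
```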
Human soft tissue is highly deformable, and its mechanical properties vary significantly from one person to another. Impact testing results showed that the stiffness and the damping resistance of a test subject’s tissue are correlated with the mass, velocity, and size of the striking object. Such properties may be useful for forensics investigation when contusions were induced. When a solid object impacts a human soft tissue, the energy of the impact will be absorbed by the tissues
Document 4:::
Kinesiology () is the scientific study of human body movement. Kinesiology addresses physiological, anatomical, biomechanical, pathological, neuropsychological principles and mechanisms of movement. Applications of kinesiology to human health include biomechanics and orthopedics; strength and conditioning; sport psychology; motor control; skill acquisition and motor learning; methods of rehabilitation, such as physical and occupational therapy; and sport and exercise physiology. Studies of human and animal motion include measures from motion tracking systems, electrophysiology of muscle and brain activity, various methods for monitoring physiological function, and other behavioral and cognitive research techniques.
Basics
Kinesiology studies the science of human movement, performance, and function by applying the fundamental sciences of Cell Biology, Molecular Biology, Chemistry, Biochemistry, Biophysics, Biomechanics, Biomathematics, Biostatistics, Anatomy, Physiology, Exercise Physiology, Pathophysiology, Neuroscience, and Nutritional science. A bachelor's degree in kinesiology can provide strong preparation for graduate study in biomedical research, as well as in professional programs, such as medicine, dentistry, physical therapy, and occupational therapy.
The term "kinesiologist" is not a licensed nor professional designation in many countries, with the notable exception of Canada. Individuals with training in this area can teach physical education, work as personal trainers and sport coaches, provide consulting services, conduct research and develop policies related to rehabilitation, human motor performance, ergonomics, and occupational health and safety. In North America, kinesiologists may study to earn a Bachelor of Science, Master of Science, or Doctorate of Philosophy degree in Kinesiology or a Bachelor of Kinesiology degree, while in Australia or New Zealand, they are often conferred an Applied Science (Human Movement) degree (or higher). Many doctor
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The surfaces of the bones at the joints are covered with a smooth layer of cartilage, which reduces what force between the bones when they move?
A. expulsion
B. friction
C. vibration
D. gravity
Answer:
|
|
sciq-11454
|
multiple_choice
|
During interphase of what process, each chromosome is duplicated, and the sister chromatids formed during synthesis are held together at the centromere region by cohesin proteins?
|
[
"meiosis",
"mitosis",
"digestion",
"apoptosis"
] |
A
|
Relevant Documents:
Document 0:::
Chromosome segregation is the process in eukaryotes by which two sister chromatids formed as a consequence of DNA replication, or paired homologous chromosomes, separate from each other and migrate to opposite poles of the nucleus. This segregation process occurs during both mitosis and meiosis. Chromosome segregation also occurs in prokaryotes. However, in contrast to eukaryotic chromosome segregation, replication and segregation are not temporally separated. Instead segregation occurs progressively following replication.
Mitotic chromatid segregation
During mitosis chromosome segregation occurs routinely as a step in cell division (see mitosis diagram). As indicated in the mitosis diagram, mitosis is preceded by a round of DNA replication, so that each chromosome forms two copies called chromatids. These chromatids separate to opposite poles, a process facilitated by a protein complex referred to as cohesin. Upon proper segregation, a complete set of chromatids ends up in each of two nuclei, and when cell division is completed, each DNA copy previously referred to as a chromatid is now called a chromosome.
Meiotic chromosome and chromatid segregation
Chromosome segregation occurs at two separate stages during meiosis called anaphase I and anaphase II (see meiosis diagram). In a diploid cell there are two sets of homologous chromosomes of different parental origin (e.g. a paternal and a maternal set). During the phase of meiosis labeled “interphase s” in the meiosis diagram there is a round of DNA replication, so that each of the chromosomes initially present is now composed of two copies called chromatids. These chromosomes (paired chromatids) then pair with the homologous chromosome (also paired chromatids) present in the same nucleus (see prophase I in the meiosis diagram). The process of alignment of paired homologous chromosomes is called synapsis (see Synapsis). During synapsis, genetic recombination usually occurs. Some of the recombination even
Document 1:::
Sister chromatid cohesion refers to the process by which sister chromatids are paired and held together during certain phases of the cell cycle. Establishment of sister chromatid cohesion is the process by which chromatin-associated cohesin protein becomes competent to physically bind together the sister chromatids. In general, cohesion is established during S phase as DNA is replicated, and is lost when chromosomes segregate during mitosis and meiosis. Some studies have suggested that cohesion aids in aligning the kinetochores during mitosis by forcing the kinetochores to face opposite cell poles.
Cohesin loading
Cohesin first associates with the chromosomes during G1 phase. The cohesin ring is composed of two SMC (structural maintenance of chromosomes) proteins and two additional Scc proteins. Cohesin may originally interact with chromosomes via the ATPase domains of the SMC proteins. In yeast, the loading of cohesin on the chromosomes depends on proteins Scc2 and Scc4.
Cohesin interacts with the chromatin at specific loci. High levels of cohesin binding are observed at the centromere. Cohesin is also loaded at cohesin attachment regions (CARs) along the length of the chromosomes. CARs are approximately 500-800 base pair regions spaced at approximately 9 kilobase intervals along the chromosomes. In yeast, CARs tend to be rich in adenine-thymine base pairs. CARs are independent of origins of replication.
Establishment of cohesion
Establishment of cohesion refers to the process by which chromatin-associated cohesin becomes cohesion-competent. Chromatin association of cohesin is not sufficient for cohesion. Cohesin must undergo subsequent modification ("establishment") to be capable of physically holding the sister chromosomes together. Though cohesin can associate with chromatin earlier in the cell cycle, cohesion is established during S phase. Early data suggesting that S phase is crucial to cohesion was based on the fact that after S phase, sister chromatids
Document 2:::
Interkinesis or interphase II is a period of rest that cells of some species enter during meiosis between meiosis I and meiosis II. No DNA replication occurs during interkinesis; however, replication does occur during the interphase I stage of meiosis (see meiosis I). During interkinesis, the spindles of the first meiotic division disassemble and the microtubules reassemble into two new spindles for the second meiotic division. Interkinesis follows telophase I; however, many plants skip telophase I and interkinesis, going immediately into prophase II. Each chromosome still consists of two chromatids. During this stage, the number of other organelles may also increase.
Document 3:::
A kinetochore (, ) is a disc-shaped protein structure associated with duplicated chromatids in eukaryotic cells where the spindle fibers attach during cell division to pull sister chromatids apart. The kinetochore assembles on the centromere and links the chromosome to microtubule polymers from the mitotic spindle during mitosis and meiosis. The term kinetochore was first used in a footnote in a 1934 Cytology book by Lester W. Sharp and commonly accepted in 1936. Sharp's footnote reads: "The convenient term kinetochore (= movement place) has been suggested to the author by J. A. Moore", likely referring to John Alexander Moore who had joined Columbia University as a freshman in 1932.
Monocentric organisms, including vertebrates, fungi, and most plants, have a single centromeric region on each chromosome which assembles a single, localized kinetochore. Holocentric organisms, such as nematodes and some plants, assemble a kinetochore along the entire length of a chromosome.
Kinetochores start, control, and supervise the striking movements of chromosomes during cell division. During mitosis, which occurs after the amount of DNA is doubled in each chromosome (while maintaining the same number of chromosomes) in S phase, two sister chromatids are held together by a centromere. Each chromatid has its own kinetochore, which face in opposite directions and attach to opposite poles of the mitotic spindle apparatus. Following the transition from metaphase to anaphase, the sister chromatids separate from each other, and the individual kinetochores on each chromatid drive their movement to the spindle poles that will define the two new daughter cells. The kinetochore is therefore essential for the chromosome segregation that is classically associated with mitosis and meiosis.
Structure of Kinetochore
The kinetochore contains two regions:
an inner kinetochore, which is tightly associated with the centromere DNA and assembled in a specialized form of chromatin that persists t
Document 4:::
Prophase () is the first stage of cell division in both mitosis and meiosis. Beginning after interphase, DNA has already been replicated when the cell enters prophase. The main occurrences in prophase are the condensation of the chromatin reticulum and the disappearance of the nucleolus.
Staining and microscopy
Microscopy can be used to visualize condensed chromosomes as they move through meiosis and mitosis.
Various DNA stains are used to treat cells such that condensing chromosomes can be visualized as they move through prophase.
The giemsa G-banding technique is commonly used to identify mammalian chromosomes, but utilizing the technology on plant cells was originally difficult due to the high degree of chromosome compaction in plant cells. G-banding was fully realized for plant chromosomes in 1990. During both meiotic and mitotic prophase, giemsa staining can be applied to cells to elicit G-banding in chromosomes. Silver staining, a more modern technology, can be used in conjunction with giemsa staining to image the synaptonemal complex throughout the various stages of meiotic prophase. To perform G-banding, chromosomes must be fixed, so it cannot be performed on living cells.
Fluorescent stains such as DAPI can be used in both live plant and animal cells. These stains do not band chromosomes, but instead allow for DNA probing of specific regions and genes. Use of fluorescent microscopy has vastly improved spatial resolution.
Mitotic prophase
Prophase is the first stage of mitosis in animal cells, and the second stage of mitosis in plant cells. At the start of prophase there are two identical copies of each chromosome in the cell due to replication in interphase. These copies are referred to as sister chromatids and are attached by DNA element called the centromere. The main events of prophase are: the condensation of chromosomes, the movement of the centrosomes, the formation of the mitotic spindle, and the beginning of nucleoli break down.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
During interphase of what process, each chromosome is duplicated, and the sister chromatids formed during synthesis are held together at the centromere region by cohesin proteins?
A. meiosis
B. mitosis
C. digestion
D. apoptosis
Answer:
|
|
sciq-3623
|
multiple_choice
|
Falling onto what joint can fracture the distal humerus?
|
[
"elbow",
"Foot",
"thumb",
"knee"
] |
A
|
Relavent Documents:
Document 0:::
Epicondylitis is the inflammation of an epicondyle or of adjacent tissues. Epicondyles are on the medial and lateral aspects of the elbow, consisting of the two bony prominences at the distal end of the humerus. These bony projections serve as the attachment point for the forearm musculature. Inflammation of the tendons and muscles at these attachment points can lead to medial and/or lateral epicondylitis. This can occur through a range of activities that overuse the muscles attaching to the epicondyles, such as sports or job-related duties that increase the workload of the forearm musculature and place stress on the elbow. Lateral epicondylitis is also known as "Tennis Elbow" due to its association with tennis athletes, while medial epicondylitis is often referred to as "golfer's elbow."
Risk factors
In a cross-sectional population-based study among the working population, it was found that psychological distress and bending and straightening of the elbow joint for more than 1 hour per day were risk factors associated with epicondylitis.
Another study revealed the following potential risk factors among the working population:
Force and repetitive motions (handling tools > 1 kg, handling loads >20 kg at least 10 times/day, repetitive movements > 2 h/day) were found to be associated with the occurrence of lateral epicondylitis.
Low job control and low social support were also found to be associated with lateral epicondylitis.
Exposures of force (handling loads >5 kg, handling loads >20 kg at least 10 times/day, high hand grip forces >1 h/day), repetitiveness (repetitive movements for >2 h/day) and vibration (working with vibrating tools > 2 h/day) were associated with medial epicondylitis.
In addition to repetitive activities, obesity and smoking have been implicated as independent risk factors.
Symptoms
Tender to palpation at the medial or lateral epicondyle
Pain or difficulty with wrist flexion or extension
Diminished grip strength
Pain or burning se
Document 1:::
The Winquist and Hansen classification is a system of categorizing femoral shaft fractures based upon the degree of comminution.
Classification
Document 2:::
A rib fracture is a break in a rib bone. This typically results in chest pain that is worse with inspiration. Bruising may occur at the site of the break. When several ribs are broken in several places a flail chest results. Potential complications include a pneumothorax, pulmonary contusion, and pneumonia.
Rib fractures usually occur from a direct blow to the chest such as during a motor vehicle collision or from a crush injury. Coughing or metastatic cancer may also result in a broken rib. The middle ribs are most commonly fractured. Fractures of the first or second ribs are more likely to be associated with complications. Diagnosis can be made based on symptoms and supported by medical imaging.
Pain control is an important part of treatment. This may include the use of paracetamol (acetaminophen), NSAIDs, or opioids. A nerve block may be another option. While fractured ribs can be wrapped, this may increase complications. In those with a flail chest, surgery may improve outcomes. They are a common injury following trauma.
Signs and symptoms
This typically results in chest pain that is worse with inspiration. Bruising may occur at the site of the break.
Complications
When several ribs are broken in several places a flail chest results. Potential complications include a pneumothorax, pulmonary contusion, and pneumonia.
Causes
Rib fractures can occur with or without direct trauma during recreational activity. Cardiopulmonary resuscitation (CPR) has also been known to cause thoracic injury, including but not limited to rib and sternum fractures. They can also occur as a consequence of diseases such as cancer or rheumatoid arthritis. In elderly individuals a fall can cause a rib fracture, while in adults automobile accidents are a common cause of such an injury.
Diagnosis
Signs of a broken rib may include:
Pain on inhalation
Swelling in chest area
Bruise in chest area
Increasing shortness of breath
Coughing up blood (rib may have damaged lung)
Plain X-ra
Document 3:::
The Gosselin fracture is a V-shaped fracture of the distal tibia which extends into the ankle joint and fractures the tibial plafond into anterior and posterior fragments.
The fracture was described by Leon Athanese Gosselin, chief of surgery at the Hôpital de la Charité in Paris.
Document 4:::
Ulnar collateral ligament injuries can occur during certain activities such as overhead baseball pitching. Acute or chronic disruption of the ulnar collateral ligament result in medial elbow pain, valgus instability, and impaired throwing performance. There are both non-surgical and surgical treatment options.
Signs and symptoms
Pain along the inside of the elbow is the main symptom of this condition. Throwing athletes report it occurs most often during the acceleration phase of throwing. The injury is often associated with an experience of a sharp “pop” in the elbow, followed by pain during a single throw. In addition, swelling and bruising of the elbow, loss of elbow range of motion, and a sudden decrease in throwing velocity are all common symptoms of a UCL injury. If the injury is less severe, pain can alleviate with complete rest.
Causes
The UCL stabilizes the elbow from being abducted during a throwing motion. If intense or repeated bouts of valgus stress occur on the UCL, injury may occur. Damage to the UCL is common among baseball pitchers and javelin throwers because the throwing motion is similar. Physicians believe repetitive movements, especially pitching in baseball, cause UCL injuries. Furthermore, physicians have stated that if an adolescent throws over 85 throws for 8 months or more in a year, or throws when exhausted, the adolescent has a significantly higher risk of UCL injury.
Gridiron football, racquet sports, ice hockey and water polo players have also been treated for damage to the UCL. Specific overhead movements like those that occur during baseball pitching, tennis serving or volleyball spiking increase the risk of UCL injury. During the cocking phase of pitching, the shoulder is horizontally abducted, externally rotated and the elbow is flexed. There is slight stress on the UCL in this position but it increases when the shoulder is further externally rotated in a throw. The greater the stress the more the UCL is stretched causing stra
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Falling onto what joint can fracture the distal humerus?
A. elbow
B. Foot
C. thumb
D. knee
Answer:
|
|
sciq-1306
|
multiple_choice
|
What are the key cells of an immune response?
|
[
"keratinocytes",
"histiocytes",
"erythrocytes",
"lymphocytes"
] |
D
|
Relavent Documents:
Document 0:::
This is a list of Immune cells, also known as white blood cells, white cells, leukocytes, or leucocytes. They are cells involved in protecting the body against both infectious disease and foreign invaders.
Document 1:::
White blood cells, also called leukocytes, immune cells, or immunocytes, are cells of the immune system that are involved in protecting the body against both infectious disease and foreign invaders. White blood cells include three main subtypes: granulocytes, lymphocytes, and monocytes.
All white blood cells are produced and derived from multipotent cells in the bone marrow known as hematopoietic stem cells. Leukocytes are found throughout the body, including the blood and lymphatic system. All white blood cells have nuclei, which distinguishes them from the other blood cells, the anucleated red blood cells (RBCs) and platelets. The different white blood cells are usually classified by cell lineage (myeloid cells or lymphoid cells). White blood cells are part of the body's immune system. They help the body fight infection and other diseases. Types of white blood cells are granulocytes (neutrophils, eosinophils, and basophils), and agranulocytes (monocytes, and lymphocytes (T cells and B cells)). Myeloid cells (myelocytes) include neutrophils, eosinophils, mast cells, basophils, and monocytes. Monocytes are further subdivided into dendritic cells and macrophages. Monocytes, macrophages, and neutrophils are phagocytic. Lymphoid cells (lymphocytes) include T cells (subdivided into helper T cells, memory T cells, cytotoxic T cells), B cells (subdivided into plasma cells and memory B cells), and natural killer cells. Historically, white blood cells were classified by their physical characteristics (granulocytes and agranulocytes), but this classification system is less frequently used now. Produced in the bone marrow, white blood cells defend the body against infections and disease. An excess of white blood cells is usually due to infection or inflammation. Less commonly, a high white blood cell count could indicate certain blood cancers or bone marrow disorders.
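The lineage-based grouping described above can be summarized as a small nested data structure. The Python sketch below is purely illustrative: the cell names come from the passage, but the dictionary name, the subgroup labels, and the helper function are invented for this example.

# Lineage tree for the white blood cell types listed above (illustrative only).
LEUKOCYTE_LINEAGES = {
    "myeloid": {
        "granulocytes": ["neutrophil", "eosinophil", "basophil", "mast cell"],
        "monocyte-derived": ["monocyte", "macrophage", "dendritic cell"],
    },
    "lymphoid": {
        "T cells": ["helper T cell", "memory T cell", "cytotoxic T cell"],
        "B cells": ["plasma cell", "memory B cell"],
        "innate lymphocytes": ["natural killer cell"],
    },
}

def all_cell_types(tree):
    """Flatten the two-level lineage tree into a single list of cell types."""
    return [cell
            for subgroups in tree.values()
            for members in subgroups.values()
            for cell in members]

print(len(all_cell_types(LEUKOCYTE_LINEAGES)))  # 13 cell types in this sketch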
The number of leukocytes in the blood is often an indicator of disease, and thus the white blood
Document 2:::
The adaptive immune system, also known as the acquired immune system, or specific immune system is a subsystem of the immune system that is composed of specialized, systemic cells and processes that eliminate pathogens or prevent their growth. The acquired immune system is one of the two main immunity strategies found in vertebrates (the other being the innate immune system).
Like the innate system, the adaptive immune system includes both humoral immunity components and cell-mediated immunity components and destroys invading pathogens. Unlike the innate immune system, which is pre-programmed to react to common broad categories of pathogen, the adaptive immune system is highly specific to each particular pathogen the body has encountered.
Adaptive immunity creates immunological memory after an initial response to a specific pathogen, and leads to an enhanced response to future encounters with that pathogen. Antibodies are a critical part of the adaptive immune system. Adaptive immunity can provide long-lasting protection, sometimes for the person's entire lifetime. For example, someone who recovers from measles is now protected against measles for their lifetime; in other cases it does not provide lifetime protection, as with chickenpox. This process of adaptive immunity is the basis of vaccination.
The cells that carry out the adaptive immune response are white blood cells known as lymphocytes. B cells and T cells, two different types of lymphocytes, carry out the main activities: antibody responses, and cell-mediated immune response. In antibody responses, B cells are activated to secrete antibodies, which are proteins also known as immunoglobulins. Antibodies travel through the bloodstream and bind to the foreign antigen causing it to inactivate, which does not allow the antigen to bind to the host. Antigens are any substances that elicit the adaptive immune response. Sometimes the adaptive system is unable to distinguish harmful from harmless foreign molecule
Document 3:::
A lymphocyte is a type of white blood cell (leukocyte) in the immune system of most vertebrates. Lymphocytes include T cells (for cell-mediated, cytotoxic adaptive immunity), B cells (for humoral, antibody-driven adaptive immunity), and Innate lymphoid cells (ILCs) ("innate T cell-like" cells involved in mucosal immunity and homeostasis), of which natural killer cells are an important subtype (which functions in cell-mediated, cytotoxic innate immunity). They are the main type of cell found in lymph, which prompted the name "lymphocyte" (with cyte meaning cell). Lymphocytes make up between 18% and 42% of circulating white blood cells.
Types
The three major types of lymphocyte are T cells, B cells and natural killer (NK) cells. Lymphocytes can be identified by their large nucleus.
T cells and B cells
T cells (thymus cells) and B cells (bone marrow- or bursa-derived cells) are the major cellular components of the adaptive immune response. T cells are involved in cell-mediated immunity, whereas B cells are primarily responsible for humoral immunity (relating to antibodies). The function of T cells and B cells is to recognize specific "non-self" antigens, during a process known as antigen presentation. Once they have identified an invader, the cells generate specific responses that are tailored maximally to eliminate specific pathogens or pathogen-infected cells. B cells respond to pathogens by producing large quantities of antibodies which then neutralize foreign objects like bacteria and viruses. In response to pathogens some T cells, called T helper cells, produce cytokines that direct the immune response, while other T cells, called cytotoxic T cells, produce toxic granules that contain powerful enzymes which induce the death of pathogen-infected cells. Following activation, B cells and T cells leave a lasting legacy of the antigens they have encountered, in the form of memory cells. Throughout the lifetime of an animal, these memory cells will "remember" each s
Document 4:::
T cells are one of the important types of white blood cells of the immune system and play a central role in the adaptive immune response. T cells can be distinguished from other lymphocytes by the presence of a T-cell receptor (TCR) on their cell surface.
T cells are born from hematopoietic stem cells, found in the bone marrow. Developing T cells then migrate to the thymus gland to develop (or mature). T cells derive their name from the thymus. After migration to the thymus, the precursor cells mature into several distinct types of T cells. T cell differentiation also continues after they have left the thymus. Groups of specific, differentiated T cell subtypes have a variety of important functions in controlling and shaping the immune response.
One of these functions is immune-mediated cell death, and it is carried out by two major subtypes: CD8+ "killer" (cytotoxic) and CD4+ "helper" T cells. (These are named for the presence of the cell surface proteins CD8 or CD4.) CD8+ T cells, also known as "killer T cells", are cytotoxic – this means that they are able to directly kill virus-infected cells, as well as cancer cells. CD8+ T cells are also able to use small signalling proteins, known as cytokines, to recruit other types of cells when mounting an immune response. A different population of T cells, the CD4+ T cells, function as "helper cells". Unlike CD8+ killer T cells, the CD4+ helper T (TH) cells function by further activating memory B cells and cytotoxic T cells, which leads to a larger immune response. The specific adaptive immune response regulated by the TH cell depends on its subtype (such as T-helper1, T-helper2, T-helper17, regulatory T-cell), which is distinguished by the types of cytokines they secrete.
Regulatory T cells are yet another distinct population of T cells that provide the critical mechanism of tolerance, whereby immune cells are able to distinguish invading cells from "self". This prevents immune cells from inappropriately reacting again
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are the key cells of an immune response?
A. keratinocytes
B. histiocytes
C. erythrocytes
D. lymphocytes
Answer:
|
|
sciq-10147
|
multiple_choice
|
When matter changes into an entirely different substance with different chemical properties, what has occurred?
|
[
"chemical change",
"mechanical change",
"physical change",
"gaseous change"
] |
A
|
Relavent Documents:
Document 0:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 1:::
A combustible material is a material that can burn (i.e., sustain a flame) in air under certain conditions. A material is flammable if it ignites easily at ambient temperatures. In other words, a combustible material ignites with some effort and a flammable material catches fire immediately on exposure to flame.
The degree of flammability in air depends largely upon the volatility of the material - this is related to its composition-specific vapour pressure, which is temperature dependent. The quantity of vapour produced can be enhanced by increasing the surface area of the material forming a mist or dust. Take wood as an example. Finely divided wood dust can undergo explosive combustion and produce a blast wave. A piece of paper (made from wood) catches on fire quite easily. A heavy oak desk is much harder to ignite, even though the wood fibre is the same in all three materials.
Common sense (and indeed scientific consensus until the mid-1700s) would seem to suggest that material "disappears" when burned, as only the ash is left. In fact, there is an increase in weight because the flammable material reacts (or combines) chemically with oxygen, which also has mass. The original mass of flammable material and the mass of the oxygen required for flames equals the mass of the flame products (ash, water, carbon dioxide, and other gases). Antoine Lavoisier, one of the pioneers in these early insights, stated that Nothing is lost, nothing is created, everything is transformed, which would later be known as the law of conservation of mass. Lavoisier used the experimental fact that some metals gained mass when they burned to support his ideas.
Definitions
Historically, flammable, inflammable and combustible meant capable of burning. The word "inflammable" came through French from the Latin inflammāre = "to set fire to", where the Latin preposition "in-" means "in" as in "indoctrinate", rather than "not" as in "invisible" and "ineligible".
The word "inflammable" may be er
Document 2:::
In chemistry, a mixture is a material made up of two or more different chemical substances which are not chemically bonded. A mixture is the physical combination of two or more substances in which the identities are retained and are mixed in the form of solutions, suspensions and colloids.
Mixtures are one product of mechanically blending or mixing chemical substances such as elements and compounds, without chemical bonding or other chemical change, so that each ingredient substance retains its own chemical properties and makeup. Despite the fact that there are no chemical changes to its constituents, the physical properties of a mixture, such as its melting point, may differ from those of the components. Some mixtures can be separated into their components by using physical (mechanical or thermal) means. Azeotropes are one kind of mixture that usually poses considerable difficulties regarding the separation processes required to obtain their constituents (physical or chemical processes or, even a blend of them).
Characteristics of mixtures
All mixtures can be characterized as being separable by mechanical means (e.g. purification, distillation, electrolysis, chromatography, heat, filtration, gravitational sorting, centrifugation). Mixtures differ from chemical compounds in the following ways:
the substances in a mixture can be separated using physical methods such as filtration, freezing, and distillation.
there is little or no energy change when a mixture forms (see Enthalpy of mixing).
The substances in a mixture keep their separate properties.
In the example of sand and water, neither one of the two substances changed in any way when they are mixed. Although the sand is in the water it still keeps the same properties that it had when it was outside the water.
mixtures have variable compositions, while compounds have a fixed, definite formula.
when mixed, individual substances keep their properties in a mixture, while if they form a compound their properties
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
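For completeness, the intended answer ("decreases") follows in two lines from the first law of thermodynamics. The derivation below is a standard textbook argument written in LaTeX, using the usual symbols (internal energy U, heat Q, pressure p, volume V, heat capacity C_V), none of which are defined in the passage itself:

\begin{aligned}
\delta Q &= 0 \quad \text{(adiabatic process)} \\
dU &= \delta Q - p\,dV = -p\,dV < 0 \quad \text{since } dV > 0 \text{ during expansion} \\
dU &= n C_V\,dT \quad \text{(ideal gas)} \;\Longrightarrow\; dT < 0 .
\end{aligned}

Equivalently, $T V^{\gamma-1} = \text{const}$ with $\gamma > 1$, so increasing $V$ lowers $T$.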
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution.
The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible").
The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first.
The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy.
Under certain conditions, the concentration of the solute can exceed its usual solubility limit. The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears.
The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property de
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When matter changes into an entirely different substance with different chemical properties, what has occurred?
A. chemical change
B. mechanical change
C. physical change
D. gaseous change
Answer:
|
|
sciq-11260
|
multiple_choice
|
How do cancer cells typically spread from one part of the body to another?
|
[
"bloodstream",
"liver",
"kidneys",
"plasma"
] |
A
|
Relavent Documents:
Document 0:::
The collective–amoeboid transition (CAT) is a process by which collective multicellular groups dissociate into amoeboid single cells following the down-regulation of integrins. CATs contrast with epithelial–mesenchymal transitions (EMTs), which occur following a loss of E-cadherin. Like EMTs, CATs are involved in the invasion of tumor cells into surrounding tissues, with amoeboid movement more likely to occur in soft extracellular matrix (ECM) and mesenchymal movement in stiff ECM. Although once differentiated, cells typically do not change their migration mode, EMTs and CATs are highly plastic, with cells capable of interconverting between them depending on intracellular regulatory signals and the surrounding ECM.
CATs are the least common transition type in invading tumor cells, although they are noted in melanoma explants.
See also
Collective cell migration
Dedifferentiation
Invasion (cancer)
Document 1:::
CNS metastasis is the spread and proliferation of cancer cells from their original tumour to form secondary tumours in portions of the central nervous system.
The process of tumour cells invading distant tissue is complex and obscure, but modern technology has permitted an enhanced detection of metastasis. Currently, the diagnosis of central nervous system, or CNS, metastasis involves high-scale imaging to produce high-definition images of internal organs for analysis. This aids doctors and clinicians in prescribing suitable therapeutic methods, though there is yet to be a perfect treatment or preventative measure.
Mechanism
CNS metastasis is the spread and proliferation of cancer cells from their original tumour to form secondary tumours in portions of the CNS. Typically, this progression initiates when tumour cells separate from the primary tumour and insert into the bloodstream or the lymph system via intravasation. Intravasation into the circulatory system allows the tumour cells to travel and colonise distant sites such as the brain, a major structure of the CNS, forming a secondary brain tumour. However, CNS metastasis only occurs when genetically unstable cancers can adapt to foreign tissue native to the CNS environments, but dissimilar from the original tumour. Subsequently, metastasised cells assume new genomic phenotypes, while dropping unfavourable characteristics, once cells disassociate from the primary lesion. This is particularly crucial for the formation of CNS metastasis, as the tumour cells require characteristics favourable for the disruption of the blood-brain barrier, allowing them to transverse.
Recent evidence demonstrates that the dissemination of cells from the primary tumour is not sequential but consists of overlapping processes and routes. This includes the tumour cells invading and colluding with tissue stroma while adapting to evade immune surveillance by suppressive inhibition of regular cellular anti-tumourigenic properties. The
Document 2:::
Metastasis is a pathogenic agent's spread from an initial or primary site to a different or secondary site within the host's body; the term is typically used when referring to metastasis by a cancerous tumor. The newly pathological sites, then, are metastases (mets). It is generally distinguished from cancer invasion, which is the direct extension and penetration by cancer cells into neighboring tissues.
Cancer occurs after cells are genetically altered to proliferate rapidly and indefinitely. This uncontrolled proliferation by mitosis produces a primary heterogeneic tumour. The cells which constitute the tumor eventually undergo metaplasia, followed by dysplasia then anaplasia, resulting in a malignant phenotype. This malignancy allows for invasion into the circulation, followed by invasion to a second site for tumorigenesis.
Some cancer cells known as circulating tumor cells acquire the ability to penetrate the walls of lymphatic or blood vessels, after which they are able to circulate through the bloodstream to other sites and tissues in the body. This process is known (respectively) as lymphatic or hematogenous spread. After the tumor cells come to rest at another site, they re-penetrate the vessel or walls and continue to multiply, eventually forming another clinically detectable tumor. This new tumor is known as a metastatic (or secondary) tumor. Metastasis is one of the hallmarks of cancer, distinguishing it from benign tumors. Most cancers can metastasize, although in varying degrees. Basal cell carcinoma for example rarely metastasizes.
When tumor cells metastasize, the new tumor is called a secondary or metastatic tumor, and its cells are similar to those in the original or primary tumor. This means that if breast cancer metastasizes to the lungs, the secondary tumor is made up of abnormal breast cells, not of abnormal lung cells. The tumor in the lung is then called metastatic breast cancer, not lung cancer. Metastasis is a key element in cancer sta
Document 3:::
A circulating tumor cell (CTC) is a cell that has shed into the vasculature or lymphatics from a primary tumor and is carried around the body in the blood circulation. CTCs can extravasate and become seeds for the subsequent growth of additional tumors (metastases) in distant organs, a mechanism that is responsible for the vast majority of cancer-related deaths. The detection and analysis of CTCs can assist early patient prognoses and determine appropriate tailored treatments. Currently, there is one FDA-approved method for CTC detection, CellSearch, which is used to diagnose breast, colorectal and prostate cancer.
The detection of CTCs, or liquid biopsy, presents several advantages over traditional tissue biopsies. They are non-invasive, can be used repeatedly, and provide more useful information on metastatic risk, disease progression, and treatment effectiveness. For example, analysis of blood samples from cancer patients has found a propensity for increased CTC detection as the disease progresses. Blood tests are easy and safe to perform and multiple samples can be taken over time. By contrast, analysis of solid tumors necessitates invasive procedures that might limit patient compliance. The ability to monitor the disease progression over time could facilitate appropriate modification to a patient's therapy, potentially improving their prognosis and quality of life. The important aspect of the ability to prognose the future progression of the disease is elimination (at least temporarily) of the need for a surgery when the repeated CTC counts are low and not increasing; the obvious benefits of avoiding the surgery include avoiding the risk related to the innate tumor-genicity of cancer surgeries. To this end, technologies with the requisite sensitivity and reproducibility to detect CTCs in patients with metastatic disease have recently been developed. On the other hand, CTCs are very rare, often present as only a few cells per milliliter of blood, which makes th
Document 4:::
Blebbishield emergency program is a process which acts as a last line of defense for cancer stem cells after induction of apoptosis where the apoptotic blebs fuse to shield the cells/nucleus from the destructive force of apoptosis by forming blebbishields. Blebbishields in turn fuse to each other and generate cancer stem cell spheres/cellular transformation, essentially shifting the balance of dying cells back towards survival.
Discovery
Blebbishields were first identified in human bladder cancer cell line RT4 (HTB-2: ATCC), referred to as RT4P (RT4 parent) in the initial report.
Blebbishield formation
Every cell type, especially cancer cells, are capable of undergoing apoptosis, a process in which the plasma membrane undergoes blebbing followed by orderly deconstruction of cells into apoptotic bodies. Cancer stem cells have the extraordinary ability to construct blebbishields from these apoptotic bodies by bleb-bleb fusion and form stem cell spheres/cellular transformation by sub-sequent blebbishield-blebbishield fusion. Endocytosis and endocytosis-driven serpentine filopodia are necessary to tether and tie apoptotic bodies to facilitate fusion. The involvement of membrane fusion was confirmed by inhibiting cholesterol using the cholesterol antagonist Filipin-III.
Blebbishields and cancer stem cells
Sphere forming cells widely display characteristics of tumorigenesis. Cells from blebbishield derived spheres are tumorigenic in nature, providing an important clue for tumorigenesis. Blebbishield emergency program is postulated to have the strong rationale for bladder cancer recurrence as it is a potential cause for multifocal/satellite bladder tumors. The blebbishield derived cells exhibit strong drug resistance behavior and exhibit high sensitivity to Hoechst-33342 similar to side-population cells.
Positive and negative regulators of blebbishield survival
Caspases
Caspases (Caspase-3, caspase-8, caspase-9) are found to have important roles in contributing the fo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How do cancer cells typically spread from one part of the body to another?
A. bloodstream
B. liver
C. kidneys
D. plasma
Answer:
|
|
sciq-11056
|
multiple_choice
|
What is cytology?
|
[
"the study of plants",
"the study of cancers",
"the study of cell structure",
"the study of atomic structure"
] |
C
|
Relavent Documents:
Document 0:::
Medical biology is a field of biology that has practical applications in medicine, health care and laboratory diagnostics. It includes many biomedical disciplines and areas of specialty that typically contains the "bio-" prefix such as:
molecular biology, biochemistry, biophysics, biotechnology, cell biology, embryology,
nanobiotechnology, biological engineering, laboratory medical biology,
cytogenetics, genetics, gene therapy,
bioinformatics, biostatistics, systems biology,
microbiology, virology, parasitology,
physiology, pathology,
toxicology, and many others that generally concern life sciences as applied to medicine.
Medical biology is the cornerstone of modern health care and laboratory diagnostics. It concerned a wide range of scientific and technological approaches: from an in vitro diagnostics to the in vitro fertilisation, from the molecular mechanisms of a cystic fibrosis to the population dynamics of the HIV, from the understanding molecular interactions to the study of the carcinogenesis, from a single-nucleotide polymorphism (SNP) to the gene therapy.
Medical biology based on molecular biology combines all issues of developing molecular medicine into large-scale structural and functional relationships of the human genome, transcriptome, proteome and metabolome with the particular point of view of devising new technologies for prediction, diagnosis and therapy.
See also
External links
Document 1:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind: neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases, which has overall aided in the understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
Document 2:::
Cytotechnology is the microscopic interpretation of cells to detect cancer and other abnormalities. This includes the examination of samples collected from the uterine cervix (Pap test), lung, gastrointestinal tract, or body cavities.
A cytotechnologist is an allied health professional trained to evaluate specimens on glass slides using microscopes. In some laboratories, a computer performs an initial evaluation, pointing out areas that may be of particular interest for later examination. In many laboratories, cytotechnologists perform the initial evaluation. The cytotechnologist performs a secondary evaluation and determines whether a specimen is normal or abnormal. Abnormal specimens are referred to a pathologist for final interpretation or medical diagnosis.
Different countries have different certification requirements and standards for cytotechnologists. In the United States, there are currently two routes for certification: a person can first earn a bachelor's degree and then attend an accredited program in cytotechnology for one year, or they can attend a cytotechnology program that also awards a bachelor's degree. After successful completion of either route, the individual becomes eligible to take a certification exam offered by the American Society for Clinical Pathology. People who complete the requirements and pass the examination are entitled to designate themselves as "CT (ASCP)". The American Society for Cytotechnology (ASCT) sets U.S. professional standards, monitors legislative and regulatory issues, and provides education. Individual states regulate the licensure of cytotechnologists, usually following American Society of Cytopathology (ASC) guidelines.
The ASC is for cytopathologists but certain qualified cytotechnologists can join it as well.
See also
Gynaecologic cytology
Cytopathology
Document 3:::
Biomedicine (also referred to as Western medicine, mainstream medicine or conventional medicine) is a branch of medical science that applies biological and physiological principles to clinical practice. Biomedicine stresses standardized, evidence-based treatment validated through biological research, with treatment administered via formally trained doctors, nurses, and other such licensed practitioners.
Biomedicine also can relate to many other categories in health and biological related fields. It has been the dominant system of medicine in the Western world for more than a century.
It includes many biomedical disciplines and areas of specialty that typically contain the "bio-" prefix such as molecular biology, biochemistry, biotechnology, cell biology, embryology, nanobiotechnology, biological engineering, laboratory medical biology, cytogenetics, genetics, gene therapy, bioinformatics, biostatistics, systems biology, neuroscience, microbiology, virology, immunology, parasitology, physiology, pathology, anatomy, toxicology, and many others that generally concern life sciences as applied to medicine.
Overview
Biomedicine is the cornerstone of modern health care and laboratory diagnostics. It concerns a wide range of scientific and technological approaches: from in vitro diagnostics to in vitro fertilisation, from the molecular mechanisms of cystic fibrosis to the population dynamics of the HIV virus, from the understanding of molecular interactions to the study of carcinogenesis, from a single-nucleotide polymorphism (SNP) to gene therapy.
Biomedicine is based on molecular biology and combines all issues of developing molecular medicine into large-scale structural and functional relationships of the human genome, transcriptome, proteome, physiome and metabolome with the particular point of view of devising new technologies for prediction, diagnosis and therapy.
Biomedicine involves the study of (patho-) physiological processes with methods from biology and
Document 4:::
The following outline is provided as an overview of and topical guide to biophysics:
Biophysics – interdisciplinary science that uses the methods of physics to study biological systems.
Nature of biophysics
Biophysics is
An academic discipline – branch of knowledge that is taught and researched at the college or university level. Disciplines are defined (in part), and recognized by the academic journals in which research is published, and the learned societies and academic departments or faculties to which their practitioners belong.
A scientific field (a branch of science) – widely recognized category of specialized expertise within science, and typically embodies its own terminology and nomenclature. Such a field will usually be represented by one or more scientific journals, where peer-reviewed research is published.
A natural science – one that seeks to elucidate the rules that govern the natural world using empirical and scientific methods.
A biological science – concerned with the study of living organisms, including their structure, function, growth, evolution, distribution, and taxonomy.
A branch of physics – concerned with the study of matter and its motion through space and time, along with related concepts such as energy and force.
An interdisciplinary field – field of science that overlaps with other sciences
Scope of biophysics research
Biomolecular scale
Biomolecule
Biomolecular structure
Organismal scale
Animal locomotion
Biomechanics
Biomineralization
Motility
Environmental scale
Biophysical environment
Biophysics research overlaps with
Agrophysics
Biochemistry
Biophysical chemistry
Bioengineering
Biogeophysics
Nanotechnology
Systems biology
Branches of biophysics
Astrobiophysics – field of intersection between astrophysics and biophysics concerned with the influence of the astrophysical phenomena upon life on planet Earth or some other planet in general.
Medical biophysics – interdisciplinary field that applies me
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is cytology?
A. the study of plants
B. the study of cancers
C. the study of cell structure
D. the study of atomic structure
Answer:
|
|
sciq-8818
|
multiple_choice
|
What size of ring will a summer drought cause in a tree?
|
[
"medium",
"smaller",
"giant",
"larger"
] |
B
|
Relavent Documents:
Document 0:::
Dendroclimatology is the science of determining past climates from trees (primarily properties of the annual tree rings). Tree rings are wider when conditions favor growth, narrower when times are difficult. Other properties of the annual rings, such as maximum latewood density (MXD) have been shown to be better proxies than simple ring width. Using tree rings, scientists have estimated many local climates for hundreds to thousands of years previous. By combining multiple tree-ring studies (sometimes with other climate proxy records), scientists have estimated past regional and global climates.
Advantages
Tree rings are especially useful as climate proxies in that they can be well-dated via dendrochronology, i.e. matching of the rings from sample to sample. This allows extension backwards in time using deceased tree samples, even using samples from buildings or from archeological digs. Another advantage of tree rings is that they are clearly demarked in annual increments, as opposed to other proxy methods such as boreholes. Furthermore, tree rings respond to multiple climatic effects (temperature, moisture, cloudiness), so that various aspects of climate (not just temperature) can be studied. However, this can be a double-edged sword.
Limitations
Along with the advantages of dendroclimatology are some limitations: confounding factors, geographic coverage, annular resolution, and collection difficulties. The field has developed various methods to partially adjust for these challenges.
Confounding factors
There are multiple climate and non-climate factors as well as nonlinear effects that impact tree ring width. Methods to isolate single factors (of interest) include botanical studies to calibrate growth influences and sampling of "limiting stands" (those expected to respond mostly to the variable of interest).
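As a concrete illustration of the calibration idea mentioned above, the Python sketch below fits a linear relation between a ring-width index and instrumental temperature over an overlap period, then applies that relation to older rings. All numbers are synthetic and the variable names are invented for this example; real reconstructions use carefully cross-dated chronologies and more careful statistics.

# Toy calibration/reconstruction, assuming synthetic data (not a real chronology).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "instrumental period": 50 years of summer temperature (deg C)
temps = 15 + rng.normal(0.0, 1.0, 50)
# Ring-width index responding (noisily) to temperature in this toy example
ring_width = 0.8 + 0.1 * (temps - 15) + rng.normal(0.0, 0.03, 50)

# Calibration: least-squares fit of temperature against ring width
slope, intercept = np.polyfit(ring_width, temps, 1)

# "Reconstruction": apply the fitted relation to pre-instrumental ring widths
old_ring_width = np.array([0.72, 0.85, 0.91, 0.78])
reconstructed_temp = slope * old_ring_width + intercept
print(np.round(reconstructed_temp, 2))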
Climate factors
Climate factors that affect trees include temperature, precipitation, sunlight, and wind. To differentiate among these factors, sc
Document 1:::
The International Tree-Ring Data Bank (ITRDB) is a data repository for tree ring measurements that has been maintained since 1990 by the United States' National Oceanic and Atmospheric Administration Paleoclimatology Program and World Data Center for Paleoclimatology. The ITRDB was initially established by Hal Fritts through the Laboratory of Tree-Ring Research at the University of Arizona, with a grant from the US National Science Foundation, following the First International Workshop on Dendrochronology in 1974. The ITRDB accepts all tree ring data with sufficient metadata to be uploaded, but its founding focus was on tree ring measurements intended for climatic studies.
Specific information is required for uploading data to the database, such as the raw tree ring measurements, an indication of the type of measurement (full ring widths, earlywood, latewood), and the location. However, the types of data and the rules for accuracy and precision of the primary data, tree-ring width measurements, are decided by the dendrochronologists who are contributing the data, rather than by NOAA or any other governing organization.
See also
Dendrochronology
Document 2:::
The world's superlative trees can be ranked by any factor. Records have been kept for trees with superlative height, trunk diameter or girth, canopy coverage, airspace volume, wood volume, estimated mass, and age.
Tallest
The heights of the tallest trees in the world have been the subject of considerable dispute and much exaggeration. Modern verified measurements with laser rangefinders or with tape drop measurements made by tree climbers (such as those carried out by canopy researchers), have shown that some older tree height measurement methods are often unreliable, sometimes producing exaggerations of 5% to 15% or more above the real height. Historical claims of trees growing to , and even , are now largely disregarded as unreliable, and attributed to human error.
The following are the tallest reliably measured specimens from the top 10 species. This table shows only currently standing specimens:
Tallest historically
Despite the high heights attained by trees nowadays, records exist of much greater heights in the past, before widespread logging took place. Some, if not most, of these records are without a doubt greatly exaggerated, but some have been reportedly measured with semi-reliable instruments when cut down and on the ground. Some of the heights recorded in this way exceed the maximum possible height of a tree as calculated by theorists, lending some limited credibility to speculation that some superlative trees are able to 'reverse' transpiration streams and absorb water through needles in foggy environments. All three of the tallest tree species continue to be Coast redwoods, Douglas fir and Giant mountain ash.
Stoutest
The girth of a tree is usually much easier to measure than the height, as it is a simple matter of stretching a tape round the trunk, and pulling it taut to find the circumference. Despite this, UK tree author Alan Mitchell made the following comment about measurements of yew trees:
As a general standard, tree girth is taken at "b
Document 3:::
This is a list of the largest plants by clade. Measurements are based on height, volume, length, diameter, and weight, depending on the most appropriate way(s) of measurement for the clade.
Gymnosperms (Gymnospermae)
Conifers (Pinopsida)
The conifer division of plants include the tallest organism, and the largest single-stemmed plants by wood volume, wood mass, and main stem circumference. The largest by wood volume and mass is the giant sequoia (Sequoiadendron giganteum), native to Sierra Nevada and California; it grows to an average height of and in diameter. Specimens have been recorded up to in height and (not the same individual) in diameter; the largest individual still standing is the General Sherman tree, with a volume of .
Although typically not so large in volume, the closely related coast redwood (Sequoia sempervirens) of the Pacific coast in North America is taller, reaching a maximum height of – the Hyperion Tree, which ranks it as the world's tallest known living tree and organism (not including its roots under ground). The largest historical specimen (and largest known single-stem organism) was the Lindsey Creek tree, a coast redwood with a minimum trunk volume of over and a mass of over . It fell during a storm in 1905.
The conifers also include the largest tree by circumference in the world, the Montezuma cypress (Taxodium mucronatum). The thickest recorded tree, found in Mexico, is called Árbol del Tule, with a circumference of at its base and a diameter of at above ground level; its height is over . These trees dwarf any other non-communal organism, as even the largest blue whales are likely to weigh one-sixteenth as much as a large giant sequoia or coast redwood. See list of superlative trees for other tree records.
Cycads (Cycadophyta)
The largest single-stemmed species of cycad is Hope's cycad (Lepidozamia hopei), endemic to the Australian state of Queensland. The largest examples of this species have been over tall and have had
Document 4:::
The Laboratory of Tree-Ring Research (LTRR) was established in 1937 by A.E. Douglass, founder of the modern science of dendrochronology. The LTRR is a research unit in the College of Science at the University of Arizona in Tucson. Since its founding, visiting scholars and faculty at the lab have done notable work in the areas of climate change, fire history, ecology, archeology and hydrology.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What size of ring will a summer drought cause in a tree?
A. medium
B. smaller
C. giant
D. larger
Answer:
|
|
scienceQA-6110
|
multiple_choice
|
How long is a leather belt?
|
[
"34 inches",
"34 feet",
"34 miles",
"34 yards"
] |
A
|
The best estimate for the length of a leather belt is 34 inches.
34 feet, 34 yards, and 34 miles are all too long.
|
Relavent Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
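To make the model structure above concrete, here is a minimal sketch in Python (the function name and the toy domain are illustrative assumptions, not taken from the source) that checks the standard knowledge-space axioms for a family of feasible states: the family must contain the empty state and the whole domain and be closed under union.

from itertools import combinations

def is_knowledge_space(domain, states):
    """Check the defining axioms of a knowledge space: it contains the
    empty state and the full domain, and it is closed under union."""
    states = {frozenset(s) for s in states}
    if frozenset() not in states or frozenset(domain) not in states:
        return False
    # Union closure: the union of any two feasible states is again feasible.
    return all((a | b) in states for a, b in combinations(states, 2))

# Toy domain in which skill "b" requires skill "a" as a prerequisite.
domain = {"a", "b"}
feasible = [set(), {"a"}, {"a", "b"}]        # {"b"} alone is not feasible
print(is_knowledge_space(domain, feasible))  # True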
Document 2:::
Suken is a world mathematics certification program and examination established in Japan in 1988.
Outline of Suken
Each Suken level (Kyu) has two sections. Section 1 is calculation and Section 2 is application.
Passing Rate
In order to pass the Suken, you must correctly answer approximately 70% of section 1 and approximately 60% of section 2.
Levels
Level 5 (7th grade math)
The examination time is 180 minutes for section 1, 60 minutes for section 2.
Level 4 (8th grade)
The examination time is 60 minutes for section 1, 60 minutes for section 2.
Level 3 (9th grade)
The examination time is 60 minutes for section 1, 60 minutes for section 2.
Levels 5 - 3 include the following subjects:
Calculation with negative numbers
Inequalities
Simultaneous equations
Congruency and similarities
Square roots
Factorization
Quadratic equations and functions
The Pythagorean theorem
Probabilities
Level pre-2 (10th grade)
The examination time is 60 minutes for section 1, 90 minutes for section 2.
Level 2 (11th grade)
The examination time is 60 minutes for section 1, 90 minutes for section 2.
Level pre-1 (12th grade)
The examination time is 60 minutes for section 1, 120 minutes for section 2.
Levels pre-2 - pre-1 include the following subjects:
Quadratic functions
Trigonometry
Sequences
Vectors
Complex numbers
Basic calculus
Matrices
Simple curved lines
Probability
Level 1 (undergrad and graduate)
The examination time is 60 minutes for section 1, 120 minutes for section 2.
Level 1 includes the following subjects:
Linear algebra
Vectors
Matrices
Differential equations
Statistics
Probability
Document 3:::
Dragon silk is a material created by Kraig Biocraft Laboratories of Ann Arbor, Michigan, from genetically modified silkworms for use in body armor. Dragon silk combines the elasticity and strength of spider silk. Its tensile strength is as high as 1.79 gigapascals (as much as 37% above widely reported spider silk) and its elasticity is above 38%, exceeding the maximum reported properties of spider silk. Dragon silk is reported to be more flexible than Monster silk and stronger than "Big Red," a recombinant spider silk designed for increased strength.
Properties
Mechanical properties
Dragon silk has mechanical properties higher than those of any other fiber reported to date.
Tensile Strength
Dragon silk's tensile strength is higher than that of steel (450-2000 MPa). Its strength has been reported to be as high as 1.79 GPa, which is 37% higher than widely reported spider silk. Its tensile strength is also higher than that of "Big Red" silk, which had been reported as the strongest fiber ever made; "Big Red" silk was developed in the same laboratories as Dragon silk.
Flexibility
Dragon silk is far more flexible than Kevlar (the material used by the US Army to develop body armor). Its flexibility is 38% higher than that of normal spider silk, and it is noticeably more flexible than the "Monster silk" from the same lab. In percentage terms, Kevlar's flexibility is about 3%, while Dragon silk's flexibility is 30% to 40%.
History
In 2010, scientists discovered the first spider silk, which was a great achievement, as spider silk is one of the strongest natural fibers. The problem, however, was that spiders are cannibalistic and territorial, so it is impossible to create a cost-effective spider farm. To overcome this problem, scientists at Kraig Labs developed a method for making spider silk from silkworms. In 2011, Malcolm J. Fraser, Donald L. Jarvis and their colleagues published a study in which they described how they removed the silkworm's silk-making protein and replaced it with the spider's protein to build unique
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How long is a leather belt?
A. 34 inches
B. 34 feet
C. 34 miles
D. 34 yards
Answer:
|
sciq-4144
|
multiple_choice
|
What do we call the theory of electromagnetism on the particle scale?
|
[
"gravity electrodynamics",
"light electrodynamics",
"quantum electrodynamics",
"iron electrodynamics"
] |
C
|
Relavent Documents:
Document 0:::
The study of electromagnetism in higher education, as a fundamental part of both physics and engineering, is typically accompanied by textbooks devoted to the subject. The American Physical Society and the American Association of Physics Teachers recommend a full year of graduate study in electromagnetism for all physics graduate students. A joint task force by those organizations in 2006 found that in 76 of the 80 US physics departments surveyed, a course using John David Jackson's Classical Electrodynamics was required for all first year graduate students. For undergraduates, there are several widely used textbooks, including David Griffiths' Introduction to Electrodynamics and Electricity and Magnetism by Edward Mills Purcell and D. J. Morin. Also at an undergraduate level, Richard Feynman's classic The Feynman Lectures on Physics is available online to read for free.
Undergraduate
There are several widely used undergraduate textbooks in electromagnetism, including David Griffiths' Introduction to Electrodynamics as well as Electricity and Magnetism by Edward Mills Purcell and D. J. Morin. The Feynman Lectures on Physics also include a volume on electromagnetism that is available to read online for free, through the California Institute of Technology. In addition, there are popular physics textbooks that include electricity and magnetism among the material they cover, such as David Halliday and Robert Resnick's Fundamentals of Physics.
Graduate
A 2006 report by a joint taskforce between the American Physical Society and the American Association of Physics Teachers found that 76 of the 80 physics departments surveyed require a first-year graduate course in John David Jackson's Classical Electrodynamics. This made Jackson's book the most popular textbook in any field of graduate-level physics, with Herbert Goldstein's Classical Mechanics as the second most popular with adoption at 48 universities. In a 2015 review of Andrew Zangwill's Modern Electrodynamics in
Document 1:::
Applied physics is the application of physics to solve scientific or engineering problems. It is usually considered a bridge or a connection between physics and engineering.
"Applied" is distinguished from "pure" by a subtle combination of factors, such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. Applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of scientific principles in practical devices and systems and with the application of physics in other areas of science and high technology.
Examples of research and development areas
Accelerator physics
Acoustics
Atmospheric physics
Biophysics
Brain–computer interfacing
Chemistry
Chemical physics
Differentiable programming
Artificial intelligence
Scientific computing
Engineering physics
Chemical engineering
Electrical engineering
Electronics
Sensors
Transistors
Materials science and engineering
Metamaterials
Nanotechnology
Semiconductors
Thin films
Mechanical engineering
Aerospace engineering
Astrodynamics
Electromagnetic propulsion
Fluid mechanics
Military engineering
Lidar
Radar
Sonar
Stealth technology
Nuclear engineering
Fission reactors
Fusion reactors
Optical engineering
Photonics
Cavity optomechanics
Lasers
Photonic crystals
Geophysics
Materials physics
Medical physics
Health physics
Radiation dosimetry
Medical imaging
Magnetic resonance imaging
Radiation therapy
Microscopy
Scanning probe microscopy
Atomic force microscopy
Scanning tunneling microscopy
Scanning electron microscopy
Transmission electron microscopy
Nuclear physics
Fission
Fusion
Optical physics
Nonlinear optics
Quantum optics
Plasma physics
Quantum technology
Quantum computing
Quantum cryptography
Renewable energy
Space physics
Spectroscopy
See also
Applied science
Applied mathematics
Engineering
Engineering Physics
High Technology
Document 2:::
Relativistic electromagnetism is a physical phenomenon explained in electromagnetic field theory due to Coulomb's law and Lorentz transformations.
Electromechanics
After Maxwell proposed the differential equation model of the electromagnetic field in 1873, the mechanism of action of fields came into question, for instance in Kelvin's master class held at Johns Hopkins University in 1884, commemorated a century later.
The requirement that the equations remain consistent when viewed from various moving observers led to special relativity, a geometric theory of 4-space where intermediation is by light and radiation. The spacetime geometry provided a context for technical description of electric technology, especially generators, motors, and lighting at first. The Coulomb force was generalized to the Lorentz force. For example, with this model transmission lines and power grids were developed and radio frequency communication explored.
An effort to mount a full-fledged electromechanics on a relativistic basis is seen in the work of Leigh Page, from the project outline in 1912 to his textbook Electrodynamics (1940). There, the interplay (according to the differential equations) of electric and magnetic fields as viewed by moving observers is examined. What is charge density in electrostatics becomes proper charge density and generates a magnetic field for a moving observer.
A revival of interest in this method for education and training of electrical and electronics engineers broke out in the 1960s after Richard Feynman’s textbook.
Rosser’s book Classical Electromagnetism via Relativity was popular, as was Anthony French’s treatment in his textbook which illustrated diagrammatically the proper charge density. One author proclaimed, "Maxwell — Out of Newton, Coulomb, and Einstein".
The use of retarded potentials to describe electromagnetic fields from source-charges is an expression of relativistic electromagnetism.
Principle
The question of how an electric field
Document 3:::
This article summarizes equations in the theory of electromagnetism.
Definitions
Here subscripts e and m are used to distinguish between electric and magnetic charges. The definitions for monopoles are of theoretical interest, although real magnetic dipoles can be described using pole strengths. There are two possible units for monopole strength, Wb (weber) and A·m (ampere metre). Dimensional analysis shows that magnetic charges in the two units are related by qm(Wb) = μ0 qm(A·m).
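As a small numerical illustration of the relation qm(Wb) = μ0 qm(A·m) above, the sketch below performs the conversion; the function name and the sample pole strength are assumptions made only for the example.

import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability in SI units (H/m)

def monopole_strength_wb(q_m_ampere_metre):
    """Convert a magnetic pole strength from A·m to Wb via q_m(Wb) = mu0 * q_m(A·m)."""
    return MU0 * q_m_ampere_metre

print(monopole_strength_wb(1.0))   # about 1.2566e-06 Wb for an illustrative 1 A·m pole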
Initial quantities
Electric quantities
Contrary to the strong analogy between (classical) gravitation and electrostatics, there are no "centre of charge" or "centre of electrostatic attraction" analogues.
Electric transport
Electric fields
Magnetic quantities
Magnetic transport
Magnetic fields
Electric circuits
DC circuits, general definitions
AC circuits
Magnetic circuits
Electromagnetism
Electric fields
General Classical Equations
Magnetic fields and moments
General classical equations
Electric circuits and electronics
Below, N = number of conductors or circuit components. Subscript net refers to the equivalent and resultant property value.
See also
Defining equation (physical chemistry)
Fresnel equations
List of equations in classical mechanics
List of equations in fluid mechanics
List of equations in gravitation
List of equations in nuclear and particle physics
List of equations in quantum mechanics
List of equations in wave theory
List of photonics equations
List of relativistic equations
SI electromagnetism units
Table of thermodynamic equations
Footnotes
Sources
Further reading
Physical quantities
SI units
Equations of physics
Electromagnetism
Document 4:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do we call the theory of electromagnetism on the particle scale?
A. gravity electrodynamics
B. light electrodynamics
C. quantum electrodynamics
D. iron electrodynamics
Answer:
|
|
sciq-7682
|
multiple_choice
|
What two forms can fluids take?
|
[
"liquid or gas",
"vapor or gas",
"mixture or gas",
"water or gas"
] |
A
|
Relavent Documents:
Document 0:::
A liquid is a nearly incompressible fluid that conforms to the shape of its container but retains a nearly constant volume independent of pressure. It is one of the four fundamental states of matter (the others being solid, gas, and plasma), and is the only state with a definite volume but no fixed shape.
The density of a liquid is usually close to that of a solid, and much higher than that of a gas. Therefore, liquid and solid are both termed condensed matter. On the other hand, as liquids and gases share the ability to flow, they are both called fluids.
A liquid is made up of tiny vibrating particles of matter, such as atoms, held together by intermolecular bonds. Like a gas, a liquid is able to flow and take the shape of a container. Unlike a gas, a liquid maintains a fairly constant density and does not disperse to fill every space of a container.
Although liquid water is abundant on Earth, this state of matter is actually the least common in the known universe, because liquids require a relatively narrow temperature/pressure range to exist. Most known matter in the universe is either gas (as interstellar clouds) or plasma (as stars).
Introduction
Liquid is one of the four primary states of matter, with the others being solid, gas and plasma. A liquid is a fluid. Unlike a solid, the molecules in a liquid have a much greater freedom to move. The forces that bind the molecules together in a solid are only temporary in a liquid, allowing a liquid to flow while a solid remains rigid.
A liquid, like a gas, displays the properties of a fluid. A liquid can flow, assume the shape of a container, and, if placed in a sealed container, will distribute applied pressure evenly to every surface in the container. If liquid is placed in a bag, it can be squeezed into any shape. Unlike a gas, a liquid is nearly incompressible, meaning that it occupies nearly a constant volume over a wide range of pressures; it does not generally expand to fill available space in a containe
Document 1:::
Rheometry generically refers to the experimental techniques used to determine the rheological properties of materials, that is, the qualitative and quantitative relationships between stresses and strains and their derivatives. The techniques used are experimental. Rheometry investigates materials in relatively simple flows like steady shear flow, small amplitude oscillatory shear, and extensional flow.
The choice of the adequate experimental technique depends on the rheological property which has to be determined. This can be the steady shear viscosity, the linear viscoelastic properties (complex viscosity respectively elastic modulus), the elongational properties, etc.
For all real materials, the measured property will be a function of the flow conditions during which it is being measured (shear rate, frequency, etc.) even if for some materials this dependence is vanishingly low under given conditions (see Newtonian fluids).
Rheometry is a specific concern for smart fluids such as electrorheological fluids and magnetorheological fluids, as it is the primary method to quantify the useful properties of these materials.
Rheometry is considered useful in the fields of quality control, process control, and industrial process modelling, among others. For some, the techniques, particularly the qualitative rheological trends, can yield the classification of materials based on the main interactions between different possible elementary components and how they qualitatively affect the rheological behavior of the materials. Novel applications of these concepts include measuring cell mechanics in thin layers, especially in drug screening contexts.
Of non-Newtonian fluids
The viscosity of a non-Newtonian fluid is defined by a power law:
η = η0 γ^(n-1)
where η is the viscosity after shear is applied, η0 is the initial viscosity, γ is the shear rate, and if
n < 1, the fluid is shear thinning,
n > 1, the fluid is shear thickening,
n = 1, the fluid is Newtonian.
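A short numerical sketch of the power-law model above; the parameter values and function names are illustrative assumptions, not taken from the source.

def apparent_viscosity(eta0, shear_rate, n):
    """Power-law apparent viscosity: eta = eta0 * shear_rate**(n - 1)."""
    return eta0 * shear_rate ** (n - 1)

def classify(n):
    """Classify flow behaviour from the power-law index n."""
    if n < 1:
        return "shear thinning"
    if n > 1:
        return "shear thickening"
    return "Newtonian"

# Illustrative values only: eta0 = 2.0 Pa·s, shear rate = 10 1/s.
for n in (0.5, 1.0, 1.5):
    print(n, classify(n), apparent_viscosity(2.0, 10.0, n))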
In rheometry, shear forces are applied t
Document 2:::
A binary liquid is a type of chemical combination that creates a special reaction or property as a result of mixing two liquid chemicals that are normally inert or have no function by themselves. A number of chemical products, such as plastic foams and some explosives, are produced by mixing two chemicals as a binary liquid.
See also
Binary chemical weapon
Thermophoresis
Percus-Yevick equation
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
In general relativity, a fluid solution is an exact solution of the Einstein field equation in which the gravitational field is produced entirely by the mass, momentum, and stress density of a fluid.
In astrophysics, fluid solutions are often employed as stellar models. (It might help to think of a perfect gas as a special case of a perfect fluid.) In cosmology, fluid solutions are often used as cosmological models.
Mathematical definition
The stress–energy tensor of a relativistic fluid can be written in the form
T^ab = μ u^a u^b + p h^ab + (u^a q^b + q^a u^b) + π^ab
Here
the world lines of the fluid elements are the integral curves of the velocity vector u^a,
the projection tensor h^ab = g^ab + u^a u^b projects other tensors onto hyperplane elements orthogonal to u^a,
the matter density is given by the scalar function μ,
the pressure is given by the scalar function p,
the heat flux vector is given by q^a,
the viscous shear tensor is given by π^ab.
The heat flux vector and viscous shear tensor are transverse to the world lines, in the sense that q_a u^a = 0 and π_ab u^b = 0.
This means that they are effectively three-dimensional quantities, and since the viscous stress tensor is symmetric and traceless, they have respectively three and five linearly independent components. Together with the density and pressure, this makes a total of 10 linearly independent components, which is the number of linearly independent components in a four-dimensional symmetric rank two tensor.
Special cases
Several special cases of fluid solutions are noteworthy (here speed of light c = 1):
A perfect fluid has vanishing viscous shear and vanishing heat flux: T^ab = (μ + p) u^a u^b + p g^ab
A dust is a pressureless perfect fluid: T^ab = μ u^a u^b
A radiation fluid is a perfect fluid with the equation of state μ = 3p
The last two are often used as cosmological models for (respectively) matter-dominated and radiation-dominated epochs. Notice that while in general it requires ten functions to specify a fluid, a perfect fluid requires only two, and dusts and radiation fluids each require only one function. It is much easier to find such solutions than it is to find
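To make the perfect-fluid special case concrete, the sketch below assembles T^ab = (μ + p) u^a u^b + p g^ab numerically in flat Minkowski coordinates with signature (-,+,+,+) and c = 1, then checks that its trace equals 3p - μ; the density, pressure, and velocity values are illustrative assumptions.

import numpy as np

# Minkowski metric with signature (-, +, +, +); c = 1 throughout.
g = np.diag([-1.0, 1.0, 1.0, 1.0])
g_inv = np.linalg.inv(g)

mu, p = 1.0, 0.2                      # illustrative density and pressure
u = np.array([1.0, 0.0, 0.0, 0.0])    # fluid at rest, so u^a u_a = -1

# Perfect fluid: T^ab = (mu + p) u^a u^b + p g^ab (g^ab is the inverse metric).
T = (mu + p) * np.outer(u, u) + p * g_inv

# The trace g_ab T^ab should equal 3p - mu for a perfect fluid.
trace = np.einsum("ab,ab->", g, T)
print(np.isclose(trace, 3 * p - mu))  # True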
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What two forms can fluids take?
A. liquid or gas
B. vapor or gas
C. mixture or gas
D. water or gas
Answer:
|
|
sciq-1295
|
multiple_choice
|
What happens to an animal's telomeres as it ages?
|
[
"lengthen",
"multiply",
"shorten",
"divide"
] |
C
|
Relavent Documents:
Document 0:::
Eternal youth is the concept of human physical immortality free of ageing. The youth referred to is usually meant to be in contrast to the depredations of aging, rather than a specific age of the human lifespan. Eternal youth is common in mythology, and is a popular theme in fiction.
Religion and mythology
Eternal youth is a characteristic of the inhabitants of Paradise in Abrahamic religions.
The Hindus believe that the Vedic and the post-Vedic rishis have attained immortality, which implies the ability to change one's body's age or even shape at will. These are some of the siddhas in Yoga. Markandeya is said to always stay at the age of 16.
The difference between eternal life and the more specific eternal youth is a recurrent theme in Greek and Roman mythology. The mytheme of requesting the boon of immortality from a god, but forgetting to ask for eternal youth appears in the story of Tithonus. A similar theme is found in Ovid regarding the Cumaean Sibyl.
In Norse mythology, Iðunn is described as providing the gods apples that grant them eternal youthfulness in the 13th-century Prose Edda.
Telomeres
An individual's DNA plays a role in the aging process. Aging begins even before birth, as soon as cells start to die and need to be replaced. On the ends of each chromosome are repetitive sequences of DNA, telomeres, that protect the chromosome from joining with other chromosomes, and have several key roles. One of these roles is to regulate cell division by allowing each cell division to remove a small amount of genetic code. The amount removed varies by the cell type being replicated. The gradual degradation of the telomeres restricts cell division to 40-60 times, also known as the Hayflick limit. Once this limit has been reached, more cells die than can be replaced in the same time span. Thus, soon after this limit is reached the organism dies. The importance of telomeres is now clearly evident: lengthen the telomeres, lengthen the life.
However, a study of th
Document 1:::
The Hayflick limit, or Hayflick phenomenon, is the number of times a normal somatic, differentiated human cell population will divide before cell division stops. However, this limit does not apply to stem cells.
The concept of the Hayflick limit was advanced by American anatomist Leonard Hayflick in 1961, at the Wistar Institute in Philadelphia, Pennsylvania. Hayflick demonstrated that a normal human fetal cell population will divide between 40 and 60 times in cell culture before entering a senescence phase. This finding refuted the contention by Alexis Carrel that normal cells are immortal.
Each time a cell undergoes mitosis, the telomeres on the ends of each chromosome shorten slightly. Cell division will cease once telomeres shorten to a critical length. Hayflick interpreted his discovery to be aging at the cellular level. The aging of cell populations appears to correlate with the overall physical aging of an organism.
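The mechanism described above can be illustrated with a toy calculation; the telomere lengths and the loss per division below are purely illustrative assumptions chosen to land inside the reported 40-60 division range, not measured values.

def divisions_until_senescence(initial_bp, loss_per_division_bp, critical_bp):
    """Count cell divisions until the telomere shortens to the critical length."""
    length, divisions = initial_bp, 0
    while length - loss_per_division_bp >= critical_bp:
        length -= loss_per_division_bp
        divisions += 1
    return divisions

# Illustrative parameters only.
print(divisions_until_senescence(initial_bp=10_000,
                                 loss_per_division_bp=100,
                                 critical_bp=5_000))   # 50 divisions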
Macfarlane Burnet coined the name "Hayflick limit" in his book Intrinsic Mutagenesis: A Genetic Approach to Ageing, published in 1974.
History
The belief in cell immortality
Prior to Leonard Hayflick's discovery, it was believed that vertebrate cells had an unlimited potential to replicate. Alexis Carrel, a Nobel prize-winning surgeon, had stated "that all cells explanted in tissue culture are immortal, and that the lack of continuous cell replication was due to ignorance on how best to cultivate the cells". He claimed to have cultivated fibroblasts from the hearts of chickens (which typically live 5 to 10 years) and to have kept the culture growing for 34 years.
However, other scientists have been unable to replicate Carrel's results, and they are suspected to be due to an error in experimental procedure. To provide required nutrients, embryonic stem cells of chickens may have been re-added to the culture daily. This would have easily allowed the cultivation of new, fresh cells in the culture, so there was not an infinite reproduction of
Document 2:::
The Dog Aging Project is a long-term biological study of aging in dogs, centered at the University of Washington. Professors Daniel Promislow and Matt Kaeberlein are the co-directors of the project. Together with Chief Veterinarian, Dr. Kate Creevy, the project primarily focuses on research to understand dog aging through the collection and analysis of big data through citizen science.
Additionally, there is a small component of the project that explores the use of pharmaceuticals to potentially increase life span of dogs. The project has implications for improving the life spans of humans and is an example of geroscience.
The project engages the general public to register their dogs in the studies, and therefore the project is an example of citizen science. Nearly 40,000 dogs have been registered with the project. The majority of the dogs will participate in a longitudinal study of 10,000 dogs over a 10-year period conducted across the United States. Individual dogs are followed for the duration of their lives to understand the biological and environmental factors that influence dog longevity. A small subset of those dogs (approximately 500) will be enrolled in a double-blind, placebo-controlled study of the pharmaceutical rapamycin, which has shown signs of extending longevity in species such as mice.
The Dog Aging Project is an open science initiative. The investigators have committed to releasing all anonymized research data to the public domain. The longitudinal study portion of the Dog Aging Project bears some similarity to the Golden Retriever Lifetime Study of the Morris Animal Foundation although with much larger phenotypic diversity. The entire project also shares operational similarities to Darwin's Ark, a citizen science initiative of companion animals with more specific focus on genetics. The initiatives are each managed to ensure the data can be integrated into a powerful master data set.
A premise of the project is that dogs may be a sentine
Document 3:::
The disposable soma theory of aging states that organisms age due to an evolutionary trade-off between growth, reproduction, and DNA repair maintenance. Formulated by Thomas Kirkwood, the disposable soma theory explains that an organism only has a limited amount of resources that it can allocate to its various cellular processes. Therefore, a greater investment in growth and reproduction would result in reduced investment in DNA repair maintenance, leading to increased cellular damage, shortened telomeres, accumulation of mutations, compromised stem cells, and ultimately, senescence. Although many models, both animal and human, have appeared to support this theory, parts of it are still controversial.
Specifically, while the evolutionary trade-off between growth and aging has been well established,
the relationship between reproduction and aging is still without scientific consensus, and the cellular mechanisms largely undiscovered.
Background and history
British biologist Thomas Kirkwood first proposed the disposable soma theory of aging in a 1977 Nature review article. The theory was inspired by Leslie Orgel's Error Catastrophe Theory of Aging, which was published fourteen years earlier, in 1963. Orgel believed that the process of aging arose due to mutations acquired during the replication process, and Kirkwood developed the disposable soma theory in order to mediate Orgel's work with evolutionary genetics.
Principles
The disposable soma theory of aging posits that there is a trade-off in resource allocation between somatic maintenance and reproductive investment. Too low an investment in self-repair would be evolutionarily unsound, as the organism would likely die before reproductive age. However, too high an investment in self-repair would also be evolutionarily unsound due to the fact that one's offspring would likely die before reproductive age. Therefore, there is a compromise and resources are partitioned accordingly. However, this compromise is thought
Document 4:::
A mega-telomere (also known as an ultra-long telomere or a class III telomere), is an extremely long telomere sequence that sits on the end of chromosomes and prevents the loss of genetic information during cell replication. Like regular telomeres, mega-telomeres are made of a repetitive sequence of DNA and associated proteins, and are located on the ends of chromosomes. However, mega-telomeres are substantially longer than regular telomeres, ranging in size from 50 kilobases to several megabases (for comparison, the normal length of vertebrate telomeres is usually between 10 and 20 kilobases).
Telomeres act like protective caps for the chromosome. During cell division, a cell will make copies of its DNA. The enzymes in the cell that are responsible for copying the DNA cannot copy the very ends of the chromosomes. This is sometimes called the "end replication problem". If a cell did not contain telomeres, genetic information from the DNA on the ends of chromosomes would be lost with each division. However, because chromosomes have telomeres or mega-telomeres on their ends, repetitive non-essential sequences of DNA are lost instead (See: Telomere shortening). While the chromosomes in most eukaryotic organisms are capped with telomeres, mega-telomeres are only found in a few species, such as mice and some birds. The specific function of mega-telomeres in vertebrate cells is still unclear.
Discovery
Telomeric regions of DNA were first identified in the late 1970s (See: Discovery of Telomeric DNA). However, extremely long regions of telomere sequence were not recognized in vertebrates until over a decade later. These sequences, which ranged from 30 to 150 kilobases in size, were first identified in laboratory mice by David Kipling and Howard Cooke in 1990.
In 1994, extremely long telomeric regions were identified in chickens. Telomeric sequences ranging from 20 kilobases to several megabases have also been identified in several species of birds. These large regions
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What happens to an animal's telomeres as it ages?
A. lengthen
B. multiply
C. shorten
D. divide
Answer:
|
|
sciq-8652
|
multiple_choice
|
Regulation of hormone production hormone levels are primarily controlled through negative feedback, in which rising levels of a hormone inhibit its?
|
[
"recent release",
"Limited release",
"further release",
"particular release"
] |
C
|
Relavent Documents:
Document 0:::
The following is a list of hormones found in Homo sapiens. Spelling is not uniform for many hormones. For example, current North American and international usage uses estrogen and gonadotropin, while British usage retains the Greek digraph in oestrogen and favours the earlier spelling gonadotrophin.
Hormone listing
Steroid
Document 1:::
Pulsatile secretion is a biochemical phenomenon observed in a wide variety of cell and tissue types, in which chemical products are secreted in a regular temporal pattern. The most common cellular products observed to be released in this manner are intercellular signaling molecules such as hormones or neurotransmitters. Examples of hormones that are secreted pulsatilely include insulin, thyrotropin, TRH, gonadotropin-releasing hormone (GnRH) and growth hormone (GH). In the nervous system, pulsatility is observed in oscillatory activity from central pattern generators. In the heart, pacemakers are able to work and secrete in a pulsatile manner. A pulsatile secretion pattern is critical to the function of many hormones in order to maintain the delicate homeostatic balance necessary for essential life processes, such as development and reproduction. Variations of the concentration in a certain frequency can be critical to hormone function, as evidenced by the case of GnRH agonists, which cause functional inhibition of the receptor for GnRH due to profound downregulation in response to constant (tonic) stimulation. Pulsatility may function to sensitize target tissues to the hormone of interest and upregulate receptors, leading to improved responses. This heightened response may have served to improve the animal's fitness in its environment and promote its evolutionary retention.
Pulsatile secretion in its various forms is observed in:
Hypothalamic-pituitary-gonadal axis (HPG) related hormones
Glucocorticoids
Insulin
Growth hormone
Parathyroid hormone
Neuroendocrine Pulsatility
Nervous system control over hormone release is based in the hypothalamus, from which the neurons that populate the paraventricular and arcuate nuclei originate. These neurons project to the median eminence, where they secrete releasing hormones into the hypophysial portal system connecting the hypothalamus with the pituitary gland. There, they dictate endocrine function via the four Hyp
Document 2:::
In molecular biology, the crustacean neurohormone family of proteins is a family of neuropeptides expressed by arthropods. The family includes the following types of neurohormones:
Crustacean hyperglycaemic hormone (CHH). CHH is primarily involved in blood sugar regulation, but also plays a role in the control of moulting and reproduction.
Moult-inhibiting hormone (MIH). MIH inhibits Y-organs where moulting hormone (ecdysteroid) is secreted. A moulting cycle is initiated when MIH secretion diminishes or stops.
Gonad-inhibiting hormone (GIH), also known as vitellogenesis-inhibiting hormone (VIH) because of its role in inhibiting vitellogenesis in female animals.
Mandibular organ-inhibiting hormone (MOIH). MOIH represses the synthesis of methyl farnesoate, the precursor of insect juvenile hormone III in the mandibular organ.
Ion transport peptide (ITP) from locust. ITP stimulates salt and water reabsorption and inhibits acid secretion in the ileum of the locust.
Caenorhabditis elegans uncharacterised protein ZC168.2.
These neurohormones are peptides of 70 to 80 amino acid residues which are processed from larger precursors. They contain six conserved cysteines that are involved in disulfide bonds.
Document 3:::
Hormonal imprinting (HI) is a phenomenon which takes place at the first encounter between a hormone and its developing receptor in the critical periods of life (in unicellulars during the whole life) and determines the later signal transduction capacity of the cell. The most important period in mammals is the perinatal one, however this system can be imprinted at weaning, at puberty and in case of continuously dividing cells during the whole life. Faulty imprinting is caused by drugs, environmental pollutants and other hormone-like molecules present in excess at the critical periods with lifelong receptorial, morphological, biochemical and behavioral consequences. HI is transmitted to the hundreds of progeny generations in unicellulars and (as proved) to a few generations also in mammals.
Document 4:::
An ectopic hormone is a hormone produced by tumors derived from tissue that is not typically associated with its production.
On the other hand, the term entopic is used to refer to hormones produced by tissue in tumors that are normally engaged in the production of that hormone.
The excess hormone secretion is considered detrimental to the normal body homeostasis. This hormone production typically results in a set of signs and symptoms that are called a paraneoplastic syndrome.
Some clinical syndromes caused by ectopic hormone production include:
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Regulation of hormone production hormone levels are primarily controlled through negative feedback, in which rising levels of a hormone inhibit its?
A. recent release
B. Limited release
C. further release
D. particular release
Answer:
|
|
ai2_arc-82
|
multiple_choice
|
Which of these is never found in prokaryotic cells?
|
[
"cell membrane",
"ribosome",
"cell wall",
"nucleus"
] |
D
|
Relavent Documents:
Document 0:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
Document 1:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and mobility.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i
Document 2:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, a double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries one or more chromosomes with a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 3:::
This lecture, named in memory of Keith R. Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year.
Lecturers
Source: ASCB
See also
List of biology awards
Document 4:::
Cellular compartments in cell biology comprise all of the closed parts within the cytosol of a eukaryotic cell, usually surrounded by a single or double lipid layer membrane. These compartments are often, but not always, defined as membrane-bound organelles. The formation of cellular compartments is called compartmentalization.
Both organelles, the mitochondria and chloroplasts (in photosynthetic organisms), are compartments that are believed to be of endosymbiotic origin. Other compartments such as peroxisomes, lysosomes, the endoplasmic reticulum, the cell nucleus or the Golgi apparatus are not of endosymbiotic origin. Smaller elements like vesicles, and sometimes even microtubules can also be counted as compartments.
It was long thought that compartmentalization is not found in prokaryotic cells, but the discovery of carboxysomes and many other metabolosomes revealed that prokaryotic cells are capable of making compartmentalized structures, albeit ones that in most cases are not surrounded by a lipid bilayer but are built purely of protein.
Types
In general there are 4 main cellular compartments, they are:
The nuclear compartment comprising the nucleus
The intercisternal space which comprises the space between the membranes of the endoplasmic reticulum (which is continuous with the nuclear envelope)
Organelles (the mitochondrion in all eukaryotes and the plastid in phototrophic eukaryotes)
The cytosol
Function
Compartments have three main roles. One is to establish physical boundaries for biological processes that enables the cell to carry out different metabolic activities at the same time. This may include keeping certain biomolecules within a region, or keeping other molecules outside. Within the membrane-bound compartments, different intracellular pH, different enzyme systems, and other differences are isolated from other organelles and cytosol. With mitochondria, the cytosol has an oxidizing environment which converts NADH to NAD+. With these cases, the
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which of these is never found in prokaryotic cells?
A. cell membrane
B. ribosome
C. cell wall
D. nucleus
Answer:
|
|
sciq-5260
|
multiple_choice
|
What molecule consists of two atoms of hydrogen and one atom of oxygen?
|
[
"hydrogen peroxide",
"water",
"carbon monoxide",
"air"
] |
B
|
Relavent Documents:
Document 0:::
In chemistry, the carbon-hydrogen bond ( bond) is a chemical bond between carbon and hydrogen atoms that can be found in many organic compounds. This bond is a covalent, single bond, meaning that carbon shares its outer valence electrons with up to four hydrogens. This completes both of their outer shells, making them stable.
Carbon–hydrogen bonds have a bond length of about 1.09 Å (1.09 × 10−10 m) and a bond energy of about 413 kJ/mol (see table below). Using Pauling's scale—C (2.55) and H (2.2)—the electronegativity difference between these two atoms is 0.35. Because of this small difference in electronegativities, the bond is generally regarded as being non-polar. In structural formulas of molecules, the hydrogen atoms are often omitted. Compound classes consisting solely of bonds and bonds are alkanes, alkenes, alkynes, and aromatic hydrocarbons. Collectively they are known as hydrocarbons.
In October 2016, astronomers reported that the very basic chemical ingredients of life—the carbon-hydrogen molecule (CH, or methylidyne radical), the carbon-hydrogen positive ion (CH+) and the carbon ion (C+)—are the result, in large part, of ultraviolet light from stars, rather than in other ways, such as the result of turbulent events related to supernovae and young stars, as thought earlier.
Bond length
The length of the carbon-hydrogen bond varies slightly with the hybridisation of the carbon atom. A bond between a hydrogen atom and an sp2 hybridised carbon atom is about 0.6% shorter than between hydrogen and sp3 hybridised carbon. A bond between hydrogen and sp hybridised carbon is shorter still, about 3% shorter than sp3 C-H. This trend is illustrated by the molecular geometry of ethane, ethylene and acetylene.
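As a rough check of the hybridisation trend just described, the snippet below converts the quoted percentage shortenings into approximate bond lengths, taking the sp3 C-H length of about 1.09 Å as the reference; the results are only as precise as those percentages.

# Approximate C-H bond lengths from the hybridisation trend quoted above.
sp3 = 1.09                      # angstroms, reference value for an sp3 C-H bond
sp2 = sp3 * (1 - 0.006)         # about 0.6% shorter than sp3
sp  = sp3 * (1 - 0.03)          # about 3% shorter than sp3

print(f"sp3 C-H ~ {sp3:.3f} Å, sp2 C-H ~ {sp2:.3f} Å, sp C-H ~ {sp:.3f} Å")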
Reactions
The C−H bond in general is very strong, so it is relatively unreactive. In several compound classes, collectively called carbon acids, the C−H bond can be sufficiently acidic for proton removal. Unactivated C−H bonds are found in alkanes and are no
Document 1:::
Atomicity is the total number of atoms present in a molecule. For example, each molecule of oxygen (O2) is composed of two oxygen atoms. Therefore, the atomicity of oxygen is 2.
In older contexts, atomicity is sometimes equivalent to valency. Some authors also use the term to refer to the maximum number of valencies observed for an element.
Classifications
Based on atomicity, molecules can be classified as:
Monoatomic (composed of one atom). Examples include He (helium), Ne (neon), Ar (argon), and Kr (krypton). All noble gases are monoatomic.
Diatomic (composed of two atoms). Examples include H2 (hydrogen), N2 (nitrogen), O2 (oxygen), F2 (fluorine), and Cl2 (chlorine). Halogens are usually diatomic.
Triatomic (composed of three atoms). Examples include O3 (ozone).
Polyatomic (composed of three or more atoms). Examples include S8.
Atomicity may vary in different allotropes of the same element.
The exact atomicity of metals, as well as some other elements such as carbon, cannot be determined because they consist of a large and indefinite number of atoms bonded together. They are typically designated as having an atomicity of 1.
The atomicity of a homonuclear molecule can be derived by dividing the molecular weight by the atomic weight. For example, the molecular weight of oxygen is 31.999, while its atomic weight is 15.999; therefore, its atomicity is approximately 2 (31.999/15.999 ≈ 2).
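The division above generalizes directly; the short sketch below applies it to a few homonuclear species, using rounded reference weights for illustration only.

def atomicity(molecular_weight, atomic_weight):
    """Atomicity of a homonuclear molecule: molecular weight divided by atomic weight, rounded to the nearest integer."""
    return round(molecular_weight / atomic_weight)

# Rounded reference weights, for illustration only.
print(atomicity(31.999, 15.999))   # O2 -> 2
print(atomicity(47.997, 15.999))   # O3 -> 3
print(atomicity(256.52, 32.065))   # S8 -> 8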
Examples
The most common values of atomicity for the first 30 elements in the periodic table are as follows:
Document 2:::
This is an index of lists of molecules (i.e. by year, number of atoms, etc.). Millions of molecules have existed in the universe since before the formation of Earth. Three of them, carbon dioxide, water and oxygen were necessary for the growth of life. Although humanity had always been surrounded by these substances, it has not always known what they were composed of.
By century
The following is an index of list of molecules organized by time of discovery of their molecular formula or their specific molecule in case of isomers:
List of compounds
By number of carbon atoms in the molecule
List of compounds with carbon number 1
List of compounds with carbon number 2
List of compounds with carbon number 3
List of compounds with carbon number 4
List of compounds with carbon number 5
List of compounds with carbon number 6
List of compounds with carbon number 7
List of compounds with carbon number 8
List of compounds with carbon number 9
List of compounds with carbon number 10
List of compounds with carbon number 11
List of compounds with carbon number 12
List of compounds with carbon number 13
List of compounds with carbon number 14
List of compounds with carbon number 15
List of compounds with carbon number 16
List of compounds with carbon number 17
List of compounds with carbon number 18
List of compounds with carbon number 19
List of compounds with carbon number 20
List of compounds with carbon number 21
List of compounds with carbon number 22
List of compounds with carbon number 23
List of compounds with carbon number 24
List of compounds with carbon numbers 25-29
List of compounds with carbon numbers 30-39
List of compounds with carbon numbers 40-49
List of compounds with carbon numbers 50+
Other lists
List of interstellar and circumstellar molecules
List of gases
List of molecules with unusual names
See also
Molecule
Empirical formula
Chemical formula
Chemical structure
Chemical compound
Chemical bond
Coordination complex
L
Document 3:::
Nitric oxide (nitrogen oxide or nitrogen monoxide) is a colorless gas with the formula . It is one of the principal oxides of nitrogen. Nitric oxide is a free radical: it has an unpaired electron, which is sometimes denoted by a dot in its chemical formula (•N=O or •NO). Nitric oxide is also a heteronuclear diatomic molecule, a class of molecules whose study spawned early modern theories of chemical bonding.
An important intermediate in industrial chemistry, nitric oxide forms in combustion systems and can be generated by lightning in thunderstorms. In mammals, including humans, nitric oxide is a signaling molecule in many physiological and pathological processes. It was proclaimed the "Molecule of the Year" in 1992. The 1998 Nobel Prize in Physiology or Medicine was awarded for discovering nitric oxide's role as a cardiovascular signalling molecule.
Nitric oxide should not be confused with nitrogen dioxide (NO2), a brown gas and major air pollutant, or with nitrous oxide (N2O), an anesthetic gas.
Physical properties
Electronic configuration
The ground state electronic configuration of NO is, in united atom notation:
The first two orbitals are actually pure atomic 1sO and 1sN from oxygen and nitrogen respectively and therefore are usually not noted in the united atom notation. Orbitals noted with an asterisk are antibonding. The ordering of 5σ and 1π according to their binding energies is subject to discussion. Removal of a 1π electron leads to 6 states whose energies span a range starting at a lower level than that of a 5σ electron and extending to a higher level. This is due to the different orbital momentum couplings between a 1π and a 2π electron.
The lone electron in the 2π orbital makes NO a doublet (X ²Π) in its ground state whose degeneracy is split in the fine structure by spin-orbit coupling, with a total momentum J = 1/2 or J = 3/2.
Dipole
The dipole moment of NO has been measured experimentally to be 0.15740 D and is oriented from O to N (⁻NO⁺) due to the transf
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
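A minimal worked check of the example above, assuming an ideal gas with constant heat capacities: for an adiabatic process Q = 0, so the first law gives ΔU = −W. During expansion the gas does positive work on its surroundings, so ΔU < 0, and because the internal energy of an ideal gas depends only on temperature, T must decrease. Equivalently, for a reversible adiabatic expansion T V^(γ−1) = constant with γ > 1, so increasing V lowers T.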
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What molecule consists of two atoms of hydrogen and one atom of oxygen?
A. hydrogen peroxide
B. water
C. carbon monoxide
D. air
Answer:
|
|
sciq-9574
|
multiple_choice
|
How many directions can ions flow along the axon?
|
[
"six",
"one",
"two",
"three"
] |
B
|
Relavent Documents:
Document 0:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 1:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover )
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, )
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson )
See also
Glossary of biology
A.P Bio (TV Show)
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework.
AP Physics 1 and 2
AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge.
AP Physics 1
AP Physics 1 covers Newtonian mechanics, including:
Unit 1: Kinematics
Unit 2: Dynamics
Unit 3: Circular Motion and Gravitation
Unit 4: Energy
Unit 5: Momentum
Unit 6: Simple Harmonic Motion
Unit 7: Torque and Rotational Motion
Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2.
AP Physics 2
AP Physics 2 covers the following topics:
Unit 1: Fluids
Unit 2: Thermodynamics
Unit 3: Electric Force, Field, and Potential
Unit 4: Electric Circuits
Unit 5: Magnetism and Electromagnetic Induction
Unit 6: Geometric and Physical Optics
Unit 7: Quantum, Atomic, and Nuclear Physics
AP Physics C
From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single
Document 4:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99th percentile) and 320 (1st percentile) respectively. The mean score for all test takers from July 2009 to July 2012 was 526 with a standard deviation of 95.
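As a rough illustration of how a scaled score relates to those percentiles, the sketch below assumes the reported scores are approximately normally distributed with the published mean (526) and standard deviation (95). ETS's actual percentiles come from its own concordance tables, so this is only an approximation, and the function name is mine.

from statistics import NormalDist

def approx_percentile(scaled_score, mean=526.0, sd=95.0):
    # Approximate percentile (0-100) of a scaled score under a normality assumption.
    return 100.0 * NormalDist(mu=mean, sigma=sd).cdf(scaled_score)

print(round(approx_percentile(760)))  # ~99, consistent with the reported 99th percentile
print(round(approx_percentile(320)))  # ~2, close to the reported 1st percentile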
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How many directions can ions flow along the axon?
A. six
B. one
C. two
D. three
Answer:
|
|
sciq-6921
|
multiple_choice
|
Adding carbon to iron makes what type of metal?
|
[
"plastic",
"titanium",
"ions",
"steel"
] |
D
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 2:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 3:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 4:::
There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework.
AP Physics 1 and 2
AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge.
AP Physics 1
AP Physics 1 covers Newtonian mechanics, including:
Unit 1: Kinematics
Unit 2: Dynamics
Unit 3: Circular Motion and Gravitation
Unit 4: Energy
Unit 5: Momentum
Unit 6: Simple Harmonic Motion
Unit 7: Torque and Rotational Motion
Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2.
AP Physics 2
AP Physics 2 covers the following topics:
Unit 1: Fluids
Unit 2: Thermodynamics
Unit 3: Electric Force, Field, and Potential
Unit 4: Electric Circuits
Unit 5: Magnetism and Electromagnetic Induction
Unit 6: Geometric and Physical Optics
Unit 7: Quantum, Atomic, and Nuclear Physics
AP Physics C
From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Adding carbon to iron makes what type of metal?
A. plastic
B. titanium
C. ions
D. steel
Answer:
|
|
sciq-11653
|
multiple_choice
|
What are magnesium carbonate, aluminum hydroxide, and sodium bicarbonate commonly used as?
|
[
"salts",
"antacids",
"antidepressants",
"antibiotics"
] |
B
|
Relavent Documents:
Document 0:::
Minor salts (micronutrients) per litre
Boric acid (H3BO3) 6.2 mg/l
Cobalt chloride (CoCl2 · 6H2O) 0.025 mg/l
Ferrous sulfate (FeSO4 · 7H2O) 27.8 mg/l
Manganese(II) sulfate (MnSO4 · 4H2O) 22.3 mg/l
Potassium iodide (KI) 0.83 mg/l
Sodium molybdate (Na2MoO4 · 2H2O) 0.25 mg/l
Zinc sulfate (ZnSO4·7H2O) 8.6 mg/l
Ethylenediaminetetraacetic acid ferric sodium (FeNaEDTA) 36.70 mg/L
Copper sulfate (CuSO4 · 5H2O) 0.025 mg/l
Vitamins and organic compounds per litre
Myo-Inositol 100 mg/l
Nicotini
Document 1:::
Magnesium hydroxide is the inorganic compound with the chemical formula Mg(OH)2. It occurs in nature as the mineral brucite. It is a white solid with low solubility in water (). Magnesium hydroxide is a common component of antacids, such as milk of magnesia.
Preparation
Treating the solution of different soluble magnesium salts with alkaline water induces the precipitation of the solid hydroxide Mg(OH)2:
Mg2+ + 2 OH− → Mg(OH)2
As Mg2+ is the second most abundant cation present in seawater after Na+, it can be economically extracted directly from seawater by alkalinisation as described here above. On an industrial scale, Mg(OH)2 is produced by treating seawater with lime (Ca(OH)2). A volume of of seawater gives about one tonne of Mg(OH)2. Ca(OH)2 is far more soluble than Mg(OH)2 and drastically increases the pH value of seawater from 8.2 to 12.5. The less soluble Mg(OH)2 precipitates because of the common ion effect due to the OH− added by the dissolution of Ca(OH)2.
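The industrial precipitation step described above can be summarized, in the same plain notation used elsewhere in this document, as an exchange of the less soluble hydroxide for the more soluble one (a summary consistent with the text rather than a quotation from it):
Mg2+ + Ca(OH)2 → Mg(OH)2 + Ca2+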
Uses
Precursor to MgO
Most Mg(OH)2 that is produced industrially, as well as the small amount that is mined, is converted to fused magnesia (MgO). Magnesia is valuable because it is both a poor electrical conductor and an excellent thermal conductor.
Medical
Only a small amount of the magnesium from magnesium hydroxide is usually absorbed by the intestine (unless one is deficient in magnesium). However, magnesium is mainly excreted by the kidneys; so long-term, daily consumption of milk of magnesia by someone suffering from kidney failure could lead in theory to hypermagnesemia. Unabsorbed magnesium is excreted in feces; absorbed magnesium is rapidly excreted in urine.
Applications
Antacid
As an antacid, magnesium hydroxide is dosed at approximately 0.5–1.5 g in adults and works by simple neutralization: the hydroxide ions from the Mg(OH)2 combine with the acidic H+ ions (hydronium ions) of the hydrochloric acid secreted by parietal cells in the stomach to produce water.
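Written out in the same style as the other equations in this document, the overall neutralization of stomach acid is:
Mg(OH)2 + 2 HCl → MgCl2 + 2 H2O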
Laxative
As a laxative,
Document 2:::
Azodicarbonamide, ADCA, ADA, or azo(bis)formamide, is a chemical compound with the molecular formula C2H4N4O2. It is a yellow to orange-red, odorless, crystalline powder. It is sometimes called a 'yoga mat' chemical because of its widespread use in foamed plastics. It was first described by John Bryden in 1959.
Synthesis
It is prepared in two steps via treatment of urea with hydrazine to form biurea, as described in this idealized equation:
2 OC(NH2)2 + N2H4 → H2N−C(O)−NH−NH−C(O)−NH2 + 2 NH3
Oxidation with chlorine or chromic acid yields azodicarbonamide; with chlorine, for example:
H2N−C(O)−NH−NH−C(O)−NH2 + Cl2 → H2N−C(O)−N=N−C(O)−NH2 + 2 HCl
Applications
Blowing agent
The principal use of azodicarbonamide is in the production of foamed plastics as a blowing agent. The thermal decomposition of azodicarbonamide produces nitrogen, carbon monoxide, carbon dioxide, and ammonia gases, which are trapped in the polymer as bubbles to form a foamed article.
Azodicarbonamide is used in plastics, synthetic leather, and other industries and can be pure or modified. Modification affects the reaction temperatures. Pure azodicarbonamide generally reacts around 200 °C. In the plastic, leather, and other industries, modified azodicarbonamide (average decomposition temperature 170 °C) contains additives that accelerate the reaction or react at lower temperatures.
An example of the use of azodicarbonamide as a blowing agent is found in the manufacture of vinyl (PVC) and EVA-PE foams, where it forms bubbles upon breaking down into gas at high temperature. Vinyl foam is springy and does not slip on smooth surfaces. It is useful for carpet underlay and floor mats. Commercial yoga mats made of vinyl foam have been available since the 1980s; the first mats were cut from carpet underlay.
Food additive
As a food additive, azodicarbonamide is used as a flour bleaching agent and a dough conditioner. It reacts with moist flour as an oxidizing agent. The main reaction product is biurea, which is stable during baking. Secondary reaction products include semicarbazide and ethyl carbamate. It is known by the E number E927. Many restauran
Document 3:::
McIlvaine buffer is a buffer solution composed of citric acid and disodium hydrogen phosphate, also known as citrate-phosphate buffer. It was introduced in 1921 by the United States agronomist Theodore Clinton McIlvaine (1875–1959) from West Virginia University, and it can be prepared in pH 2.2 to 8 by mixing two stock solutions.
Applications
McIlvaine buffer can be used to prepare a water-soluble mounting medium when mixed 1:1 with glycerol.
Contents
Preparation of McIlvaine buffer requires disodium hydrogen phosphate and citric acid. One liter of a 0.2 M stock solution of disodium hydrogen phosphate can be prepared by dissolving 28.38 g of disodium phosphate in water and adding a quantity of water sufficient to make one liter. One liter of a 0.1 M stock solution of citric acid can be prepared by dissolving 19.21 g of citric acid in water and adding a quantity of water sufficient to make one liter. From these stock solutions, McIlvaine buffer can be prepared in accordance with the following table:
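The stock-solution arithmetic above is just molarity × molar mass × volume; the short sketch below reproduces the quoted masses, assuming the anhydrous molar masses of Na2HPO4 (about 141.96 g/mol) and citric acid (about 192.12 g/mol). The function name and the molar-mass values are illustrative assumptions, not figures from the source.

def grams_needed(molarity_mol_per_l, molar_mass_g_per_mol, volume_l=1.0):
    # Mass of solute required for the given molarity and final volume.
    return molarity_mol_per_l * molar_mass_g_per_mol * volume_l

print(round(grams_needed(0.2, 141.96), 2))  # ~28.39 g Na2HPO4 per litre (text quotes 28.38 g)
print(round(grams_needed(0.1, 192.12), 2))  # 19.21 g citric acid per litre, matching the text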
Document 4:::
Magnesium oil (also referred to as transdermal magnesium, magnesium chloride hexahydrate) is a compound of magnesium chloride dissolved in six molecules of water, with magnesium as the alkaline earth metal and chlorine as the nonmetal. In reality, it is not a "true" oil, as it is not composed of one or more hydrocarbons. Magnesium oil is actually magnesium chloride hexahydrate, MgCl2·6H2O. Magnesium oil can be applied to the skin as an alternative to taking a magnesium supplement by mouth, and it is claimed to have health benefits, such as for the treatment of magnesium deficiency, to relieve muscle pain and ache (especially headaches), and to enhance relaxation. However, such use has been described as "scientifically unsupported" due to lack of any convincing data that magnesium is absorbed in significant amounts through the skin. It can also be found as a spray for the mentioned purposes. Magnesium is used in over 600 cellular reactions within the human body, including the immune system. Magnesium oil, with a chemical formula of MgCl2·6H2O, has a formula mass of 203.30 g/mol.
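As a quick consistency check on that figure, using standard atomic masses (which are not quoted in the text): 24.31 (Mg) + 2 × 35.45 (Cl) + 6 × 18.02 (H2O) ≈ 203.3 g/mol, in agreement with the stated formula mass.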
Synthesis
When magnesium (Mg) reacts with one molecule of chlorine (Cl2), the magnesium chloride salt (MgCl2) is formed. The electron-deficient magnesium has a potential for further reactions to become stable. Dissolving this chemical, magnesium chloride (MgCl2), in six molecules of water (H2O) results in the successful synthesis of "magnesium oil." The formation of magnesium oil is depicted below:
Mg + Cl2 → MgCl2
MgCl2 + 6 H2O → MgCl2·6H2O
Process of isolation
In a synthesis process known as the Dow process, magnesium chloride (MgCl2) is most commonly extracted from sea water by precipitating the magnesium as magnesium hydroxide (Mg(OH)2), followed by its conversion to the chloride with the addition of hydrochloric acid (HCl(aq)). The solid–solid separation of MgCl2 from NaCl is accomplished using organic solvents such as tetrachloromethane or iodomethane, or a combination of these two organic solvents.
Past applications
Transdermal drug absorption has been part of human history for centurie
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are magnesium carbonate, aluminum hydroxide, and sodium bicarbonate commonly used as?
A. salts
B. antacids
C. antidepressants
D. antibiotics
Answer:
|
|
sciq-7186
|
multiple_choice
|
What type of cell division produces gametes?
|
[
"meiosis",
"apoptosis",
"mutations",
"mitosis"
] |
A
|
Relavent Documents:
Document 0:::
Gametogenesis is a biological process by which diploid or haploid precursor cells undergo cell division and differentiation to form mature haploid gametes. Depending on the biological life cycle of the organism, gametogenesis occurs by meiotic division of diploid gametocytes into various gametes, or by mitosis. For example, plants produce gametes through mitosis in gametophytes. The gametophytes grow from haploid spores after sporic meiosis. The existence of a multicellular, haploid phase in the life cycle between meiosis and gametogenesis is also referred to as alternation of generations.
Restated, gametogenesis is the biological process in which haploid or diploid precursor cells divide to create mature haploid gametes. Depending on an organism's biological life cycle, it can take place either through mitosis or through meiotic division of diploid gametocytes into gametes. For instance, gametophytes in plants undergo mitosis to produce gametes. Gametogenesis takes different forms in males and females.
In animals
Animals produce gametes directly through meiosis from diploid mother cells in organs called gonads (testis in males and ovaries in females). In mammalian germ cell development, sexually dimorphic gametes differentiate from primordial germ cells, which arise from pluripotent cells during initial mammalian development. Males and females of a species that reproduce sexually have different forms of gametogenesis:
spermatogenesis (male): Immature germ cells are produced in a man's testes. To mature into sperm, males' immature germ cells, or spermatogonia, go through spermatogenesis during adolescence. Spermatogonia are diploid cells that become larger as they divide through mitosis and become primary spermatocytes. These diploid cells undergo the first meiotic division to create secondary spermatocytes, which undergo a second meiotic division to produce immature sperm, or spermatids. These spermatids undergo spermiogenesis in order to develop into sperm. LH, FSH, GnRH
Document 1:::
Microgametogenesis is the process in plant reproduction where a microgametophyte develops in a pollen grain to the three-celled stage of its development. In flowering plants it occurs with a microspore mother cell inside the anther of the plant.
When the microgametophyte is first formed inside the pollen grain four sets of fertile cells called sporogenous cells are apparent. These cells are surrounded by a wall of sterile cells called the tapetum, which supplies food to the cell and eventually becomes the cell wall for the pollen grain. These sets of sporogenous cells eventually develop into diploid microspore mother cells. These microspore mother cells, also called microsporocytes, then undergo meiosis and become four microspore haploid cells. These new microspore cells then undergo mitosis and form a tube cell and a generative cell. The generative cell then undergoes mitosis one more time to form two male gametes, also called sperm.
See also
Gametogenesis
Document 2:::
In cellular biology, a somatic cell (), or vegetal cell, is any biological cell forming the body of a multicellular organism other than a gamete, germ cell, gametocyte or undifferentiated stem cell. Somatic cells compose the body of an organism and divide through the process of binary fission and mitotic division.
In contrast, gametes are cells that fuse during sexual reproduction and germ cells are cells that give rise to gametes. Stem cells also can divide through mitosis, but are different from somatic in that they differentiate into diverse specialized cell types.
In mammals, somatic cells make up all the internal organs, skin, bones, blood and connective tissue, while mammalian germ cells give rise to spermatozoa and ova which fuse during fertilization to produce a cell called a zygote, which divides and differentiates into the cells of an embryo. There are approximately 220 types of somatic cell in the human body.
Theoretically, these cells are not germ cells (the source of gametes); they transmit their mutations to their cellular descendants (if they have any), but not to the organism's descendants. However, in sponges, non-differentiated somatic cells form the germ line and, in Cnidaria, differentiated somatic cells are the source of the germline.
Evolution
As multicellularity is theorized to have evolved many times, so too have sterile somatic cells. The evolution of an immortal germline producing specialized somatic cells involved the emergence of mortality, and can be viewed in its simplest version in volvocine algae. Those species with a separation between sterile somatic cells and a germline are called Weismannists. Weismannist development is relatively rare (e.g., vertebrates, arthropods, Volvox), as many species have the capacity for somatic embryogenesis (e.g., land plants, most algae, and numerous invertebrates).
Genetics and chrom
Document 3:::
Germ-Soma Differentiation is the process by which organisms develop distinct germline and somatic cells. The development of cell differentiation has been one of the critical aspects of the evolution of multicellularity and sexual reproduction in organisms. Multicellularity has evolved upwards of 25 times, and due to this there is great possibility that multiple factors have shaped the differentiation of cells. There are three general types of cells: germ cells, somatic cells, and stem cells. Germ cells lead to the production of gametes, while somatic cells perform all other functions within the body. Within the broad category of somatic cells, there is further specialization as cells become specified to certain tissues and functions. In addition, stem cells are undifferentiated cells which can develop into a specialized cell and are the earliest type of cell in a cell lineage. Due to the differentiation in function, somatic cells are found only in multicellular organisms, as in unicellular ones the purposes of somatic and germ cells are consolidated in one cell.
All organisms with germ-soma differentiation are eukaryotic, and represent an added level of specialization to multicellular organisms. Pure germ-soma differentiation has developed in a select number of eukaryotes (called Weismannists); included in this category are vertebrates and arthropods. However, land plants, green algae, red algae, brown algae, and fungi have partial differentiation. While a significant portion of organisms with germ-soma differentiation are asexual, this distinction has been imperative in the development of sexual reproduction; the specialization of certain cells into germ cells is fundamental for meiosis and recombination.
Weismann barrier
The strict division between somatic and germ cells is called the Weismann barrier, in which genetic information passed onto offspring is found only in germ cells. This occurs only in select organisms, however some without a Weismann barrier do pre
Document 4:::
In biology and genetics, the germline is the population of a multicellular organism's cells that pass on their genetic material to the progeny (offspring). In other words, they are the cells that form the egg, sperm and the fertilised egg. They are usually differentiated to perform this function and segregated in a specific place away from other bodily cells.
As a rule, this passing-on happens via a process of sexual reproduction; typically it is a process that includes systematic changes to the genetic material, changes that arise during recombination, meiosis and fertilization for example. However, there are many exceptions across multicellular organisms, including processes and concepts such as various forms of apomixis, autogamy, automixis, cloning or parthenogenesis. The cells of the germline are called germ cells. For example, gametes such as a sperm and an egg are germ cells. So are the cells that divide to produce gametes, called gametocytes, the cells that produce those, called gametogonia, and all the way back to the zygote, the cell from which an individual develops.
In sexually reproducing organisms, cells that are not in the germline are called somatic cells. According to this view, mutations, recombinations and other genetic changes in the germline may be passed to offspring, but a change in a somatic cell will not be. This need not apply to somatically reproducing organisms, such as some Porifera and many plants. For example, many varieties of citrus, plants in the Rosaceae and some in the Asteraceae, such as Taraxacum, produce seeds apomictically when somatic diploid cells displace the ovule or early embryo.
In an earlier stage of genetic thinking, there was a clear distinction between germline and somatic cells. For example, August Weismann proposed and pointed out, a germline cell is immortal in the sense that it is part of a lineage that has reproduced indefinitely since the beginning of life and, barring accident, could continue doing so indef
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of cell division produces gametes?
A. meiosis
B. apoptosis
C. mutations
D. mitosis
Answer:
|
|
sciq-2735
|
multiple_choice
|
Like the marketplace, the metabolic economy is regulated by what basic principle?
|
[
"supply and demand",
"price and demand",
"industrial and demand",
"jobs and demand"
] |
A
|
Relavent Documents:
Document 0:::
Bioeconomics is closely related to the early development of theories in fisheries economics, initially in the mid-1950s by Canadian economists Scott Gordon (in 1954) and Anthony Scott (1955). Their ideas used recent achievements in biological fisheries modelling, primarily the works by Schaefer in 1954 and 1957 on establishing a formal relationship between fishing activities and biological growth through mathematical modelling confirmed by empirical studies; the field also relates to ecology, the environment, and resource protection.
These ideas developed out of the multidisciplinary fisheries science environment in Canada at the time. Fisheries science and modelling developed rapidly during a productive and innovative period, particularly among Canadian fisheries researchers of various disciplines. Population modelling and fishing mortality were introduced to economists, and new interdisciplinary modelling tools became available for the economists, which made it possible to evaluate biological and economic impacts of different fishing activities and fisheries management decisions.
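To make the kind of model being described concrete, here is a hedged sketch of the Gordon–Schaefer surplus-production logic that this early fisheries bioeconomics built on; the parameter names (r, K, q, price, cost) and the illustrative values are my own assumptions, not figures taken from the text.

def equilibrium_yield(effort, r=0.5, K=1000.0, q=0.01):
    # Sustainable harvest at a given fishing effort: Y(E) = qEK(1 - qE/r).
    return q * effort * K * (1.0 - q * effort / r)

def resource_rent(effort, price=2.0, cost=5.0):
    # Economic rent = revenue from the sustainable yield minus the cost of effort.
    return price * equilibrium_yield(effort) - cost * effort

# Open access expands effort until rent is dissipated; maximum economic yield (MEY)
# is the effort level that maximizes rent.
best_effort = max(range(0, 101), key=resource_rent)
print("effort at MEY ~", best_effort, "rent ~", round(resource_rent(best_effort), 1))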
See also
EconMult
Economics of biodiversity
Ecological economics
Georgescu-Roegen's bioeconomics
Green economics
List of harvested aquatic animals by weight
Notes
Document 1:::
The Fei–Ranis model of economic growth is a dualism model in developmental economics or welfare economics that has been developed by John C. H. Fei and Gustav Ranis and can be understood as an extension of the Lewis model. It is also known as the Surplus Labor model. It recognizes the presence of a dual economy comprising both the modern and the primitive sector and takes the economic situation of unemployment and underemployment of resources into account, unlike many other growth models that consider underdeveloped countries to be homogenous in nature. According to this theory, the primitive sector consists of the existing agricultural sector in the economy, and the modern sector is the rapidly emerging but small industrial sector. Both the sectors co-exist in the economy, wherein lies the crux of the development problem. Development can be brought about only by a complete shift in the focal point of progress from the agricultural to the industrial economy, such that there is augmentation of industrial output. This is done by transfer of labor from the agricultural sector to the industrial one, showing that underdeveloped countries do not suffer from constraints of labor supply. At the same time, growth in the agricultural sector must not be negligible and its output should be sufficient to support the whole economy with food and raw materials. Like in the Harrod–Domar model, saving and investment become the driving forces when it comes to economic development of underdeveloped countries.
Basics of the model
One of the biggest drawbacks of the Lewis model was the undermining of the role of agriculture in boosting the growth of the industrial sector. In addition to that, he did not acknowledge that the increase in productivity of labor should take place prior to the labor shift between the two sectors. However, these two ideas were taken into account in the Fei–Ranis dual economy model of three growth stages. They further argue that the model lacks in the proper
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
Biobased economy, bioeconomy or biotechonomy is economic activity involving the use of biotechnology and biomass in the production of goods, services, or energy. The terms are widely used by regional development agencies, national and international organizations, and biotechnology companies. They are closely linked to the evolution of the biotechnology industry and the capacity to study, understand, and manipulate genetic material that has been possible due to scientific research and technological development. This includes the application of scientific and technological developments to agriculture, health, chemical, and energy industries. The terms bioeconomy (BE) and bio-based economy (BBE) are sometimes used interchangeably. However, it is worth distinguishing them: the biobased economy takes into consideration the production of non-food goods, whilst bioeconomy covers both bio-based economy and the production and use of food and feed. More than 60 countries and regions have bioeconomy or bioscience-related strategies, of which 20 have published dedicated bioeconomy strategies in Africa, Asia, Europe, Oceania, and the Americas.
Definitions
Bioeconomy has a large variety of definitions. The bioeconomy comprises those parts of the economy that use renewable biological resources from land and sea – such as crops, forests, fish, animals and micro-organisms – to produce food, health, materials, products, textiles and energy. The definitions and usage do, however, vary between different areas of the world.
An important aspect of the bioeconomy is understanding mechanisms and processes at the genetic, molecular, and genomic levels, and applying this understanding to creating or improving industrial processes, developing new products and services, and producing new energy. Bioeconomy aims to reduce our dependence on fossil natural resources, to prevent biodiversity loss and to create new economic growth and jobs that are in line with the principles of sustainable develo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Like the marketplace, the metabolic economy is regulated by what basic principle?
A. supply and demand
B. price and demand
C. industrial and demand
D. jobs and demand
Answer:
|
|
sciq-10292
|
multiple_choice
|
Peroxisomes perform a couple of different functions, including lipid metabolism and chemical detoxification. In contrast to the digestive enzymes found in lysosomes, the enzymes within peroxisomes serve to transfer hydrogen atoms from various molecules to oxygen, producing what?
|
[
"hydrogen peroxide",
"calcium",
"hydrogen",
"water"
] |
A
|
Relavent Documents:
Document 0:::
Eosinophil peroxidase is an enzyme found within the eosinophil granulocytes, innate immune cells of humans and mammals. This oxidoreductase protein is encoded by the gene EPX, expressed within these myeloid cells. EPO shares many similarities with its orthologous peroxidases, myeloperoxidase (MPO), lactoperoxidase (LPO), and thyroid peroxidase (TPO). The protein is concentrated in secretory granules within eosinophils. Eosinophil peroxidase is a heme peroxidase, its activities including the oxidation of halide ions to bacteriocidal reactive oxygen species, the cationic disruption of bacterial cell walls, and the post-translational modification of protein amino acid residues.
The major function of eosinophil peroxidase is to catalyze the formation of hypohalous acids from hydrogen peroxide and halide ions in solution. For example:
H2O2 + Br− → HOBr + H2O
Hypohalous acids formed from halides or pseudohalides are potent oxidizing agents. However, the role of eosinophil peroxidase seems to be to generate hypohalous acids largely from bromide and iodide rather than chloride, since the former are favored greatly over the latter. The enzyme myeloperoxidase is responsible for formation of most of the hypochlorous acid in the body, and eosinophil peroxidase is responsible for reactions involving bromide and iodide.
Gene
The open reading frame of human eosinophil peroxidase was found to have a length of 2,106 base pairs (bp). This comprises a 381-bp prosequence, a 333-bp sequence encoding the light chain and a 1,392-bp sequence encoding the heavy chain. In addition to these there is a 452-bp untranslated region at the 3' end containing the AATAAA polyadenylation signal.
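As a simple consistency check on those segment lengths: 381 bp (prosequence) + 333 bp (light chain) + 1,392 bp (heavy chain) = 2,106 bp, matching the stated length of the open reading frame.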
The promoter sequence for human eosinophil peroxidase is an unusually strong promoter. All the major regulatory elements are located within 100 bp upstream of the gene.
The profile of EPX expression has been characterized and is available online via BioGPS. This dataset indicates that both in humans
Document 1:::
Animal heme-dependent peroxidases is a family of peroxidases. Peroxidases are found in bacteria, fungi, plants and animals. On the basis of sequence similarity, a number of animal heme peroxidases can be categorized as members of a superfamily: myeloperoxidase (MPO); eosinophil peroxidase (EPO); lactoperoxidase (LPO); thyroid peroxidase (TPO); prostaglandin H synthase (PGHS); and peroxidasin.
Function
Myeloperoxidase (MPO) plays a major role in the oxygen-dependent microbicidal system of neutrophils. EPO from eosinophilic granulocytes participates in immunological reactions, and potentiates tumor necrosis factor (TNF) production and hydrogen peroxide release by human monocyte-derived macrophages. MPO (and possibly EPO) primarily use Cl−ions and H2O2 to form hypochlorous acid (HOCl), which can effectively kill bacteria or parasites. In secreted fluids, LPO catalyses the oxidation of thiocyanate ions (SCN−) by H2O2, producing the weak oxidizing agent hypothiocyanite (OSCN−), which has bacteriostatic activity. TPO uses I− ions and H2O2 to generate iodine, and plays a central role in the biosynthesis of thyroid hormones T3 and T4. Myeloperoxidase (), for example, resides in the human nucleus and lysosome and acts as a defense response to oxidative stress, preventing apoptosis of the cell.
Document 2:::
Types
As indicated in the following Biochemistry section, there are 4 types of chemically distinct eoxins that are made serially from the 15-lipoxygenase metabolite of arachidonic
Document 3:::
Classification
Oxidoreductases are classified as EC 1 in the EC number classification of enzymes. Oxidoreductases can be further classified into 21 subclasses:
EC 1.1 includes oxidoreductases that act on the CH-OH group of donors (alcohol oxidoreductases such as methanol dehydrogenase)
EC 1.2 includes oxidoreductases that act on the aldehyde or oxo group of donors
EC 1.3 includes oxidoreductases that act on the CH-CH group of donors (CH-CH oxidore
Document 4:::
The metabolome refers to the complete set of small-molecule chemicals found within a biological sample. The biological sample can be a cell, a cellular organelle, an organ, a tissue, a tissue extract, a biofluid or an entire organism. The small molecule chemicals found in a given metabolome may include both endogenous metabolites that are naturally produced by an organism (such as amino acids, organic acids, nucleic acids, fatty acids, amines, sugars, vitamins, co-factors, pigments, antibiotics, etc.) as well as exogenous chemicals (such as drugs, environmental contaminants, food additives, toxins and other xenobiotics) that are not naturally produced by an organism.
In other words, there is both an endogenous metabolome and an exogenous metabolome. The endogenous metabolome can be further subdivided to include a "primary" and a "secondary" metabolome (particularly when referring to plant or microbial metabolomes). A primary metabolite is directly involved in the normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has important ecological function. Secondary metabolites may include pigments, antibiotics or waste products derived from partially metabolized xenobiotics. The study of the metabolome is called metabolomics.
Origins
The word metabolome appears to be a blending of the words "metabolite" and "chromosome". It was constructed to imply that metabolites are indirectly encoded by genes or act on genes and gene products. The term "metabolome" was first used in 1998 and was likely coined to match with existing biological terms referring to the complete set of genes (the genome), the complete set of proteins (the proteome) and the complete set of transcripts (the transcriptome). The first book on metabolomics was published in 2003. The first journal dedicated to metabolomics (titled simply "Metabolomics") was launched in 2005 and is currently edited by Prof. Roy Goodacre. Some of the m
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Peroxisomes perform a couple of different functions, including lipid metabolism and chemical detoxification. In contrast to the digestive enzymes found in lysosomes, the enzymes within peroxisomes serve to transfer hydrogen atoms from various molecules to oxygen, producing what?
A. hydrogen peroxide
B. calcium
C. hydrogen
D. water
Answer:
|