| Column | Type | Details |
|---|---|---|
| id | string | length 6 to 15 |
| question_type | string | 1 distinct value |
| question | string | length 15 to 683 |
| choices | list | length 4 |
| answer | string | 5 distinct values |
| explanation | string | 481 distinct values |
| prompt | string | length 1.75k to 10.9k |
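The rows below follow this schema. Here is a minimal sketch of loading and re-rendering such rows in Python, assuming the split has been exported to a local JSON Lines file; the filename `sciq_mc.jsonl` and the export format are assumptions for illustration, not part of the dataset itself:

```python
import json

def load_rows(path="sciq_mc.jsonl"):
    """Yield one dict per dataset row (hypothetical JSON Lines export)."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            yield json.loads(line)

def format_question(row):
    """Render a row the way each `prompt` field ends: question, lettered choices, 'Answer:'."""
    letters = "ABCD"
    lines = [row["question"]]
    lines += [f"{letters[i]}. {choice}" for i, choice in enumerate(row["choices"])]
    lines.append("Answer:")
    return "\n".join(lines)

if __name__ == "__main__":
    first = next(load_rows())
    print(first["id"], "->", first["answer"])
    print(format_question(first))
```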
id: sciq-11278
question_type: multiple_choice
question: What controls the shape of a developing zygote early in its development?
choices: ["chromosomes", "DNA", "gap genes", "storage genes"]
answer: C
prompt:
Relevant Documents:
Document 0:::
Cytoplasmic determinants are special molecules that play a very important role during oocyte maturation in the female's ovary. During this period, certain regions of the cytoplasm accumulate these determinants, so their distribution is highly heterogeneous. They play a major role in the development of the embryo's organs: each cell type is determined by a particular determinant or group of determinants, so the correct positioning of the cytoplasmic determinants ensures that all the organs of the future embryo form in the right place and function properly. The action of the determinants on the blastomeres is among the most important. During cleavage (segmentation), cytoplasmic determinants are distributed among the blastomeres, at different times depending on the species and on the type of determinant. Until then, the daughter cells resulting from the first divisions are totipotent: each can, independently, give rise to a complete individual. That is no longer possible once the cytoplasmic determinants have been distributed among the differentiated blastomeres.
Mosaic development
In mosaic development, the future embryo contains distinct cytoplasmic determinants that are distributed into distinct cells. Because each cell contains specific cytoplasmic determinants from the first divisions onward, the regions of the organism differentiate very quickly: each cell divides to give rise to all the other cells of its type, and the same process occurs for every cell type in the organism.
As a result, in mosaic development, cell totipotency disappears very quickly during cleavage. Each newly created cell determines a new region of the future organism independently of the others: development does not depend on interactions between cells. Mosaic development is best known in certain animals such as the nematode C. elegans and the ascidians (marine animals).
Regulative development
Other animals show regulative development; their cells remain totipotent for
Document 1:::
In the field of developmental biology, regional differentiation is the process by which different areas are identified in the development of the early embryo. The process by which the cells become specified differs between organisms.
Cell fate determination
In terms of developmental commitment, a cell can either be specified or it can be determined. Specification is the first stage in differentiation. A cell that is specified can have its commitment reversed while the determined state is irreversible. There are two main types of specification: autonomous and conditional. A cell specified autonomously will develop into a specific fate based upon cytoplasmic determinants with no regard to the environment the cell is in. A cell specified conditionally will develop into a specific fate based upon other surrounding cells or morphogen gradients. Another type of specification is syncytial specification, characteristic of most insect classes.
Specification in sea urchins uses both autonomous and conditional mechanisms to determine the anterior/posterior axis. The anterior/posterior axis lies along the animal/vegetal axis set up during cleavage. The micromeres induce the nearby tissue to become endoderm, while the animal cells are specified to become ectoderm. The animal cells are not determined, because the micromeres can induce the animal cells to also take on mesodermal and endodermal fates. It was observed that β-catenin was present in the nuclei at the vegetal pole of the blastula. Through a series of experiments, one study confirmed the role of β-catenin in the cell-autonomous specification of vegetal cell fates and in the micromeres' inducing ability. Treatment with lithium chloride sufficient to vegetalize the embryo resulted in increased nuclear-localized β-catenin. Reduction of β-catenin expression in the nucleus correlated with loss of vegetal cell fates. Transplants of micromeres lacking nuclear accumulation of β-catenin were unable to induce a second axis.
Document 2:::
Dysgenesis is abnormal organ development during embryonic growth and development. As opposed to agenesis, which refers to the complete failure of an organ to develop, dysgenesis usually implies disordered development or malformation, and in some cases represents the milder end of a spectrum of abnormalities. Dysgenesis arises during development in utero, beginning soon after conception.
Classification
One of the first organs to be affected is the brain; this is known as cerebral dysgenesis. Dysplasia is a form of dysgenesis in adults that alters the size and shape of cells, leading to abnormal development. One of the most common forms of dysgenesis affects the gonads.
Examples:
Gonadal dysgenesis
Adrenal dysgenesis
Thyroid dysgenesis
Anterior segment dysgenesis
Document 3:::
Development of the human body is the process of growth to maturity. The process begins with fertilization, where an egg released from the ovary of a female is penetrated by a sperm cell from a male. The resulting zygote develops through mitosis and cell differentiation, and the resulting embryo then implants in the uterus, where the embryo continues development through a fetal stage until birth. Further growth and development continues after birth, and includes both physical and psychological development that is influenced by genetic, hormonal, environmental and other factors. This continues throughout life: through childhood and adolescence into adulthood.
Before birth
Development before birth, or prenatal development, is the process in which a zygote, and later an embryo, and then a fetus develops during gestation. Prenatal development starts with fertilization and the formation of the zygote, the first stage in embryonic development which continues in fetal development until birth.
Fertilization
Fertilization occurs when the sperm successfully enters the ovum's membrane. The chromosomes of the sperm are passed into the egg to form a unique genome. The egg becomes a zygote and the germinal stage of embryonic development begins. The germinal stage refers to the time from fertilization, through the development of the early embryo, up until implantation. The germinal stage is over at about 10 days of gestation.
The zygote contains a full complement of genetic material with all the biological characteristics of a single human being, and develops into the embryo. Embryonic development has four stages: the morula stage, the blastula stage, the gastrula stage, and the neurula stage. Prior to implantation, the embryo remains in a protein shell, the zona pellucida, and undergoes a series of rapid mitotic cell divisions called cleavage. A week after fertilization the embryo still has not grown in size, but hatches from the zona pellucida and adheres to the lining o
Document 4:::
In developmental biology, midblastula or midblastula transition (MBT) occurs during the blastula stage of embryonic development in non-mammals. During this stage, the embryo is referred to as a blastula. The series of changes to the blastula that characterize the midblastula transition include activation of zygotic gene transcription, slowing of the cell cycle, increased asynchrony in cell division, and an increase in cell motility.
Blastula Before MBT
Before the embryo undergoes the midblastula transition it is in a state of fast and constant replication of cells. The cell cycle is very short. The cells in the zygote are also replicating synchronously, always undergoing cell division at the same time. The zygote is not producing its own mRNA but rather it is using mRNAs that were produced in the mother and loaded into the oocyte in order to produce proteins necessary for zygotic growth. The zygotic DNA (genetic material) is not being used because it is repressed through a variety of mechanisms such as methylation. This repressed DNA is sometimes referred to as heterochromatin and is tightly packed together inside the cell because it is not being used for transcription.
Characteristics of the MBT
Activation of Zygotic Gene Transcription
At this stage, the zygote starts producing its own mRNAs that are made from its own DNA, and no longer uses the maternal mRNA. This can also be called the maternal to zygotic transition. The maternal mRNAs are then degraded. Since the cells are now transcribing their own DNA, this stage is where expression of paternal genes is first observed.
Cell Cycle Changes
When the zygote begins to produce its own mRNA, the cell cycle begins to slow down and the G1 and G2 phases are added to the cell cycle. The addition of these phases allows the cell to have more time to proofread the new genetic material it is making to
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What controls the shape of a developing zygote early in its development?
A. chromosomes
B. DNA
C. gap genes
D. storage genes
Answer:

id: sciq-8950
question_type: multiple_choice
question: Heat expansion is a result of the increase of what type of energy, exhibited by molecules bumping together?
choices: ["radioactivity", "harmonic energy", "light energy", "kinetic energy"]
answer: D
prompt:
Relevant Documents:
Document 0:::
Heat transfer is a discipline of thermal engineering that concerns the generation, use, conversion, and exchange of thermal energy (heat) between physical systems. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Engineers also consider the transfer of mass of differing chemical species (mass transfer in the form of advection), either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system.
Heat conduction, also called diffusion, is the direct microscopic exchanges of kinetic energy of particles (such as molecules) or quasiparticles (such as lattice waves) through the boundary between two systems. When an object is at a different temperature from another body or its surroundings, heat flows so that the body and the surroundings reach the same temperature, at which point they are in thermal equilibrium. Such spontaneous heat transfer always occurs from a region of high temperature to another region of lower temperature, as described in the second law of thermodynamics.
Heat convection occurs when the bulk flow of a fluid (gas or liquid) carries its heat through the fluid. All convective processes also move heat partly by diffusion, as well. The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". The former process is often called "forced convection." In this case, the fluid is forced to flow by use of a pump, fan, or other mechanical means.
Thermal radiation occurs through a vacuum or any transparent medium (solid or fluid or gas). It is the transfer of energy by means of photons or electromagnetic waves governed by the same laws.
Overview
Heat
Document 1:::
Thermal expansion is the tendency of matter to change its shape, area, volume, and density in response to a change in temperature, usually not including phase transitions.
Temperature is a monotonic function of the average molecular kinetic energy of a substance. When a substance is heated, molecules begin to vibrate and move more, usually creating more distance between themselves. Substances which contract with increasing temperature are unusual, and only occur within limited temperature ranges (see examples below). The relative expansion (also called strain) divided by the change in temperature is called the material's coefficient of linear thermal expansion and generally varies with temperature. As energy in particles increases, they start moving faster and faster, weakening the intermolecular forces between them and therefore expanding the substance.
Overview
Predicting expansion
If an equation of state is available, it can be used to predict the values of the thermal expansion at all the required temperatures and pressures, along with many other state functions.
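As a worked illustration of the linear case, the coefficient of linear thermal expansion α defined above gives ΔL = α·L₀·ΔT when α is treated as constant over the temperature range (an approximation, since the text notes α generally varies with temperature). A minimal sketch; the aluminium value α ≈ 2.3 × 10⁻⁵ K⁻¹ is a typical room-temperature figure used here only as an example:

```python
def linear_expansion(length_m, alpha_per_k, delta_t_k):
    """Return the change in length dL = alpha * L0 * dT.

    Assumes alpha is constant over the temperature range, which is
    only an approximation (alpha varies with temperature).
    """
    return alpha_per_k * length_m * delta_t_k

# Example: a 10 m aluminium beam (alpha ~ 2.3e-5 / K, typical value) heated by 40 K
dL = linear_expansion(10.0, 2.3e-5, 40.0)
print(f"expansion: {dL * 1000:.1f} mm")  # ~9.2 mm
```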
Contraction effects (negative thermal expansion)
A number of materials contract on heating within certain temperature ranges; this is usually called negative thermal expansion, rather than "thermal contraction". For example, the coefficient of thermal expansion of water drops to zero as it is cooled to 3.983 °C and then becomes negative below this temperature; this means that water has a maximum density at this temperature, and this leads to bodies of water maintaining this temperature at their lower depths during extended periods of sub-zero weather.
Other materials are also known to exhibit negative thermal expansion. Fairly pure silicon has a negative coefficient of thermal expansion for temperatures between about 18 and 120 kelvin. ALLVAR Alloy 30, a titanium alloy, exhibits anisotropic negative thermal expansion across a wide range of temperatures.
Factors affecting thermal expansion
Unlike g
Document 2:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include :
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering : Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 3:::
Heat transfer physics describes the kinetics of energy storage, transport, and energy transformation by principal energy carriers: phonons (lattice vibration waves), electrons, fluid particles, and photons. Heat is energy stored in temperature-dependent motion of particles including electrons, atomic nuclei, individual atoms, and molecules. Heat is transferred to and from matter by the principal energy carriers. The state of energy stored within matter, or transported by the carriers, is described by a combination of classical and quantum statistical mechanics. The energy is also transformed (converted) among the various carriers.
The heat transfer processes (or kinetics) are governed by the rates at which various related physical phenomena occur, such as (for example) the rate of particle collisions in classical mechanics. These various states and kinetics determine the heat transfer, i.e., the net rate of energy storage or transport. Governing these processes from the atomic level (atom or molecule length scale) to the macroscale are the laws of thermodynamics, including conservation of energy.
Introduction
Heat is thermal energy associated with temperature-dependent motion of particles. The macroscopic energy equation for an infinitesimal volume used in heat transfer analysis is

$$\nabla \cdot \mathbf{q} = -\rho c_p \frac{\partial T}{\partial t} + \dot{s}_{ij},$$

where $\mathbf{q}$ is the heat flux vector, $-\rho c_p\,\partial T/\partial t$ is the temporal change of internal energy ($\rho$ is density, $c_p$ is specific heat capacity at constant pressure, $T$ is temperature and $t$ is time), and $\dot{s}_{ij}$ is the energy conversion to and from thermal energy ($i$ and $j$ denote the principal energy carriers). The terms thus represent energy transport, storage, and transformation. The heat flux vector $\mathbf{q}$ is composed of three macroscopic fundamental modes, which are conduction ($\mathbf{q}_k = -k\nabla T$, $k$: thermal conductivity), convection ($\mathbf{q}_u = \rho c_p \mathbf{u} T$, $\mathbf{u}$: velocity), and radiation ($\mathbf{q}_r = \int_0^\infty \int_{4\pi} \hat{\mathbf{s}}\, I_{\mathrm{ph},\omega} \sin\theta\, d\theta\, d\omega$, $\omega$: angular frequency, $\theta$: polar angle, $I_{\mathrm{ph},\omega}$: spectral, directional radiation intensity, $\hat{\mathbf{s}}$: unit vector), i.e., $\mathbf{q} = \mathbf{q}_k + \mathbf{q}_u + \mathbf{q}_r$.
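Combining the conduction mode $\mathbf{q}_k = -k\nabla T$ with the storage term, and dropping convection, radiation, and conversion, yields the one-dimensional heat (diffusion) equation ∂T/∂t = α ∂²T/∂x² with α = k/(ρc_p). A minimal explicit finite-difference sketch; the grid size, diffusivity, and boundary temperatures are illustrative assumptions:

```python
import numpy as np

# Explicit finite differences for 1-D conduction: dT/dt = alpha * d2T/dx2,
# where alpha = k / (rho * c_p) is the thermal diffusivity.
alpha = 1.0e-4              # m^2/s, an illustrative value
L, n = 1.0, 51              # 1 m rod discretized into 51 nodes
dx = L / (n - 1)
dt = 0.4 * dx * dx / alpha  # below the stability limit dt <= dx^2 / (2 alpha)

T = np.zeros(n)
T[0], T[-1] = 100.0, 0.0    # fixed end temperatures (assumed boundary conditions)

for _ in range(5000):
    # Conduction (transport) balanced against storage for the interior nodes
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print(f"mid-rod temperature: {T[n // 2]:.1f} C")  # relaxes toward the steady 50 C
```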
Once states and kinetics of the energy conversion and thermophysical properties are known, the fate of heat
Document 4:::
Dielectric heating, also known as electronic heating, radio frequency heating, and high-frequency heating, is the process in which a radio frequency (RF) alternating electric field, or radio wave or microwave electromagnetic radiation heats a dielectric material. At higher frequencies, this heating is caused by molecular dipole rotation within the dielectric.
Mechanism
Molecular rotation occurs in materials containing polar molecules having an electrical dipole moment, with the consequence that they will align themselves in an electromagnetic field. If the field is oscillating, as it is in an electromagnetic wave or in a rapidly oscillating electric field, these molecules rotate continuously by aligning with it. This is called dipole rotation, or dipolar polarisation. As the field alternates, the molecules reverse direction. Rotating molecules push, pull, and collide with other molecules (through electrical forces), distributing the energy to adjacent molecules and atoms in the material. The process of energy transfer from the source to the sample is a form of radiative heating.
Temperature is related to the average kinetic energy (energy of motion) of the atoms or molecules in a material, so agitating the molecules in this way increases the temperature of the material. Thus, dipole rotation is a mechanism by which energy in the form of electromagnetic radiation can raise the temperature of an object. There are also many other mechanisms by which this conversion occurs.
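Because temperature tracks the average kinetic energy, the bulk temperature rise produced by absorbed electromagnetic power follows directly from the specific heat: ΔT = P·t/(m·c_p). A minimal sketch; the 800 W absorbed power and the no-loss assumption are illustrative, since real ovens deposit only part of their rated power in the load:

```python
def temperature_rise(power_w, time_s, mass_kg, c_p=4186.0):
    """Bulk temperature rise dT = P*t / (m*c_p), assuming all absorbed
    power ends up as thermal energy (no losses, no phase change).
    Default c_p is that of liquid water in J/(kg*K)."""
    return power_w * time_s / (mass_kg * c_p)

# Example: 0.25 kg of water absorbing 800 W for 60 s (assumed figures)
print(f"dT = {temperature_rise(800, 60, 0.25):.1f} K")  # ~45.9 K
```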
Dipole rotation is the mechanism normally referred to as dielectric heating, and is most widely observable in the microwave oven where it operates most effectively on liquid water, and also, but much less so, on fats and sugars. This is because fats and sugar molecules are far less polar than water molecules, and thus less affected by the forces generated by the alternating electromagnetic fields. Outside of cooking, the effect can be used generally to heat solids, liquids, or gases, provided th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Heat expansion is a result of the increase of what type of energy, exhibited by molecules bumping together?
A. radioactivity
B. harmonic energy
C. light energy
D. kinetic energy
Answer:

id: sciq-9122
question_type: multiple_choice
question: The length of the sloped surface of a ramp is referred to as what?
choices: ["input distance", "accumulation distance", "furlong", "output distance"]
answer: A
prompt:
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
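Concretely, a knowledge space is usually formalized as a family of subsets (feasible knowledge states) of the domain that contains the empty set and the full domain and is closed under union; the antimatroid case adds further axioms. A minimal sketch checking the union-closure formulation on a toy domain; the two-skill example is invented for illustration:

```python
from itertools import combinations

def is_knowledge_space(domain, states):
    """Check that `states` (frozensets over `domain`) contains the empty
    set and the full domain and is closed under union."""
    states = set(states)
    if frozenset() not in states or frozenset(domain) not in states:
        return False
    return all(a | b in states for a, b in combinations(states, 2))

# Toy example: skill "b" has skill "a" as a prerequisite, so the state
# {"b"} alone is not feasible and is deliberately absent.
Q = {"a", "b"}
states = {frozenset(), frozenset({"a"}), frozenset({"a", "b"})}
print(is_knowledge_space(Q, states))  # True
```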
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
Document 2:::
Advanced Placement (AP) Calculus (also known as AP Calc, Calc AB / Calc BC or simply AB / BC) is a set of two distinct Advanced Placement calculus courses and exams offered by the American nonprofit organization College Board. AP Calculus AB covers basic introductions to limits, derivatives, and integrals. AP Calculus BC covers all AP Calculus AB topics plus additional topics (including integration by parts, Taylor series, parametric equations, vector calculus, and polar coordinate functions).
AP Calculus AB
AP Calculus AB is an Advanced Placement calculus course. It is traditionally taken after precalculus and is the first calculus course offered at most schools except for possibly a regular calculus class. The Pre-Advanced Placement pathway for math helps prepare students for further Advanced Placement classes and exams.
Purpose
According to the College Board:
Topic outline
The material includes the study and application of differentiation and integration, and graphical analysis including limits, asymptotes, and continuity. An AP Calculus AB course is typically equivalent to one semester of college calculus.
Analysis of graphs (predicting and explaining behavior)
Limits of functions (one and two sided)
Asymptotic and unbounded behavior
Continuity
Derivatives
Concept
At a point
As a function
Applications
Higher order derivatives
Techniques
Integrals
Interpretations
Properties
Applications
Techniques
Numerical approximations
Fundamental theorem of calculus
Antidifferentiation
L'Hôpital's rule
Separable differential equations
AP Calculus BC
AP Calculus BC is equivalent to a full year regular college course, covering both Calculus I and II. After passing the exam, students may move on to Calculus III (Multivariable Calculus).
Purpose
According to the College Board,
Topic outline
AP Calculus BC includes all of the topics covered in AP Calculus AB, as well as the following:
Convergence tests for series
Taylor series
Parametric equations
Polar functions (inclu
Document 3:::
Further Mathematics is the title given to a number of advanced secondary mathematics courses. The term "Higher and Further Mathematics", and the term "Advanced Level Mathematics", may also refer to any of several advanced mathematics courses at many institutions.
In the United Kingdom, Further Mathematics describes a course studied in addition to the standard mathematics AS-Level and A-Level courses. In the state of Victoria in Australia, it describes a course delivered as part of the Victorian Certificate of Education (see § Australia (Victoria) for a more detailed explanation). Globally, it describes a course studied in addition to GCE AS-Level and A-Level Mathematics, or one which is delivered as part of the International Baccalaureate Diploma.
In other words, Further Mathematics can also be regarded as part of advanced mathematics, or advanced-level math.
United Kingdom
Background
A qualification in Further Mathematics involves studying both pure and applied modules. Whilst the pure modules (formerly known as Pure 4–6 or Core 4–6, now known as Further Pure 1–3, where 4 exists for the AQA board) build on knowledge from the core mathematics modules, the applied modules may start from first principles.
The structure of the qualification varies between exam boards.
With regard to Mathematics degrees, most universities do not require Further Mathematics, and may incorporate foundation math modules or offer "catch-up" classes covering any additional content. Exceptions include the University of Warwick and the University of Cambridge, which require Further Mathematics to at least AS level; University College London requires or recommends an A2 in Further Maths for its maths courses; Imperial College requires an A in A-level Further Maths, while other universities may recommend it or may promise lower offers in return. Some schools and colleges may not offer Further Mathematics, but online resources are available.
Although the subject has about 60% of its cohort obtainin
Document 4:::
The grade (also called slope, incline, gradient, mainfall, pitch or rise) of a physical feature, landform or constructed line refers to the tangent of the angle of that surface to the horizontal. It is a special case of the slope, where zero indicates horizontality. A larger number indicates higher or steeper degree of "tilt". Often slope is calculated as a ratio of "rise" to "run", or as a fraction ("rise over run") in which run is the horizontal distance (not the distance along the slope) and rise is the vertical distance.
Slopes of existing physical features such as canyons and hillsides, stream and river banks and beds are often described as grades, but typically grades are used for human-made surfaces such as roads, landscape grading, roof pitches, railroads, aqueducts, and pedestrian or bicycle routes. The grade may refer to the longitudinal slope or the perpendicular cross slope.
Nomenclature
There are several ways to express slope:
as an angle of inclination to the horizontal. (This is the angle opposite the "rise" side of a triangle with a right angle between vertical rise and horizontal run.)
as a percentage, the formula for which is $100 \times \frac{\text{rise}}{\text{run}}$, which is equivalent to the tangent of the angle of inclination times 100. In Europe and the U.S. percentage "grade" is the most commonly used figure for describing slopes.
as a per mille figure (‰), the formula for which is $1000 \times \frac{\text{rise}}{\text{run}}$, which could also be expressed as the tangent of the angle of inclination times 1000. This is commonly used in Europe to denote the incline of a railway. It is sometimes written as mm/m instead of the ‰ symbol. (Both conversions are illustrated in the sketch following this list.)
as a ratio of one part rise to so many parts run. For example, a slope that has a rise of 5 feet for every 1000 feet of run would have a slope ratio of 1 in 200. (The word "in" is normally used rather than the mathematical ratio notation of "1:200".) This is generally the method used to describe railway grades in Australia and the UK. It is used for roads in Hong Kong, and was used for roa
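A minimal sketch converting among the expressions listed above, and recovering the length of the sloped surface itself from rise and run via the Pythagorean theorem; the sample numbers mirror the "1 in 200" ratio mentioned above:

```python
import math

def grade_percent(rise, run):
    """Grade as a percentage: 100 * rise/run (tangent of the angle times 100)."""
    return 100.0 * rise / run

def grade_per_mille(rise, run):
    """Grade in per mille: 1000 * rise/run."""
    return 1000.0 * rise / run

def angle_degrees(rise, run):
    """Angle of inclination to the horizontal."""
    return math.degrees(math.atan2(rise, run))

def slope_length(rise, run):
    """Length along the sloped surface, from the Pythagorean theorem."""
    return math.hypot(rise, run)

# Example: 5 ft rise over 1000 ft run, i.e. a "1 in 200" grade
rise, run = 5.0, 1000.0
print(grade_percent(rise, run))    # 0.5 %
print(grade_per_mille(rise, run))  # 5.0 per mille
print(angle_degrees(rise, run))    # ~0.29 degrees
print(slope_length(rise, run))     # ~1000.01 ft along the slope
```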
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The length of the sloped surface of a ramp is referred to as what?
A. input distance
B. accumulation distance
C. furlong
D. output distance
Answer:

id: sciq-2629
question_type: multiple_choice
question: The earliest types of what lacked flowers, leaves, roots and stems?
choices: ["animals", "plants", "houses", "clouds"]
answer: B
prompt:
Relevant Documents:
Document 0:::
Plants are the eukaryotes that form the kingdom Plantae; they are predominantly photosynthetic. This means that they obtain their energy from sunlight, using chloroplasts derived from endosymbiosis with cyanobacteria to produce sugars from carbon dioxide and water, using the green pigment chlorophyll. Exceptions are parasitic plants that have lost the genes for chlorophyll and photosynthesis, and obtain their energy from other plants or fungi.
Historically, as in Aristotle's biology, the plant kingdom encompassed all living things that were not animals, and included algae and fungi. Definitions have narrowed since then; current definitions exclude the fungi and some of the algae. By the definition used in this article, plants form the clade Viridiplantae (green plants), which consists of the green algae and the embryophytes or land plants (hornworts, liverworts, mosses, lycophytes, ferns, conifers and other gymnosperms, and flowering plants). A definition based on genomes includes the Viridiplantae, along with the red algae and the glaucophytes, in the clade Archaeplastida.
There are about 380,000 known species of plants, of which the majority, some 260,000, produce seeds. They range in size from single cells to the tallest trees. Green plants provide a substantial proportion of the world's molecular oxygen; the sugars they create supply the energy for most of Earth's ecosystems; other organisms, including animals, either consume plants directly or rely on organisms which do so.
Grain, fruit, and vegetables are basic human foods and have been domesticated for millennia. People use plants for many purposes, such as building materials, ornaments, writing materials, and, in great variety, for medicines. The scientific study of plants is known as botany, a branch of biology.
Definition
Taxonomic history
All living things were traditionally placed into one of two groups, plants and animals. This classification dates from Aristotle (384–322 BC), who distinguished d
Document 1:::
Plant life-form schemes constitute a way of classifying plants alternatively to the ordinary species-genus-family scientific classification. In colloquial speech, plants may be classified as trees, shrubs, herbs (forbs and graminoids), etc. The scientific use of life-form schemes emphasizes plant function in the ecosystem and that the same function or "adaptedness" to the environment may be achieved in a number of ways, i.e. plant species that are closely related phylogenetically may have widely different life-form, for example Adoxa moschatellina and Sambucus nigra are from the same family, but the former is a small herbaceous plant and the latter is a shrub or tree. Conversely, unrelated species may share a life-form through convergent evolution.
While taxonomic classification is concerned with the production of natural classifications (being natural understood either in philosophical basis for pre-evolutionary thinking, or phylogenetically as non-polyphyletic), plant life form classifications uses other criteria than naturalness, like morphology, physiology and ecology.
Life-form and growth-form are essentially synonymous concepts, despite attempts to restrict the meaning of growth-form to types differing in shoot architecture. Most life form schemes are concerned with vascular plants only. Plant construction types may be used in a broader sense to encompass planktophytes, benthophytes (mainly algae) and terrestrial plants.
A popular life-form scheme is the Raunkiær system.
History
One of the earliest attempts to classify the life-forms of plants and animals was made by Aristotle, whose writings are lost. His pupil, Theophrastus, in Historia Plantarum (c. 350 BC), was the first who formally recognized plant habits: trees, shrubs and herbs.
Some earlier authors (e.g., Humboldt, 1806) did classify species according to physiognomy, but were explicit about the entities being merely practical classes without any relation to plant function. A marked exception was
Document 2:::
Edible plant stems are one part of plants that are eaten by humans. Most plants are made up of stems, roots, leaves, and flowers, and produce fruits containing seeds. Humans most commonly eat the seeds (e.g. maize, wheat), fruit (e.g. tomato, avocado, banana), flowers (e.g. broccoli), leaves (e.g. lettuce, spinach, and cabbage), roots (e.g. carrots, beets), and stems (e.g. asparagus) of many plants. There are also a few edible petioles (also known as leaf stems) such as celery or rhubarb.
Plant stems have a variety of functions. Stems support the entire plant and have buds, leaves, flowers, and fruits. Stems are also a vital connection between leaves and roots. They conduct water and mineral nutrients through xylem tissue from roots upward, and organic compounds and some mineral nutrients through phloem tissue in any direction within the plant. Apical meristems, located at the shoot tip and axillary buds on the stem, allow plants to increase in length, surface, and mass. In some plants, such as cactus, stems are specialized for photosynthesis and water storage.
Modified stems
Typical stems are located above ground, but there are modified stems that can be found either above or below ground. Modified stems located above ground are phylloids, stolons, runners, or spurs. Modified stems located below ground are corms, rhizomes, and tubers.
Detailed description of edible plant stems
Asparagus: The edible portion is the rapidly emerging stems that arise from the crowns in the
Bamboo: The edible portion is the young shoot (culm).
Birch: Trunk sap is drunk as a tonic or rendered into birch syrup, vinegar, beer, soft drinks, and other foods.
Broccoli: The edible portion is the peduncle stem tissue, flower buds, and some small leaves.
Cauliflower: The edible portion is proliferated peduncle and flower tissue.
Cinnamon: Many favor the unique sweet flavor of the inner bark of cinnamon, and it is commonly used as a spice.
Fig: The edible portion is stem tissue. The
Document 3:::
Phytomorphology is the study of the physical form and external structure of plants. This is usually considered distinct from plant anatomy, which is the study of the internal structure of plants, especially at the microscopic level. Plant morphology is useful in the visual identification of plants. Recent studies in molecular biology started to investigate the molecular processes involved in determining the conservation and diversification of plant morphologies. In these studies transcriptome conservation patterns were found to mark crucial ontogenetic transitions during the plant life cycle which may result in evolutionary constraints limiting diversification.
Scope
Plant morphology "represents a study of the development, form, and structure of plants, and, by implication, an attempt to interpret these on the basis of similarity of plan and origin". There are four major areas of investigation in plant morphology, and each overlaps with another field of the biological sciences.
First of all, morphology is comparative, meaning that the morphologist examines structures in many different plants of the same or different species, then draws comparisons and formulates ideas about similarities. When structures in different species are believed to exist and develop as a result of common, inherited genetic pathways, those structures are termed homologous. For example, the leaves of pine, oak, and cabbage all look very different, but share certain basic structures and arrangement of parts. The homology of leaves is an easy conclusion to make. The plant morphologist goes further, and discovers that the spines of cactus also share the same basic structure and development as leaves in other plants, and therefore cactus spines are homologous to leaves as well. This aspect of plant morphology overlaps with the study of plant evolution and paleobotany.
Secondly, plant morphology observes both the vegetative (somatic) structures of plants, as well as the reproductive str
Document 4:::
Macroflora is a term used for all the plants occurring in a particular area that are large enough to be seen with the naked eye. It is usually synonymous with the Flora and can be contrasted with the microflora, a term used for all the bacteria and other microorganisms in an ecosystem.
Macroflora is also an informal term used by many palaeobotanists to refer to an assemblage of plant fossils as preserved in the rock. This is in contrast to the flora, which in this context refers to the assemblage of living plants that were growing in a particular area, whose fragmentary remains became entrapped within the sediment from which the rock was formed and thus became the macroflora.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The earliest types of what lacked flowers, leaves, roots and stems?
A. animals
B. plants
C. houses
D. clouds
Answer:

id: sciq-6507
question_type: multiple_choice
question: When hit from behind in a car crash, a passenger can suffer a neck injury called what?
choices: ["twisted neck", "whiplash", "necklash", "inflammation"]
answer: B
prompt:
Relevant Documents:
Document 0:::
The flail space model (FSM) is a model of how a car passenger moves in a vehicle that collides with a roadside feature such as a guardrail or a crash cushion. Its principal purpose is to assess the potential risk of harm to the hypothetical occupant as he or she impacts the interior of the passenger compartment and, ultimately, the efficacy of an experimental roadside feature undergoing full-scale vehicle crash testing.
The FSM eliminates the complexity and expense of using instrumented anthropometric dummies during the crash test experiments. Furthermore, while crash test dummies were developed to model collisions between vehicles, they are not accurate when used for the sorts of collision angles that occur when a vehicle collides with a roadside feature; by contrast, the FSM was designed for such collisions.
History
The FSM is based on research performed at Southwest Research Institute in 1980 and published in 1981 in the paper entitled "Collision Risk Assessment Based on Occupant Flail-Space Model" by Jarvis D. Michie. The FSM (coined by Michie) was accepted by the highway community and published as a key part of the "Recommended Procedures for the Safety Evaluation of Highway Appurtenances" published in 1981 in National Cooperative Highway Research Program (NCHRP) Report 230. In 1993, the NCHRP Report was updated and presented as NCHRP Report 350; in this research effort performed by the Texas Transportation Research Institute, the FSM was reexamined and was unmodified in the new publication. In 2004, Douglas Gabauer further examined the efficacy of the FSM in his PhD thesis. The American Association of State Highway and Transportation Officials (AASHTO) retained the FSM as the method of assessing the risk of harm to vehicle occupants in the 2009 "Manual for Assessing Safety Hardware" that replaced NCHRP Report 350, stating that the FSM had "served its intended purpose well".
Details
The FSM hypothesis divides the collision into two stages. In stage one, t
Document 1:::
The head injury criterion (HIC) is a measure of the likelihood of head injury arising from an impact. The HIC can be used to assess safety related to vehicles, personal protective gear, and sport equipment.
Normally the variable is derived from the measurements of an accelerometer mounted at the center of mass of a crash test dummy’s head, when the dummy is exposed to crash forces.
It is defined as:

$$\mathrm{HIC} = \max_{t_1,\, t_2} \left\{ (t_2 - t_1) \left[ \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} a(t)\, dt \right]^{2.5} \right\},$$

where $t_1$ and $t_2$ are the initial and final times (in seconds) chosen to maximize HIC, and acceleration $a$ is measured in g (standard gravity acceleration). The time duration, $t_2 - t_1$, is limited to a maximum value of 36 ms, usually 15 ms.
This means that the HIC includes the effects of head acceleration and the duration of the acceleration. Large accelerations may be tolerated for very short times.
At a HIC of 1000, there is an 18% probability of a severe head injury, a 55% probability of a serious injury and a 90% probability of a moderate head injury to the average adult.
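A minimal numerical sketch of the maximization above, scanning all sample-aligned windows of a uniformly sampled acceleration trace; the half-sine test pulse and the sampling rate are synthetic assumptions for illustration, not real crash-test data:

```python
import numpy as np

def hic(accel_g, dt, max_window_s=0.015):
    """Head injury criterion from uniformly sampled acceleration in g.

    HIC = max over (t1, t2) of (t2 - t1) * [average accel over (t1, t2)]^2.5,
    with the window capped (15 ms here for HIC-15; some variants allow 36 ms).
    """
    # Rectangle-rule running integral of a(t) for O(1) window averages.
    cum = np.concatenate([[0.0], np.cumsum(accel_g) * dt])
    n = len(accel_g)
    max_span = int(round(max_window_s / dt))
    best = 0.0
    for i in range(n):
        for j in range(i + 1, min(i + max_span, n) + 1):
            duration = (j - i) * dt
            avg = max((cum[j] - cum[i]) / duration, 0.0)  # negative windows can't maximize
            best = max(best, duration * avg**2.5)
    return best

# Synthetic 10 ms half-sine pulse peaking at 80 g, sampled at 10 kHz
dt = 1e-4
t = np.arange(0.0, 0.010, dt)
pulse = 80.0 * np.sin(np.pi * t / 0.010)
print(f"HIC-15 ~ {hic(pulse, dt):.0f}")
```

The brute-force window scan is quadratic in the window span, which is entirely adequate for millisecond-scale crash pulses.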
Automobile safety
HIC is used to determine the U.S. National Highway Traffic Safety Administration (NHTSA) star rating for automobile safety and to determine ratings given by the Insurance Institute for Highway Safety.
According to the Insurance Institute for Highway Safety, head injury risk is evaluated mainly on the basis of head injury criterion. A value of 700 is the maximum allowed under the provisions of the U.S. advanced airbag regulation (NHTSA, 2000) and is the maximum score for an "acceptable" IIHS rating for a particular vehicle.
A HIC-15 (meaning a measure of impact over 15 milliseconds) of 700 is estimated to represent a 5 percent risk of a severe injury (Mertz et al., 1997). A "severe" injury is one with a score of 4+ on the Abbreviated Injury Scale (AIS).
Data for specific vehicles can be found on various automotive review websites. Some sample data is as follows, for comparative purposes:
The 1998 Ford Windstar, marketed as one of the safest minivans of that year, test
Document 2:::
Blunt trauma, also known as blunt force trauma or non-penetrating trauma, describes a physical trauma due to a forceful impact without penetration of the body's surface. Blunt trauma stands in contrast with penetrating trauma, which occurs when an object pierces the skin, enters body tissue, and creates an open wound. Blunt trauma occurs due to direct physical trauma or impactful force to a body part. Such incidents often occur with road traffic collisions, assaults, sports-related injuries, and are notably common among the elderly who experience falls.
Blunt trauma can lead to a wide range of injuries including contusions, concussions, abrasions, lacerations, internal or external hemorrhages, and bone fractures. The severity of these injuries depends on factors such as the force of the impact, the area of the body affected, and underlying comorbidities of the affected individual. In some cases, blunt force trauma can be life-threatening and may require immediate medical attention. Blunt trauma to the head and/or severe blood loss are the most likely causes of death due to blunt force traumatic injury.
Classification
Blunt abdominal trauma
Blunt abdominal trauma (BAT) represents 75% of all blunt trauma and is the most common example of this injury. 75% of BAT occurs in motor vehicle crashes, in which rapid deceleration may propel the driver into the steering wheel, dashboard, or seatbelt, causing contusions in less serious cases, or rupture of internal organs from briefly increased intraluminal pressure in the more serious, depending on the force applied. Initially, there may be few indications that serious internal abdominal injury has occurred, making assessment more challenging and requiring a high degree of clinical suspicion.
There are two basic physical mechanisms at play with the potential of injury to intra-abdominal organs: compression and deceleration. The former occurs from a direct blow, such as a punch, or compression against a non-yielding object
Document 3:::
Spinal precautions, also known as spinal immobilization and spinal motion restriction, are efforts to prevent movement of the spine in those with a risk of a spine injury. This is done as an effort to prevent injury to the spinal cord. It is estimated that 2% of people with blunt trauma will have a spine injury.
Uses
Spinal immobilization was historically used routinely for people who had experienced physical trauma. There is, however, little evidence for its routine use. Long spine boards are often used in the prehospital environment as part of spinal immobilization. Due to concerns about side effects, the National Association of EMS Physicians and the American College of Surgeons recommend its use only in those at high risk. This includes: those with blunt trauma who have a decreased level of consciousness, pain or tenderness in the spine, those with numbness or weakness believed to be due to a spinal injury, and those with a significant trauma mechanism who are intoxicated or have other major injuries. In those with a definite spinal cord injury, immobilization is also recommended.
Neck
There is little high quality evidence for spinal motion stabilization of the neck before arrival at a hospital. Using a hard cervical collar and attaching a person to an EMS stretcher may be sufficient in those who were walking after the accident or during long transports. In those with penetrating neck or head trauma spinal immobilization may increase the risk of death. If intubation is required the cervical collar should be removed and inline stabilization provided.
Mid and low back
Spinal motion stabilization is not supported for penetrating trauma to the back, including that caused by gunshot wounds.
Cervical spine clearance
Paramedics are able to accurately determine who needs or does not need neck immobilization based on an algorithm. There are two main algorithms, the Canadian C-spine rule and NEXUS. The Canadian C-spine rule appears to be better. However, following either ru
Document 4:::
An injury is any physiological damage to living tissue caused by immediate physical stress. Injuries to humans can occur intentionally or unintentionally and may be caused by blunt trauma, penetrating trauma, burning, toxic exposure, asphyxiation, or overexertion. Injuries can occur in any part of the body, and different symptoms are associated with different injuries.
Treatment of a major injury is typically carried out by a health professional and varies greatly depending on the nature of the injury. Traffic collisions are the most common cause of accidental injury and injury-related death among humans. Injuries are distinct from chronic conditions, psychological trauma, infections, or medical procedures, though injury can be a contributing factor to any of these.
Several major health organizations have established systems for the classification and description of human injuries.
Occurrence
Injuries may be intentional or unintentional. Intentional injuries may be acts of violence against others or self-inflicted against one's own person. Accidental injuries may be unforeseeable, or they may be caused by negligence. In order, the most common types of unintentional injuries are traffic accidents, falls, drowning, burns, and accidental poisoning. Certain types of injuries are more common in developed countries or developing countries. Traffic injuries are more likely to kill pedestrians than drivers in developing countries. Scalding burns are more common in developed countries, while open-flame injuries are more common in developing countries.
As of 2021, approximately 4.4 million people are killed due to injuries each year worldwide, constituting nearly 8% of all deaths. 3.16 million of these injuries are unintentional, and 1.25 million are intentional. Traffic accidents are the most common form of deadly injury, causing about one-third of injury-related deaths. One-sixth are caused by suicide, and one-tenth are caused by homicide. Tens of millions of individ
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When hit from behind in a car crash, a passenger can suffer a neck injury called what?
A. twisted neck
B. whiplash
C. necklash
D. inflammation
Answer:

id: sciq-1476
question_type: multiple_choice
question: Hydrogen peroxide is commonly sold as a 3% by volume solution for use as a what?
choices: ["disinfectant", "surfactant", "detergent", "antiseptic"]
answer: A
prompt:
Relevant Documents:
Document 0:::
In organic chemistry, organic peroxides are organic compounds containing the peroxide functional group (R−O−O−R′). If the R′ is hydrogen, the compounds are called hydroperoxides, which are discussed in that article. The O−O bond of peroxides easily breaks, producing free radicals of the form R−O• (the dot represents an unpaired electron). Thus, organic peroxides are useful as initiators for some types of polymerization, such as the acrylic, unsaturated polyester, and vinyl ester resins used in glass-reinforced plastics. MEKP and benzoyl peroxide are commonly used for this purpose. However, the same property also means that organic peroxides can explosively combust. Organic peroxides, like their inorganic counterparts, are often powerful bleaching agents.
Types of organic peroxides
Organic peroxides are classified (i) by the presence or absence of a hydroxyl (-OH) terminus and (ii) by the presence of alkyl vs acyl substituents. One gap in the classes of organic peroxides is diphenyl peroxide. Quantum chemical calculations predict that it undergoes a nearly barrierless reaction akin to the benzidine rearrangement.
Properties
The O−O bond length in peroxides is about 1.45 Å, and the R−O−O angles (R = H, C) are about 110° (water-like). Characteristically, the C−O−O−R (R = H, C) dihedral angles are about 120°. The O−O bond is relatively weak, with a bond dissociation energy less than half the strengths of C−C, C−H, and C−O bonds.
Biology
Peroxides play important roles in biology. Many aspects of biodegradation or aging are attributed to the formation and decay of peroxides formed from oxygen in air. Countering these effects is an array of biological and artificial antioxidants.
Hundreds of peroxides and hydroperoxides are known, being derived from fatty acids, steroids, and terpenes. Fatty acids form a number of 1,2-dioxenes. The biosynthesis of prostaglandins proceeds via an endoperoxide, a class of bicyclic peroxides.
In fireflies, oxidation of luciferins, which is cata
Document 1:::
Disinfectants
The most used disinfectants are those applying
active chlorine (i.e., hypochlorites, chloramines, dichloroisocyanurate and trichloroisocyanurate, wet chlorine, chlorine dioxide, etc.),
active oxygen (peroxides, such as peracetic acid, potassium persulfate, sodium perborate, sodium percarbonate, and urea perhydrate),
iodine (povidone-iodine, Lugol's solution, iodine tincture, iodinated nonionic surfactants),
concentrated alcohols (mainly ethanol, 1-propanol, called also n-propanol and 2-propanol, called isopropanol and mixtures thereof; further, 2-phenoxyethanol and 1- and 2-phenoxypropanols are used),
phenolic substances (such as phenol (also called "carbolic acid"), cresols such as thymol, halogenated (chlorinated, brominated) phenols, such as hexachlorophene, triclosan, trichlorophenol, tribromophenol, pentachlorophenol, salts and isomers thereof),
cationic surfactants, such as some quaternary ammonium cations (such as benzalkonium chloride, cetyl trimethylammonium bromide or chloride, didecyldimethylammonium chloride, cetylpyridinium chloride, benzethonium chloride) and others, non-quaternary compounds, such as chlorhexidine, glucoprotamine, octenidine dihydrochloride etc.),
strong oxidizers, such as ozone and permanganate solutions;
heavy metals and their salts, such as colloidal silver, silver nitrate, mercury chloride, phenylmercury salts, copper sulfate, copper oxide-chloride etc. Heavy metals and their salts are the most toxic and environment-hazardous bactericides and therefore their use is strongly discouraged or prohibited
strong acids (phosphoric, nitric, sulfuric, amidosulfuric, toluenesulfonic acids), pH < 1, and
alkali
Document 2:::
Hydroperoxides or peroxols are compounds of the form ROOH, which contain the hydroperoxy functional group (–OOH). The hydroperoxide anion (HOO−) and the neutral hydroperoxyl radical (HOO·) consist of an unbound hydroperoxy group. When R is organic, the compounds are called organic hydroperoxides. Such compounds are a subset of organic peroxides, which have the formula ROOR. Organic hydroperoxides can either intentionally or unintentionally initiate explosive polymerisation in materials with unsaturated chemical bonds.
Properties
The O−O bond length in peroxides is about 1.45 Å, and the R−O−O angles (R = H, C) are about 110° (water-like). Characteristically, the C−O−O−H dihedral angles are about 120°. The O−O bond is relatively weak, with a bond dissociation energy less than half the strengths of C−C, C−H, and C−O bonds.
Hydroperoxides are typically more volatile than the corresponding alcohols:
tert-BuOOH (b.p. 36°C) vs tert-BuOH (b.p. 82-83°C)
CH3OOH (b.p. 46°C) vs CH3OH (b.p. 65°C)
cumene hydroperoxide (b.p. 153°C) vs cumyl alcohol (b.p. 202°C)
Miscellaneous reactions
Hydroperoxides are mildly acidic; their pKa values range from 11.5 for CH3OOH to 13.1 for Ph3COOH.
Hydroperoxides can be reduced to alcohols with lithium aluminium hydride, as described in this idealized equation:
4 ROOH + LiAlH4 → LiAlO2 + 2 H2O + 4 ROH
This reaction is the basis of methods for analysis of organic peroxides. Another way to evaluate the content of peracids and peroxides is the volumetric titration with alkoxides such as sodium ethoxide.
The phosphite esters and tertiary phosphines also effect reduction:
ROOH + PR3 → OPR3 + ROH
Uses
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
(a) increases
(b) decreases
(c) stays the same
(d) impossible to tell/need more information
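For reference, a minimal numerical sketch (ours, not from the source) confirming the intended answer, (b) decreases, assuming a reversible adiabatic expansion of an ideal diatomic gas, for which T·V^(γ−1) is constant:

```python
# Sketch: reversible adiabatic expansion of an ideal gas obeys
# T * V**(gamma - 1) = const; gamma = 7/5 for a diatomic gas (illustrative).
gamma = 7 / 5
T1, V1, V2 = 300.0, 1.0, 2.0          # initial T in K; volumes in arbitrary units
T2 = T1 * (V1 / V2) ** (gamma - 1)
print(f"T falls from {T1:.0f} K to {T2:.1f} K")   # ~227 K -> temperature decreases
```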
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing their chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college careers at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Hydrogen peroxide is commonly sold as a 3% by volume solution for use as a what?
A. disinfectant
B. surfactant
C. detergent
D. antiseptic
Answer:
|
|
sciq-3449
|
multiple_choice
|
Transport vesicles move what type of molecules from the rough endoplasmic reticulum to the golgi apparatus?
|
[
"proteins",
"acids",
"hormones",
"lipids"
] |
A
|
Relevant Documents:
Document 0:::
Paracellular transport refers to the transfer of substances across an epithelium by passing through the intercellular space between the cells. It is in contrast to transcellular transport, where the substances travel through the cell, passing through both the apical membrane and basolateral membrane.
The distinction has particular significance in renal physiology and intestinal physiology. Transcellular transport often involves energy expenditure whereas paracellular transport is unmediated and passive down a concentration gradient, or by osmosis (for water) and solvent drag for solutes. Paracellular transport also has the benefit that absorption rate is matched to load because it has no transporters that can be saturated.
In most mammals, intestinal absorption of nutrients is thought to be dominated by transcellular transport, e.g., glucose is primarily absorbed via the SGLT1 transporter and other glucose transporters. Paracellular absorption therefore plays only a minor role in glucose absorption, although there is evidence that paracellular pathways become more available when nutrients are present in the intestinal lumen. In contrast, small flying vertebrates (small birds and bats) rely on the paracellular pathway for the majority of glucose absorption in the intestine. This has been hypothesized to compensate for an evolutionary pressure to reduce mass in flying animals, which resulted in a reduction in intestine size and faster transit time of food through the gut.
Capillaries of the blood–brain barrier have only transcellular transport, in contrast with normal capillaries which have both transcellular and paracellular transport.
The paracellular pathway of transport is also important for the absorption of drugs in the gastrointestinal tract. The paracellular pathway allows the permeation of hydrophilic molecules that are not able to permeate through the lipid membrane by the transcellular pathway of absorption. This is particularly important for hydrophi
Document 1:::
Transport by molecular motor proteins (kinesin, dynein and unconventional myosin) is essential for cell functioning and survival. Studies of multiple motors are motivated by the fact that multiple motors are involved in many biological processes such as intracellular transport and mitosis. This increasing interest in modeling multiple motor transport is also due to improved understanding of single motor function. Several models have been proposed in recent years to understand transport by multiple motors.
The models developed can be broadly divided into two categories: (1) mean-field/steady-state models and (2) stochastic models. In the mean-field description, fluctuations in the forces that individual motors feel while pulling the cargo are ignored; in the stochastic description, they are not. Mean-field/steady-state models are therefore suited to modeling transport by a large group of motors, whereas stochastic models are suited to modeling transport by a few motors.
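As an illustration of the stochastic approach, here is a toy Gillespie-style sketch (ours, not any specific published model) in which each of N motors independently binds and unbinds the track; the motor count and rates k_on and k_off are purely illustrative.

```python
import random

# Toy Gillespie-style sketch: each unbound motor binds the track at rate k_on,
# each bound motor unbinds at rate k_off. All parameters are illustrative.
N, k_on, k_off = 4, 5.0, 1.0
bound, t, samples = 0, 0.0, []
random.seed(0)
while t < 10.0:
    rate_bind, rate_unbind = k_on * (N - bound), k_off * bound
    total = rate_bind + rate_unbind
    t += random.expovariate(total)            # waiting time to next event
    if random.random() < rate_bind / total:
        bound += 1                            # one more motor engages
    else:
        bound -= 1                            # one motor detaches
    samples.append(bound)
print(f"event-averaged engaged motors: {sum(samples) / len(samples):.2f}")
```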
Document 2:::
Intraflagellar transport (IFT) is a bidirectional motility along axoneme microtubules that is essential for the formation (ciliogenesis) and maintenance of most eukaryotic cilia and flagella. It is thought to be required to build all cilia that assemble within a membrane projection from the cell surface. Plasmodium falciparum cilia and the sperm flagella of Drosophila are examples of cilia that assemble in the cytoplasm and do not require IFT. The process of IFT involves movement of large protein complexes called IFT particles or trains from the cell body to the ciliary tip and followed by their return to the cell body. The outward or anterograde movement is powered by kinesin-2 while the inward or retrograde movement is powered by cytoplasmic dynein 2/1b. The IFT particles are composed of about 20 proteins organized in two subcomplexes called complex A and B.
IFT was first reported in 1993 by graduate student Keith Kozminski while working in the lab of Dr. Joel Rosenbaum at Yale University. The process of IFT has been best characterized in the biflagellate alga Chlamydomonas reinhardtii as well as the sensory cilia of the nematode Caenorhabditis elegans.
It has been suggested based on localization studies that IFT proteins also function outside of cilia.
Biochemistry
Intraflagellar transport (IFT) describes the bi-directional movement of non-membrane-bound particles along the doublet microtubules of flagellar and motile ciliary axonemes, between the axoneme and the plasma membrane. Studies have shown that the movement of IFT particles along the microtubule is carried out by two different microtubule motors; the anterograde (towards the flagellar tip) motor is heterotrimeric kinesin-2, and the retrograde (towards the cell body) motor is cytoplasmic dynein 1b. IFT particles carry axonemal subunits to the site of assembly at the tip of the axoneme; thus, IFT is necessary for axonemal growth. Therefore, since the axoneme needs a continually fresh supply of prote
Document 3:::
Molecular motors are natural (biological) or artificial molecular machines that are the essential agents of movement in living organisms. In general terms, a motor is a device that consumes energy in one form and converts it into motion or mechanical work; for example, many protein-based molecular motors harness the chemical free energy released by the hydrolysis of ATP in order to perform mechanical work. In terms of energetic efficiency, this type of motor can be superior to currently available man-made motors. One important difference between molecular motors and macroscopic motors is that molecular motors operate in the thermal bath, an environment in which the fluctuations due to thermal noise are significant.
Examples
Some examples of biologically important molecular motors:
Cytoskeletal motors
Myosins are responsible for muscle contraction, intracellular cargo transport, and producing cellular tension.
Kinesin moves cargo inside cells away from the nucleus along microtubules, in anterograde transport.
Dynein produces the axonemal beating of cilia and flagella and also transports cargo along microtubules towards the cell nucleus, in retrograde transport.
Polymerisation motors
Actin polymerization generates forces and can be used for propulsion. ATP is used.
Microtubule polymerization using GTP.
Dynamin is responsible for the separation of clathrin buds from the plasma membrane. GTP is used.
Rotary motors
FoF1-ATP synthase family of proteins convert the chemical energy in ATP to the electrochemical potential energy of a proton gradient across a membrane or the other way around. The catalysis of the chemical reaction and the movement of protons are coupled to each other via the mechanical rotation of parts of the complex. This is involved in ATP synthesis in the mitochondria and chloroplasts as well as in pumping of protons across the vacuolar membrane.
The bacterial flagellum responsible for the swimming and tumbling of E. coli and other bacteria
Document 4:::
The Society of General Physiologists (SGP) is a scientific organization whose purpose is to promote and disseminate knowledge in the field of general physiology, and otherwise to advance understanding and interest in the subject of general physiology. The Society’s main office is located at the Marine Biological Laboratory in Woods Hole, MA, where the society was founded in 1946. Past Presidents of the Society include Richard W. Aldrich, Richard W. Tsien, Clay Armstrong, and Andrew Szent-Gyorgi. The society's archives are held at the National Library of Medicine in Bethesda, Maryland.
Membership
The Society's international membership is made up of nearly 600 career physiologists who work in academia, government, and industry. Membership in the Society is open to any individual actively interested in the field of general physiology and who has made significant contributions to knowledge in that field. The Society has become known for promoting research in many subfields of cellular and molecular physiology, but especially in the fields of membrane transport and ion channels, cell membrane structure, regulation, and dynamics, and cellular contractility and molecular motors.
Activities
The major activity of the Society is its annual symposium, which is held at the Marine Biological Laboratory in Woods Hole, MA. Society of General Physiologists symposia cover the forefront of physiological research and are small enough to maximize discussion and interaction among both young and established investigators. Abstracts of the annual meeting are published in The Journal of General Physiology.
The 2015 symposium (September 16–20) topic is "Macromolecular Local Signaling Complexes." Detailed information regarding the scientific agenda and registration is provided at the symposium website:
https://web.archive.org/web/20150801070408/http://www.sgpweb.org/symposium2015.html
Recent past symposium topics include:
2014 Sensory Transduction
2013 The Enigmatic Chloride Ion: Tra
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Transport vesicles move what type of molecules from the rough endoplasmic reticulum to the golgi apparatus?
A. proteins
B. acids
C. hormones
D. lipids
Answer:
|
|
sciq-7281
|
multiple_choice
|
When a supercooled liquid boils, the temperature drops as the liquid is converted to what?
|
[
"solid",
"atoms",
"vapor",
"carbon"
] |
C
|
Relevant Documents:
Document 0:::
Boiling is the rapid phase transition from liquid to gas or vapor; the reverse of boiling is condensation. Boiling occurs when a liquid is heated to its boiling point, so that the vapour pressure of the liquid is equal to the pressure exerted on the liquid by the surrounding atmosphere. Boiling and evaporation are the two main forms of liquid vapourization.
There are two main types of boiling: nucleate boiling where small bubbles of vapour form at discrete points, and critical heat flux boiling where the boiling surface is heated above a certain critical temperature and a film of vapour forms on the surface. Transition boiling is an intermediate, unstable form of boiling with elements of both types. The boiling point of water is 100 °C or 212 °F but is lower with the decreased atmospheric pressure found at higher altitudes.
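The pressure dependence of the boiling point can be made concrete with a small sketch (ours, not from the source) that inverts the Antoine equation for water; the constants are commonly tabulated values for the 1–100 °C range and should be treated as illustrative.

```python
import math

# Sketch: boiling point of water vs. ambient pressure, by inverting the
# Antoine equation log10(P_mmHg) = A - B / (C + T_celsius).
# Constants are commonly tabulated for water over 1-100 C (illustrative).
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_c(pressure_mmhg: float) -> float:
    return B / (A - math.log10(pressure_mmhg)) - C

print(f"sea level (760 mmHg): {boiling_point_c(760):.1f} C")          # ~100 C
print(f"~3000 m altitude (~526 mmHg): {boiling_point_c(526):.1f} C")  # ~90 C
```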
Boiling water is used as a method of making it potable by killing microbes and viruses that may be present. The sensitivity of different micro-organisms to heat varies, but if water is held at its boiling point for one minute, most micro-organisms and viruses are inactivated. Ten minutes at a temperature of 70 °C (158 °F) is also sufficient to inactivate most bacteria.
Boiling water is also used in several cooking methods including boiling, steaming, and poaching.
Types
Free convection
The lowest heat flux seen in boiling is only sufficient to cause natural convection, where the warmer fluid rises due to its slightly lower density. This condition occurs only when the superheat is very low, meaning that the hot surface near the fluid is nearly the same temperature as the boiling point.
Nucleate
Nucleate boiling is characterised by the growth of bubbles, or pops, on a heated surface (heterogeneous nucleation), which rise from discrete points on the surface whose temperature is only slightly above that of the liquid. In general, the number of nucleation sites is increased by an increasing surface temperature.
An irregular surface of the boiling
Document 1:::
Superheated water is liquid water under pressure at temperatures between the usual boiling point, and the critical temperature, . It is also known as "subcritical water" or "pressurized hot water". Superheated water is stable because of overpressure that raises the boiling point, or by heating it in a sealed vessel with a headspace, where the liquid water is in equilibrium with vapour at the saturated vapor pressure. This is distinct from the use of the term superheating to refer to water at atmospheric pressure above its normal boiling point, which has not boiled due to a lack of nucleation sites (sometimes experienced by heating liquids in a microwave).
Many of water's anomalous properties are due to very strong hydrogen bonding. Over the superheated temperature range the hydrogen bonds break, changing the properties more than usually expected by increasing temperature alone. Water becomes less polar and behaves more like an organic solvent such as methanol or ethanol. Solubility of organic materials and gases increases by several orders of magnitude and the water itself can act as a solvent, reagent, and catalyst in industrial and analytical applications, including extraction, chemical reactions and cleaning.
Change of properties with temperature
All materials change with temperature, but superheated water exhibits greater changes than would be expected from temperature considerations alone. Viscosity and surface tension of water drop and diffusivity increases with increasing temperature.
Self-ionization of water increases with temperature, and the pKw of water at 250 °C is closer to 11 than the more familiar 14 at 25 °C. This means the concentration of hydronium ion (H3O+) and the concentration of hydroxide (OH−) are increased while the pH remains neutral. Specific heat capacity at constant pressure also increases with temperature, from 4.187 kJ/(kg·K) at 25 °C to 8.138 kJ/(kg·K) at 350 °C. A significant effect on the behaviour of water at high temperatures is decreased di
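Since neutrality means equal hydronium and hydroxide concentrations, the neutral pH is simply pKw/2; a one-liner sketch (ours) using the approximate pKw values quoted above:

```python
# Sketch: neutral pH from pKw (neutrality: [H3O+] = [OH-], so pH = pKw / 2).
for t_c, pkw in [(25, 14.0), (250, 11.0)]:    # approximate pKw values from the text
    print(f"{t_c} C: pKw ~ {pkw} -> neutral pH ~ {pkw / 2:.1f}")
```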
Document 2:::
In thermodynamics, superheating (sometimes referred to as boiling retardation, or boiling delay) is the phenomenon in which a liquid is heated to a temperature higher than its boiling point, without boiling. This is a so-called metastable state or metastate, where boiling might occur at any time, induced by external or internal effects. Superheating is achieved by heating a homogeneous substance in a clean container, free of nucleation sites, while taking care not to disturb the liquid.
This may occur by microwaving water in a very smooth container. Disturbing the water may cause an unsafe eruption of hot water and result in burns.
Cause
Water is said to "boil" when bubbles of water vapor grow without bound, bursting at the surface. For a vapor bubble to expand, the temperature must be high enough that the vapor pressure exceeds the ambient pressure (the atmospheric pressure, primarily). Below that temperature, a water vapor bubble will shrink and vanish.
Superheating is an exception to this simple rule; a liquid is sometimes observed not to boil even though its vapor pressure does exceed the ambient pressure. The cause is an additional force, the surface tension, which suppresses the growth of bubbles.
Surface tension makes the bubble act like an elastic balloon. The pressure inside is raised slightly by the "skin" attempting to contract. For the bubble to expand, the temperature must be raised slightly above the boiling point to generate enough vapor pressure to overcome both surface tension and ambient pressure.
What makes superheating so explosive is that a larger bubble is easier to inflate than a small one; just as when blowing up a balloon, the hardest part is getting started. It turns out the excess pressure due to surface tension is inversely proportional to the diameter of the bubble. That is, Δp = 2σ/r = 4σ/d, where σ is the surface tension and r and d are the bubble's radius and diameter.
This can be derived by imagining a plane cutting a bubble into two halves. Each half is pulled towards the middle with a surface tension force σπd (the surface tension acting along the circumference of the circular cut), which
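A small sketch (ours, not from the source) of the Laplace excess pressure Δp = 2σ/r shows why very small bubbles require substantial superheat; the surface tension value is an approximate figure for water near 100 °C.

```python
# Sketch: Laplace excess pressure inside a vapour bubble, dp = 2 * sigma / r.
# sigma ~ 0.059 N/m approximates water's surface tension near 100 C (illustrative).
sigma = 0.059                                  # N/m
for r in (1e-7, 1e-6, 1e-5):                   # bubble radius, m
    dp = 2 * sigma / r                         # Pa; grows as bubbles shrink
    print(f"r = {r:.0e} m -> excess pressure ~ {dp / 1e3:.0f} kPa")
```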
Document 3:::
Boiling-point elevation describes the phenomenon that the boiling point of a liquid (a solvent) will be higher when another compound is added, meaning that a solution has a higher boiling point than a pure solvent. This happens whenever a non-volatile solute, such as a salt, is added to a pure solvent, such as water. The boiling point can be measured accurately using an ebullioscope.
Explanation
The boiling point elevation is a colligative property, which means that it is dependent on the presence of dissolved particles and their number, but not their identity. It is an effect of the dilution of the solvent in the presence of a solute. It is a phenomenon that happens for all solutes in all solutions, even in ideal solutions, and does not depend on any specific solute–solvent interactions. The boiling point elevation happens both when the solute is an electrolyte, such as various salts, and a nonelectrolyte. In thermodynamic terms, the origin of the boiling point elevation is entropic and can be explained in terms of the vapor pressure or chemical potential of the solvent. In both cases, the explanation depends on the fact that many solutes are only present in the liquid phase and do not enter into the gas phase (except at extremely high temperatures).
Put in vapor pressure terms, a liquid boils at the temperature when its vapor pressure equals the surrounding pressure. For the solvent, the presence of the solute decreases its vapor pressure by dilution. A nonvolatile solute has a vapor pressure of zero, so the vapor pressure of the solution is less than the vapor pressure of the solvent. Thus, a higher temperature is needed for the vapor pressure to reach the surrounding pressure, and the boiling point is elevated.
Put in chemical potential terms, at the boiling point, the liquid phase and the gas (or vapor) phase have the same chemical potential (or vapor pressure) meaning that they are energetically equivalent. The chemical potential is dependent on the temper
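As a worked example (ours, not from the source), the elevation can be estimated with the colligative formula ΔT = i·Kb·m; Kb ≈ 0.512 K·kg/mol for water, and a van 't Hoff factor of 2 is assumed for fully dissociated NaCl.

```python
# Sketch: boiling-point elevation dT = i * K_b * m for an aqueous solution.
K_b = 0.512            # ebullioscopic constant of water, K kg / mol
i, m = 2, 1.0          # van 't Hoff factor (NaCl, full dissociation assumed); mol/kg
dT = i * K_b * m
print(f"boiling point raised by ~{dT:.2f} K")   # ~1.02 K for 1 molal NaCl
```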
Document 4:::
A liquid–liquid critical point (or LLCP) is the endpoint of a liquid–liquid phase transition line (LLPT); it is a critical point where two types of local structures coexist at the exact ratio of unity. This hypothesis was first developed by Peter Poole, Francesco Sciortino, Uli Essmann and H. Eugene Stanley in Boston to obtain a quantitative understanding of the huge number of anomalies present in water.
Near a liquid–liquid critical point, there is always a competition between two alternative local structures. For instance, in supercooled water, two types of local structures have been predicted: a low-density local configuration (LD) and a high-density local configuration (HD), so above the critical pressure, the liquid is composed by a majority of HD local structure, while below the critical pressure a higher fraction of LD local configurations is present. The ratio between HD and LD configurations is determined according to the thermodynamic equilibrium of the system, which is often governed by external variables such as pressure and temperature.
The liquid–liquid critical point theory can be applied to several liquids that possess the tetrahedral symmetry. The study of liquid–liquid critical points is an active research area with hundreds of articles having been published, though only a few of these investigations have been experimental since most modern probing techniques are not fast and/or sensitive enough to study them.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When a supercooled liquid boils, the temperature drops as the liquid is converted to what?
A. solid
B. atoms
C. vapor
D. carbon
Answer:
|
|
sciq-1875
|
multiple_choice
|
Friction does negative work and removes some of the energy the person expends and converts it to which kind of energy?
|
[
"erosion",
"thermal",
"hydro",
"evaporation"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
(a) increases
(b) decreases
(c) stays the same
(d) impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Sliding is a type of motion between two surfaces in contact. This can be contrasted to rolling motion. Both types of motion may occur in bearings.
The relative motion or tendency toward such motion between two surfaces is resisted by friction. Friction may damage or "wear" the surfaces in contact. However, wear can be reduced by lubrication. The science and technology of friction, lubrication, and wear is known as tribology.
Sliding may occur between two objects of arbitrary shape, whereas rolling friction is the frictional force associated with the rotational movement of a somewhat disclike or other circular object along a surface. Generally, the frictional force of rolling friction is less than that associated with sliding kinetic friction. Typical values for the coefficient of rolling friction are less than that of sliding friction. Correspondingly, sliding friction typically produces greater sound and thermal by-products. One of the most common examples of sliding friction is the movement of braking motor vehicle tires on a roadway, a process which generates considerable heat and sound, and is typically taken into account in assessing the magnitude of roadway noise pollution.
Sliding friction
Sliding friction (also called kinetic friction) is a contact force that resists the sliding motion of two objects or an object and a surface. Sliding friction is almost always less than that of static friction; this is why it is easier to move an object once it starts moving rather than to get the object to begin moving from a rest position.
The magnitude of this force is given by F_k = μ_k N, where F_k is the force of kinetic friction, μ_k is the coefficient of kinetic friction, and N is the normal force.
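A minimal worked example (ours, not from the source) of that relation for a block sliding on a level surface, with an illustrative coefficient:

```python
# Sketch: kinetic friction on a level surface, F_k = mu_k * N.
mu_k = 0.3                        # coefficient of kinetic friction (illustrative)
mass, g = 10.0, 9.81              # kg; m/s^2
N = mass * g                      # normal force equals weight on a level surface
print(f"F_k = {mu_k * N:.1f} N")  # ~29.4 N opposing the sliding
```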
Examples of sliding friction
Sledding
Pushing an object across a surface
Rubbing one's hands together (The friction force generates heat.)
A car sliding on ice
A car skidding as it turns a corner
Opening a window
Almost any motion where there is contact between an object and a surface
Falling down a bowling
Document 2:::
In physics, work is the energy transferred to or from an object via the application of force along a displacement. In its simplest form, for a constant force aligned with the direction of motion, the work equals the product of the force strength and the distance traveled. A force is said to do positive work if when applied it has a component in the direction of the displacement of the point of application. A force does negative work if it has a component opposite to the direction of the displacement at the point of application of the force.
For example, when a ball is held above the ground and then dropped, the work done by the gravitational force on the ball as it falls is positive, and is equal to the weight of the ball (a force) multiplied by the distance to the ground (a displacement). If the ball is thrown upwards, the work done by the gravitational force is negative, and is equal to the weight multiplied by the displacement in the upwards direction.
Both force and displacement are vectors. The work done is given by the dot product of the two vectors. When the force is constant and the angle between the force and the displacement is also constant, then the work done is given by W = F s cos θ, where F is the magnitude of the force, s is the magnitude of the displacement, and θ is the angle between them.
Work is a scalar quantity, so it has only magnitude and no direction. Work transfers energy from one place to another, or one form to another. The SI unit of work is the joule (J), the same unit as for energy.
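A short sketch (ours, not from the source) of work as the dot product of a constant force and displacement, recovering both W and the angle between the vectors:

```python
import math

# Sketch: work as the dot product of constant force and displacement, W = F . d.
F = (3.0, 4.0)                    # force components, N
d = (2.0, 0.0)                    # displacement components, m
W = sum(f * s for f, s in zip(F, d))
theta = math.acos(W / (math.hypot(*F) * math.hypot(*d)))
print(f"W = {W:.1f} J, angle = {math.degrees(theta):.1f} deg")  # 6.0 J, ~53.1 deg
```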
History
The ancient Greek understanding of physics was limited to the statics of simple machines (the balance of forces), and did not include dynamics or the concept of work. During the Renaissance the dynamics of the Mechanical Powers, as the simple machines were called, began to be studied from the standpoint of how far they could lift a load, in addition to the force they could apply, leading eventually to the new concept of mechanical work. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche (On Me
Document 3:::
Belt friction is a term describing the friction forces between a belt and a surface, such as a belt wrapped around a bollard. When a force applies a tension to one end of a belt or rope wrapped around a curved surface, the frictional force between the two surfaces increases with the amount of wrap about the curved surface, and only part of that force (or resultant belt tension) is transmitted to the other end of the belt or rope. Belt friction can be modeled by the Belt friction equation.
In practice, the theoretical tension acting on the belt or rope calculated by the belt friction equation can be compared to the maximum tension the belt can support. This helps a designer of such a system determine how many times the belt or rope must be wrapped around a curved surface to prevent it from slipping. Mountain climbers and sailing crews demonstrate a working knowledge of belt friction when accomplishing tasks with ropes, pulleys, bollards and capstans.
Equation
The equation used to model belt friction, assuming the belt has no mass and its material is a fixed composition, is:
T_1 = T_2 e^(μ_s φ), where T_1 is the tension of the pulling side, T_2 is the tension of the resisting side, μ_s is the static friction coefficient, which has no units, and φ is the angle, in radians, formed by the first and last spots the belt touches the pulley, with the vertex at the center of the pulley.
The tension on the pulling side of the belt and pulley has the ability to increase exponentially if the magnitude of the belt angle increases (e.g. it is wrapped around the pulley segment numerous times).
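A small sketch (ours, not from the source) of that exponential growth in holding capacity with wrap angle; the friction coefficient and resisting tension are illustrative values.

```python
import math

# Sketch: capstan (belt friction) equation T_pull = T_resist * exp(mu_s * phi):
# each extra full wrap multiplies the holding capacity by exp(2 * pi * mu_s).
mu_s, T_resist = 0.3, 100.0       # friction coefficient; resisting tension in N
for wraps in (0.5, 1, 2, 3):
    phi = 2 * math.pi * wraps     # wrap angle in radians
    print(f"{wraps} wrap(s): holds up to {T_resist * math.exp(mu_s * phi):.0f} N")
```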
Generalization for a rope lying on an arbitrary orthotropic surface
If a rope is lying in equilibrium under tangential forces on a rough orthotropic surface, then the following three conditions are all satisfied:
1. No separation – the normal reaction N is positive for all points of the rope curve: N = T κ_n > 0, where κ_n is the normal curvature of the rope curve and T is the local rope tension.
2. Dragging coefficient of friction and angle are satisfying
Document 4:::
Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary educ
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Friction does negative work and removes some of the energy the person expends and converts it to which kind of energy?
A. erosion
B. thermal
C. hydro
D. evaporation
Answer:
|
|
sciq-10935
|
multiple_choice
|
Both pathways that isolate a population reproductively in some form, allopatric and sympatric describe what, which means the creation of new species?
|
[
"accumulation",
"extinction",
"speciation",
"bacterial"
] |
C
|
Relevant Documents:
Document 0:::
Plant evolution is the subset of evolutionary phenomena that concern plants. Evolutionary phenomena are characteristics of populations that are described by averages, medians, distributions, and other statistical methods. This distinguishes plant evolution from plant development, a branch of developmental biology which concerns the changes that individuals go through in their lives. The study of plant evolution attempts to explain how the present diversity of plants arose over geologic time. It includes the study of genetic change and the consequent variation that often results in speciation, one of the most important types of radiation into taxonomic groups called clades. A description of radiation is called a phylogeny and is often represented by type of diagram called a phylogenetic tree.
Evolutionary trends
Differences between plant and animal physiology and reproduction cause minor differences in how they evolve.
One major difference is the totipotent nature of plant cells, allowing them to reproduce asexually much more easily than most animals. They are also capable of polyploidy – where more than two chromosome sets are inherited from the parents. This allows relatively fast bursts of evolution to occur, for example by the effect of gene duplication. The long periods of dormancy that seed plants can employ also makes them less vulnerable to extinction, as they can "sit out" the tough periods and wait until more clement times to leap back to life.
The effect of these differences is most profoundly seen during extinction events. These events, which wiped out between 6 and 62% of terrestrial animal families, had "negligible" effect on plant families. However, the ecosystem structure is significantly rearranged, with the abundances and distributions of different groups of plants changing profoundly. These effects are perhaps due to the higher diversity within families, as extinction – which was common at the species level – was very selective. For example, win
Document 1:::
Megaevolution describes the most dramatic events in evolution. It is no longer suggested that the evolutionary processes involved are necessarily special, although in some cases they might be. Whereas macroevolution can apply to relatively modest changes that produced diversification of species and genera and are readily compared to microevolution, "megaevolution" is used for great changes. Megaevolution has been extensively debated because it has been seen as a possible objection to Charles Darwin's theory of gradual evolution by natural selection.
A list was prepared by John Maynard Smith and Eörs Szathmáry which they called The Major Transitions in Evolution. On the 1999 edition of the list they included:
Replicating molecules: change to populations of molecules in protocells
Independent replicators leading to chromosomes
RNA as gene and enzyme change to DNA genes and protein enzymes
Bacterial cells (prokaryotes) leading to cells (eukaryotes) with nuclei and organelles
Asexual clones leading to sexual populations
Single-celled organisms leading to fungi, plants and animals
Solitary individuals leading to colonies with non-reproducing castes (termites, ants & bees)
Primate societies leading to human societies with language
Some of these topics had been discussed before.
Numbers one to six on the list are events which are of huge importance, but about which we know relatively little. All occurred before (and mostly very much before) the fossil record started, or at least before the Phanerozoic eon.
Numbers seven and eight on the list are of a different kind from the first six, and have generally not been considered by the other authors. Number four is of a type which is not covered by traditional evolutionary theory. The origin of eukaryotic cells is probably due to symbiosis between prokaryotes. This is a kind of evolution which must be a rare event.
The Cambrian radiation example
The Cambrian explosion or Cambrian radiation was the relatively rapid appeara
Document 2:::
Cladogenesis is an evolutionary splitting of a parent species into two distinct species, forming a clade.
This event usually occurs when a few organisms end up in new, often distant areas or when environmental changes cause several extinctions, opening up ecological niches for the survivors and causing population bottlenecks and founder effects changing allele frequencies of diverging populations compared to their ancestral population. The events that cause these species to originally separate from each other over distant areas may still allow both of the species to have equal chances of surviving, reproducing, and even evolving to better suit their environments while still being two distinct species due to subsequent natural selection, mutations and genetic drift.
Cladogenesis is in contrast to anagenesis, in which an ancestral species gradually accumulates change, and eventually, when enough is accumulated, the species is sufficiently distinct and different enough from its original starting form that it can be labeled as a new form - a new species. With anagenesis, the lineage in a phylogenetic tree does not split.
To determine whether a speciation event is cladogenesis or anagenesis, researchers may use simulation, evidence from fossils, molecular evidence from the DNA of different living species, or modelling. It has however been debated whether the distinction between cladogenesis and anagenesis is necessary at all in evolutionary theory.
See also
Anagenesis
Evolutionary biology
Speciation
Document 3:::
Taxon cycles refer to a biogeographical theory of how species evolve through range expansions and contractions over time associated with adaptive shifts in the ecology and morphology of species. The taxon cycle concept was explicitly formulated by biologist E. O. Wilson in 1961 after he surveyed the distributions, habitats, behavior and morphology of ant species in the Melanesian archipelago.
Stages of the taxon cycle
Wilson categorized species into evolutionary "stages", which today are commonly described in the outline by Ricklefs & Cox (1972). However, with the advent of molecular techniques to construct time-calibrated phylogenetic relationships between species, the taxon cycle concept was further developed to include well-defined temporal scales and combined with concepts from ecological succession and speciation cycle theories. Taxon cycles have mainly been described in island settings (archipelagos), where the distributions and movements of species are readily recognized, but may also occur in continental biota.
Stage I: Young, rapidly expanding, undifferentiated, widely and continuously distributed species in the initial colonization stage inhabiting small island, coastal or disturbed (marginal) habitat. Such species are hypothesized to include very good dispersers, ephemeral species and ecological "supertramps".
Stage II: Species that are generally widespread across many islands, but where geographical expansion has slowed, population differentiation has generated subspecies or incipient species, and local extinction on small islands may have created gaps in the distribution. This stage includes species that have maintained a relatively good dispersal ability such as "great speciators". Early-stage "species complexes" may consist of stage II species.
Stage III: Older, well-differentiated and well-defined species that have moved to habitats inland (and uphill) and where reduced dispersal ability and extinctions have fragmented the distribution to fewer
Document 4:::
Colonisation or colonization is the process in biology by which a species spreads to new areas. Colonisation often refers to successful immigration where a population becomes integrated into an ecological community, having resisted initial local extinction. In ecology, it is represented by the symbol λ (lowercase lambda) to denote the long-term intrinsic growth rate of a population.
One classic scientific model in biogeography posits that a species must continue to colonize new areas through its life cycle (called a taxon cycle) in order to achieve longevity. Accordingly, colonisation and extinction are key components of island biogeography, a theory that has many applications in ecology, such as metapopulations.
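A minimal sketch (ours, not from the source) of λ as a discrete-time growth rate, N_{t+1} = λ·N_t, with purely illustrative values:

```python
# Sketch: lambda as a discrete-time growth rate, N_{t+1} = lambda * N_t.
# A colonising population persists when lambda > 1; values are illustrative.
lam, N = 1.2, 10.0                # per-generation growth rate; founding population
for _ in range(5):
    N *= lam
print(f"population after 5 generations: {N:.1f}")   # 10 * 1.2**5 ~ 24.9
```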
Scale
Colonisation occurs on several scales. In its most basic form it occurs as biofilm, the formation of communities of microorganisms on surfaces. At small scales it involves colonising new sites, perhaps as a result of environmental change, and at larger scales a species expands its range to encompass new areas. This can be via a series of small encroachments, such as in woody plant encroachment, or by long-distance dispersal. The term range expansion is also used.
Use
The term is generally only used to refer to the spread of a species into new areas by natural means, as opposed to unnatural introduction or translocation by humans, which may lead to invasive species.
Colonisation events
Large-scale notable pre-historic colonisation events include:
Arthropods
the colonisation of the earth's land by the first animals, the arthropods. The first fossils of land animals come from millipedes. These were seen about 450 million years ago (Dunn, 2013).
Humans
the early human migration and colonisation of areas outside Africa according to the recent African origin paradigm, resulting in the extinction of Pleistocene megafauna, although the role of humans in this event is controversial.
Some large-scale notable colonisation events during the 20th century are:
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Both pathways that isolate a population reproductively in some form, allopatric and sympatric describe what, which means the creation of new species?
A. accumulation
B. extinction
C. speciation
D. bacterial
Answer:
|
|
sciq-2975
|
multiple_choice
|
Where does digestion begin?
|
[
"oral cavity",
"spicule cavity",
"excretory system",
"gastrovascular cavity"
] |
D
|
Relevant Documents:
Document 0:::
Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use.
In the human digestive system, food enters the mouth, where mechanical digestion starts with the action of mastication (chewing) and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food; the saliva also contains mucus, which lubricates the food, and hydrogen carbonate, which provides the ideal conditions of pH (alkaline) for amylase to work, and electrolytes (Na+, K+, Cl−, HCO−3). About 30% of starch is hydrolyzed into disaccharide in the oral cavity (mouth). After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. They provide a slimy layer that acts as a shield against the damag
Document 1:::
The gastrointestinal wall of the gastrointestinal tract is made up of four layers of specialised tissue. From the inner cavity of the gut (the lumen) outwards, these are:
Mucosa
Submucosa
Muscular layer
Serosa or adventitia
The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the lumen of the tract and comes into direct contact with digested food (chyme). The mucosa itself is made up of three layers: the epithelium, where most digestive, absorptive and secretory processes occur; the lamina propria, a layer of connective tissue; and the muscularis mucosae, a thin layer of smooth muscle.
The submucosa contains nerves including the submucous plexus (also called Meissner's plexus), blood vessels and elastic fibres with collagen, that stretches with increased capacity but maintains the shape of the intestine.
The muscular layer surrounds the submucosa. It comprises layers of smooth muscle in longitudinal and circular orientation that also helps with continued bowel movements (peristalsis) and the movement of digested material out of and along the gut. In between the two layers of muscle lies the myenteric plexus (also called Auerbach's plexus).
The serosa/adventitia are the final layers. These are made up of loose connective tissue and coated in mucus so as to prevent any friction damage from the intestine rubbing against other tissue. The serosa is present if the tissue is within the peritoneum, and the adventitia if the tissue is retroperitoneal.
Structure
When viewed under the microscope, the gastrointestinal wall has a consistent general form, but with certain parts differing along its course.
Mucosa
The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the cavity (lumen) of the tract and comes into direct contact with digested food (chyme). The mucosa is made up of three layers:
The epithelium is the innermost layer. It is where most digestive, absorptive and secretory processes occur.
The lamina propr
Document 2:::
The mouth is the body orifice through which many animals ingest food and vocalize. The body cavity immediately behind the mouth opening, known as the oral cavity, is also the first part of the alimentary canal which leads to the pharynx and the gullet. In tetrapod vertebrates, the mouth is bounded on the outside by the lips and cheeks — thus the oral cavity is also known as the buccal cavity (from Latin bucca, meaning "cheek") — and contains the tongue on the inside. Except for some groups like birds and lissamphibians, vertebrates usually have teeth in their mouths, although some fish species have pharyngeal teeth instead of oral teeth.
Most bilaterian phyla, including arthropods, molluscs and chordates, have a two-opening gut tube with a mouth at one end and an anus at the other. Which end forms first in ontogeny is a criterion used to classify bilaterian animals into protostomes and deuterostomes.
Development
In the first multicellular animals, there was probably no mouth or gut and food particles were engulfed by the cells on the exterior surface by a process known as endocytosis. The particles became enclosed in vacuoles into which enzymes were secreted and digestion took place intracellularly. The digestive products were absorbed into the cytoplasm and diffused into other cells. This form of digestion is used nowadays by simple organisms such as Amoeba and Paramecium and also by sponges which, despite their large size, have no mouth or gut and capture their food by endocytosis.
However, most animals have a mouth and a gut, the lining of which is continuous with the epithelial cells on the surface of the body. A few animals which live parasitically originally had guts but have secondarily lost these structures. The original gut of diploblastic animals probably consisted of a mouth and a one-way gut. Some modern invertebrates still have such a system: food being ingested through the mouth, partially broken down by enzymes secreted in the gut, and t
Document 3:::
Hindgut fermentation is a digestive process seen in monogastric herbivores, animals with a simple, single-chambered stomach. Cellulose is digested with the aid of symbiotic bacteria. The microbial fermentation occurs in the digestive organs that follow the small intestine: the large intestine and cecum. Examples of hindgut fermenters include proboscideans and large odd-toed ungulates such as horses and rhinos, as well as small animals such as rodents, rabbits and koalas. In contrast, foregut fermentation is the form of cellulose digestion seen in ruminants such as cattle which have a four-chambered stomach, as well as in sloths, macropodids, some monkeys, and one bird, the hoatzin.
Cecum
Hindgut fermenters generally have a cecum and large intestine that are much larger and more complex than those of a foregut or midgut fermenter. Research on small cecum fermenters such as flying squirrels, rabbits and lemurs has revealed these mammals to have a GI tract about 10-13 times the length of their body. This is due to the high intake of fiber and other hard to digest compounds that are characteristic to the diet of monogastric herbivores. Unlike in foregut fermenters, the cecum is located after the stomach and small intestine in monogastric animals, which limits the amount of further digestion or absorption that can occur after the food is fermented.
Large intestine
In smaller hindgut fermenters of the order Lagomorpha (rabbits, hares, and pikas), cecotropes formed in the cecum are passed through the large intestine and subsequently reingested to allow another opportunity to absorb nutrients. Cecotropes are surrounded by a layer of mucus which protects them from stomach acid but which does not inhibit nutrient absorption in the small intestine. Coprophagy is also practiced by some rodents, such as the capybara, guinea pig and related species, and by the marsupial common ringtail possum. This process is also beneficial in allowing for restoration of the microflora pop
Document 4:::
The Joan Mott Prize Lecture is a prize lecture awarded annually by The Physiological Society in honour of Joan Mott.
Laureates
Laureates of the award have included:
- Intestinal absorption of sugars and peptides: from textbook to surprises
See also
Physiological Society Annual Review Prize Lecture
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where does digestion begin?
A. oral cavity
B. spicule cavity
C. excretory system
D. gastrovascular cavity
Answer:
|
|
sciq-8271
|
multiple_choice
|
If two populations do not mate and produce fertile offspring, what results?
|
[
"large species",
"separate species",
"same species",
"small species"
] |
B
|
Relevant Documents:
Document 0:::
Reinforcement is a process within speciation where natural selection increases the reproductive isolation between two populations of species by reducing the production of hybrids. Evidence for speciation by reinforcement has been gathered since the 1990s, and along with data from comparative studies and laboratory experiments, has overcome many of the objections to the theory. Differences in behavior or biology that inhibit formation of hybrid zygotes are termed prezygotic isolation. Reinforcement can be shown to be occurring (or to have occurred in the past) by measuring the strength of prezygotic isolation in a sympatric population in comparison to an allopatric population of the same species. Comparative studies of this allow for determining large-scale patterns in nature across various taxa. Mating patterns in hybrid zones can also be used to detect reinforcement. Reproductive character displacement is seen as a result of reinforcement, so many of the cases in nature express this pattern in sympatry. Reinforcement's prevalence is unknown, but the patterns of reproductive character displacement are found across numerous taxa (vertebrates, invertebrates, plants, and fungi), and is considered to be a common occurrence in nature. Studies of reinforcement in nature often prove difficult, as alternative explanations for the detected patterns can be asserted. Nevertheless, empirical evidence exists for reinforcement occurring across various taxa and its role in precipitating speciation is conclusive.
Evidence from nature
Amphibians
The two frog species Litoria ewingi and L. verreauxii live in southern Australia with their two ranges overlapping. The species have very similar calls in allopatry, but express clinal variation in sympatry, with notable distinctness in calls that generate female preference discrimination. The zone of overlap sometimes forms hybrids and is thought to originate by secondary contact of once fully allopatric populations.
Allopatric populat
Document 1:::
The "Vicar of Bray" hypothesis (or Fisher-Muller Model) attempts to explain why sexual reproduction might have advantages over asexual reproduction. Reproduction is the process by which organisms give rise to offspring. Asexual reproduction involves a single parent and results in offspring that are genetically identical to each other and to the parent.
In contrast to asexual reproduction, sexual reproduction involves two parents. Both the parents produce gametes through meiosis, a special type of cell division that reduces the chromosome number by half. During an early stage of meiosis, before the chromosomes are separated in the two daughter cells, the chromosomes undergo genetic recombination. This allows them to exchange some of their genetic information. Therefore, the gametes from a single organism are all genetically different from each other. The process in which the two gametes from the two parents unite is called fertilization. Half of the genetic information from both parents is combined. This results in offspring that are genetically different from each other and from the parents.
In short, sexual reproduction allows a continuous rearrangement of genes. Therefore, the offspring of a population of sexually reproducing individuals will show a more varied selection of phenotypes. Due to faster attainment of favorable genetic combinations, sexually reproducing populations evolve more rapidly in response to environmental changes. Under the Vicar of Bray hypothesis, sex benefits a population as a whole, but not individuals within it, making it a case of group selection.
Disadvantage of sexual reproduction
Sexual reproduction often takes a lot of effort. Finding a mate can sometimes be an expensive, risky and time consuming process. Courtship, copulation and taking care of the new born offspring may also take up a lot of time and energy. From this point of view, asexual reproduction may seem a lot easier and more efficient. But another important thing to co
Document 2:::
In population genetics overlapping generations refers to mating systems where more than one breeding generation is present at any one time. In systems where this is not the case there are non-overlapping generations (or discrete generations) in which every breeding generation lasts just one breeding season. If the adults reproduce over multiple breeding seasons the species is considered to have overlapping generations. Examples of species which have overlapping generations are many mammals, including humans, and many invertebrates in seasonal environments. Examples of species which consist of non-overlapping generations are annual plants and several insect species.
Non-overlapping generations is one of the assumptions of the Hardy–Weinberg model. This is a very restrictive and unrealistic assumption, but one that is difficult to dispose of.
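For concreteness, the prediction the Hardy–Weinberg model makes under its assumptions is a standard result (stated here for context): after a single generation of random mating, genotype frequencies settle into the Hardy–Weinberg proportions. With allele frequencies p and q = 1 - p for alleles A and a,

p^2 + 2pq + q^2 = 1,

where p^2, 2pq, and q^2 are the expected frequencies of the AA, Aa, and aa genotypes, and these remain constant from generation to generation as long as the assumptions hold.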
Overlapping versus non-overlapping generations
In population genetics models, such as the Hardy–Weinberg model, it is assumed that species have no overlapping generations. In nature, however, many species do have overlapping generations; overlapping generations are the norm rather than the exception.
Overlapping generations are found in species that live for many years and reproduce many times. Many birds, for instance, nest every year or every few years, so the offspring, once matured, raise nests of their own while the parent generation may still be breeding. An advantage of overlapping generations lies in the different experience levels of the generations in a population: younger age groups can acquire social information from the older, more experienced age groups. Overlapping generations can similarly promote altruistic behaviour.
Non-overlapping generations are found in species in which the adult generation dies after one breeding season. If a species for instance can only survive
Document 3:::
The mechanisms of reproductive isolation are a collection of evolutionary mechanisms, behaviors and physiological processes critical for speciation. They prevent members of different species from producing offspring, or ensure that any offspring are sterile. These barriers maintain the integrity of a species by reducing gene flow between related species.
The mechanisms of reproductive isolation have been classified in a number of ways. Zoologist Ernst Mayr classified the mechanisms of reproductive isolation in two broad categories: pre-zygotic for those that act before fertilization (or before mating in the case of animals) and post-zygotic for those that act after it. The mechanisms are genetically controlled and can appear in species whose geographic distributions overlap (sympatric speciation) or are separate (allopatric speciation).
Pre-zygotic isolation
Pre-zygotic isolation mechanisms are the most economical in terms of the natural selection of a population, as resources are not wasted on the production of a descendant that is weak, non-viable or sterile. These mechanisms include physiological or systemic barriers to fertilization.
Temporal or habitat isolation
Any of the factors that prevent potentially fertile individuals from meeting will reproductively isolate the members of distinct species. The types of barriers that can cause this isolation include: different habitats, physical barriers, and a difference in the time of sexual maturity or flowering.
An example of the ecological or habitat differences that impede the meeting of potential pairs occurs in two fish species of the family Gasterosteidae (sticklebacks). One species lives all year round in fresh water, mainly in small streams. The other species lives in the sea during winter, but in spring and summer individuals migrate to river estuaries to reproduce. The members of the two populations are reproductively isolated due to their adaptations to distinct salt concentrations.
An example of rep
Document 4:::
Hybrid incompatibility is a phenomenon in plants and animals, wherein offspring produced by the mating of two different species or populations have reduced viability and/or are less able to reproduce. Examples of hybrids include mules and ligers from the animal world, and subspecies of the Asian rice crop Oryza sativa from the plant world. Multiple models have been developed to explain this phenomenon. Recent research suggests that the source of this incompatibility is largely genetic, as combinations of genes and alleles prove lethal to the hybrid organism. Incompatibility is not solely influenced by genetics, however, and can be affected by environmental factors such as temperature. The genetic underpinnings of hybrid incompatibility may provide insight into factors responsible for evolutionary divergence between species.
Background
Hybrid incompatibility occurs when the offspring of two closely related species are not viable or suffer from infertility. Charles Darwin posited that hybrid incompatibility is not a product of natural selection, stating that the phenomenon is an outcome of the hybridizing species diverging, rather than something that is directly acted upon by selective pressures. The underlying causes of the incompatibility can be varied: earlier research focused on things like changes in ploidy in plants. More recent research has taken advantage of improved molecular techniques and has focused on the effects of genes and alleles in the hybrid and its parents.
Dobzhansky-Muller model
The first major breakthrough in the genetic basis of hybrid incompatibility is the Dobzhansky-Muller model, a combination of findings by Theodosius Dobzhansky and Joseph Muller between 1937 and 1942. The model provides an explanation as to why a negative fitness effect like hybrid incompatibility is not selected against. By hypothesizing that the incompatibility arose from alterations at two or more loci, rather than one, the incompatible alleles are in one hybrid in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
If two populations do not mate and produce fertile offspring, what results?
A. large species
B. separate species
C. same species
D. small species
Answer:
|
|
sciq-3029
|
multiple_choice
|
The roots of a plant take in nutrients and what vital substance?
|
[
"water",
"air",
"Soil",
"Ash"
] |
A
|
Relevant Documents:
Document 0:::
Plant nutrition is the study of the chemical elements and compounds necessary for plant growth and reproduction, of plant metabolism, and of their external supply. An element is essential if, in its absence, the plant is unable to complete a normal life cycle, or if the element is part of some essential plant constituent or metabolite. This is in accordance with Justus von Liebig's law of the minimum. The essential plant nutrients include seventeen different elements: carbon, oxygen and hydrogen, which are absorbed from the air, whereas other nutrients, including nitrogen, are typically obtained from the soil (exceptions include some parasitic or carnivorous plants).
Plants must obtain the following mineral nutrients from their growing medium:
the macronutrients: nitrogen (N), phosphorus (P), potassium (K), calcium (Ca), sulfur (S), magnesium (Mg)
the micronutrients (or trace minerals): iron (Fe), boron (B), chlorine (Cl), manganese (Mn), zinc (Zn), copper (Cu), molybdenum (Mo), nickel (Ni)
These elements occur in the soil as salts, so plants absorb them as ions. The macronutrients are taken up in larger quantities; hydrogen, oxygen, nitrogen and carbon contribute over 95% of a plant's entire biomass on a dry-matter weight basis. Micronutrients are present in plant tissue in quantities measured in parts per million, ranging from 0.1 to 200 ppm, or less than 0.02% of dry weight.
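The unit conversion behind that last figure is easy to verify; a short Python sketch (the helper name is ours) confirms that the top of the quoted micronutrient range matches the quoted dry-weight percentage:

def ppm_to_percent(ppm):
    # 1 ppm = 1 part in 1,000,000 = 1e-4 percent
    return ppm / 10_000

print(ppm_to_percent(200))  # 0.02  -> 200 ppm is 0.02% of dry weight
print(ppm_to_percent(0.1))  # 1e-05 -> the bottom of the range is far smaller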
Most soil conditions across the world can provide plants adapted to that climate and soil with sufficient nutrition for a complete life cycle, without the addition of nutrients as fertilizer. However, if the soil is cropped it is necessary to artificially modify soil fertility through the addition of fertilizer to promote vigorous growth and increase or sustain yield. This is done because, even with adequate water and light, nutrient deficiency can limit growth and crop yield.
History
Carbon, hydrogen and oxygen are the basic nutrients plants receive from air and water. Justus von Liebig proved in 1840 tha
Document 1:::
Root vegetables are underground plant parts eaten by humans as food. Although botany distinguishes true roots (such as taproots and tuberous roots) from non-roots (such as bulbs, corms, rhizomes, and tubers, although some contain both hypocotyl and taproot tissue), the term "root vegetable" is applied to all these types in agricultural and culinary usage (see terminology of vegetables).
Root vegetables are generally storage organs, enlarged to store energy in the form of carbohydrates. They differ in the concentration and the balance among starches, sugars, and other types of carbohydrate. Of particular economic importance are those with a high carbohydrate concentration in the form of starch; starchy root vegetables are important staple foods, particularly in tropical regions, overshadowing cereals throughout much of Central Africa, West Africa and Oceania, where they are used directly or mashed to make foods such as fufu or poi.
Many root vegetables keep well in root cellars, lasting several months. This is one way of storing food for use long after harvest, which is especially important in nontropical latitudes, where winter is traditionally a time of little to no harvesting. There are also season extension methods that can extend the harvest throughout the winter, mostly through the use of polytunnels.
List of root vegetables
The following list classifies root vegetables organized by their roots' anatomy.
Modified plant stem
Corm
Amorphophallus konjac (konjac)
Colocasia esculenta (taro)
Eleocharis dulcis (Chinese water chestnut)
Ensete spp. (enset)
Nymphaea spp. (waterlily)
Pteridium esculentum
Sagittaria spp. (arrowhead or wapatoo)
Typha spp.
Xanthosoma spp. (malanga, cocoyam, tannia, yautia and other names)
Colocasia antiquorum (eddoe or Japanese potato)
Bulb
Allium cepa (onion)
Allium sativum (garlic)
Camassia quamash (blue camas)
Foeniculum vulgare (fennel)
Rhizome
Curcuma longa (turmeric)
Panax ginseng (ginseng)
Arthropodium spp. (
Document 2:::
A stem is one of two main structural axes of a vascular plant, the other being the root. It supports leaves, flowers and fruits; transports water and dissolved substances between the roots and the shoots in the xylem and phloem; carries out photosynthesis; stores nutrients; and produces new living tissue. A stem may also be called a halm, haulm or culm.
The stem is normally divided into nodes and internodes:
The nodes are the points of attachment for leaves and can hold one or more leaves. There are sometimes axillary buds between the stem and leaf which can grow into branches (with leaves, conifer cones, or flowers). Adventitious roots may also be produced from the nodes. Vines may produce tendrils from nodes.
The internodes distance one node from another.
The term "shoots" is often confused with "stems"; "shoots" generally refers to new fresh plant growth, including both stems and other structures like leaves or flowers.
In most plants, stems are located above the soil surface, but some plants have underground stems.
Stems have several main functions:
Support for and the elevation of leaves, flowers, and fruits. The stems keep the leaves in the light and provide a place for the plant to keep its flowers and fruits.
Transport of fluids between the roots and the shoots in the xylem and phloem.
Storage of nutrients.
Production of new living tissue. The normal lifespan of plant cells is one to three years. Stems have cells called meristems that annually generate new living tissue.
Photosynthesis.
Stems have two pipe-like tissues called xylem and phloem. The xylem tissue lies toward the inside and transports water by the action of transpiration pull, capillary action, and root pressure. The phloem tissue lies toward the outside and consists of sieve tubes and their companion cells. The function of phloem tissue is to distribute food from photosynthetic tissue to other tissues. The two tissues are separated by cambium, a tis
Document 3:::
Root hairs, or absorbent hairs, are outgrowths of epidermal cells, specialized cells at the tip of a plant root. They are lateral extensions of a single cell and are only rarely branched. They are found in the root's region of maturation. Root hair cells improve plant water absorption by increasing the root's surface-area-to-volume ratio, which allows the root hair cell to take in more water. The large vacuole inside root hair cells makes this intake much more efficient. Root hairs are also important for nutrient uptake, as they are the main interface between plants and mycorrhizal fungi.
Function
The function of all root hairs is to collect water and mineral nutrients from the soil to be sent throughout the plant. In roots, most water absorption happens through the root hairs. The length of root hairs allows them to penetrate between soil particles and prevents harmful bacterial organisms from entering the plant through the xylem vessels. Increasing the surface area of these hairs makes plants more efficient at absorbing nutrients and interacting with microbes. Root hair cells do not contain chloroplasts and therefore do not carry out photosynthesis.
Importance
Root hairs form an important surface as they are needed to absorb most of the water and nutrients needed for the plant. They are also directly involved in the formation of root nodules in legume plants. The root hairs curl around the bacteria, which allows for the formation of an infection thread into the dividing cortical cells to form the nodule.
Because of their large surface area, root hairs take up water and minerals with high efficiency. Root hair cells also secrete acids (e.g., malic and citric acid), which solubilize minerals by changing their oxidation state, making the ions easier to absorb.
Formation
Root hair cells vary between 15 and 17 micrometers in diameter, and 80 and 1,500 micrometers in length. Root hairs are found only in the zone of maturation, also called the zone of differentiation.
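With these dimensions, the surface-area-to-volume argument from the Function section can be made quantitative. For a long thin cylinder the ratio of lateral surface area to volume reduces to 2/r, so narrow hairs offer far more absorptive surface per unit volume than a thick root axis; a rough Python sketch (end caps ignored, and the comparison radius is illustrative):

import math

def sa_to_volume(radius_um, length_um):
    lateral_area = 2 * math.pi * radius_um * length_um
    volume = math.pi * radius_um ** 2 * length_um
    return lateral_area / volume  # simplifies to 2 / radius_um

print(sa_to_volume(8, 1000))    # 0.25 per um: a hair ~16 um across
print(sa_to_volume(500, 1000))  # 0.004 per um: a root axis ~1 mm across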
Document 4:::
A dimorphic root system is a plant root system with two distinct root forms, which are adapted to perform different functions. One of the most common manifestations is in plants with both a taproot, which grows straight down to the water table, from which it obtains water for the plant, and a system of lateral roots, which obtain nutrients from superficial soil layers near the surface. Many plants with dimorphic root systems adapt to the levels of rainfall in the surrounding area, growing many surface roots when there is heavy rainfall and relying on the taproot when rain is scarce. Because of this adaptability to local water levels, most plants with dimorphic root systems live in arid climates with alternating wet and dry periods.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The roots of a plant take in nutrients and what vital substance?
A. water
B. air
C. Soil
D. Ash
Answer:
|
|
sciq-7582
|
multiple_choice
|
What is the most common way to classify stars?
|
[
"by color",
"by size",
"by age",
"by distance"
] |
A
|
Relevant Documents:
Document 0:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
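The union-closure property behind the antimatroid claim is easy to check computationally; a minimal Python sketch (the skill names and the example families are hypothetical):

from itertools import combinations

def is_union_closed_space(states):
    # A knowledge space must contain the empty state, and the union of
    # any two feasible states must itself be feasible.
    family = {frozenset(s) for s in states}
    if frozenset() not in family:
        return False
    return all(a | b in family for a, b in combinations(family, 2))

# Skills: "count", "add", "mult" -- "count" is a prerequisite for the rest.
good = [set(), {"count"}, {"count", "add"}, {"count", "mult"},
        {"count", "add", "mult"}]
bad = [set(), {"add"}, {"mult"}]  # the union {"add", "mult"} is missing

print(is_union_closed_space(good))  # True
print(is_union_closed_space(bad))   # False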
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of all feasible knowledge states forms the knowledge space.
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
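For reference, the physics this question probes can be written out in two lines, assuming the expanding gas does work on its surroundings. With no heat exchanged (Q = 0), the first law of thermodynamics gives

\Delta U = Q - W = -W < 0,

and because the internal energy of an ideal gas depends only on temperature (U = n C_V T), the temperature must decrease. (In the special case of free expansion into a vacuum, W = 0 and the temperature is unchanged; distinguishing these cases is exactly the kind of understanding such a question is designed to test.)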
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
The Mathematics Subject Classification (MSC) is an alphanumerical classification scheme that has collaboratively been produced by staff of, and based on the coverage of, the two major mathematical reviewing databases, Mathematical Reviews and Zentralblatt MATH. The MSC is used by many mathematics journals, which ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The current version is MSC2020.
Structure
The MSC is a hierarchical scheme, with three levels of structure. A classification can be two, three or five characters long, depending on how many levels of the classification scheme are used.
The first level is represented by a two-digit number, the second by a letter, and the third by another two-digit number. For example:
53 is the classification for differential geometry
53A is the classification for classical differential geometry
53A45 is the classification for vector and tensor analysis
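The code format illustrated above is regular enough to split mechanically; a small Python sketch (the function name is ours, and the pattern covers only the basic digit/letter/digit shape shown here, not every special form the scheme allows):

import re

def parse_msc(code):
    # Levels: two-digit discipline, optional letter area, optional two-digit entry.
    m = re.fullmatch(r"(\d{2})([A-Z])?(\d{2})?", code)
    if m is None:
        raise ValueError(f"not a basic MSC code: {code!r}")
    discipline, area, entry = m.groups()
    return {"discipline": discipline, "area": area, "entry": entry}

print(parse_msc("53"))     # {'discipline': '53', 'area': None, 'entry': None}
print(parse_msc("53A"))    # {'discipline': '53', 'area': 'A', 'entry': None}
print(parse_msc("53A45"))  # {'discipline': '53', 'area': 'A', 'entry': '45'}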
First level
At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. In addition to the typical areas of mathematical research, there are top-level categories for "History and Biography", "Mathematics Education", and for the overlap with different sciences. Physics (i.e. mathematical physics) is particularly well represented in the classification scheme with a number of different categories including:
Fluid mechanics
Quantum mechanics
Geophysics
Optics and electromagnetic theory
All valid MSC classification codes must have at least the first-level identifier.
Second level
The second-level codes are a single letter from the Latin alphabet. These represent specific areas covered by the first-level discipline. The second-level codes vary from discipline to discipline.
For example, for differential geometry, the top-level code is 53, and the second-level codes are:
A for classical differential geometry
B for local differential geometry
C for glo
Document 3:::
A color–color diagram is a means of comparing the colors of an astronomical object at different wavelengths. Astronomers typically observe at narrow bands around certain wavelengths, and objects observed will have different brightnesses in each band. The difference in brightness between two bands is referred to as color. On color–color diagrams, the color defined by two wavelength bands is plotted on the horizontal axis, and the color defined by another brightness difference will be plotted on the vertical axis.
Background
Although stars are not perfect blackbodies, to first order the spectra of light emitted by stars conform closely to a black-body radiation curve, also sometimes referred to as a thermal radiation curve. The overall shape of a black-body curve is uniquely determined by its temperature, and the wavelength of peak intensity is inversely proportional to temperature, a relation known as Wien's displacement law. Thus, observation of a stellar spectrum allows determination of its effective temperature. Obtaining complete spectra for stars through spectrometry is much more involved than simple photometry in a few bands. By comparing the magnitudes of the star in multiple different color indices, the effective temperature of the star can still be determined, since the magnitude differences between bands are unique for that temperature. As such, color-color diagrams can be used as a means of representing the stellar population, much like a Hertzsprung–Russell diagram, and stars of different spectral classes inhabit different parts of the diagram. This feature leads to applications within various wavelength bands.
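Wien's displacement law, cited above, is simple enough to evaluate directly, which is part of why temperature estimates map so cleanly onto colors; a quick Python sketch (the constant is the CODATA value of Wien's displacement constant, and the helper name is ours):

WIEN_B = 2.897771955e-3  # Wien's displacement constant, m*K

def peak_wavelength_nm(temperature_k):
    # lambda_peak = b / T, converted from metres to nanometres
    return WIEN_B / temperature_k * 1e9

print(peak_wavelength_nm(5772))   # ~502 nm: the Sun peaks in the green
print(peak_wavelength_nm(10000))  # ~290 nm: a hotter star peaks in the UV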
In the stellar locus, stars tend to align along a more or less straight feature. If stars were perfect black bodies, the stellar locus would indeed be a pure straight line. Deviations from the straight line are due to absorption and emission lines in the stellar spectra. These deviations can be more or less evident depending
Document 4:::
Starspots are stellar phenomena, so-named by analogy with sunspots.
Spots as small as sunspots have not been detected on other stars, as they would cause undetectably small fluctuations in brightness. The commonly observed starspots are in general much larger than those on the Sun: up to about 30% of the stellar surface may be covered, corresponding to starspots 100 times larger than those on the Sun.
Detection and measurements
To detect and measure the extent of starspots one uses several types of methods.
For rapidly rotating stars – Doppler imaging and Zeeman-Doppler imaging. With the Zeeman-Doppler imaging technique the direction of the magnetic field on stars can be determined since spectral lines are split according to the Zeeman effect, revealing the direction and magnitude of the field.
For slowly rotating stars – Line Depth Ratio (LDR). Here one measures two different spectral lines, one sensitive to temperature and one which is not. Since starspots have a lower temperature than their surroundings, the temperature-sensitive line changes its depth. From the difference between these two lines the temperature and size of the spot can be calculated, with a temperature accuracy of 10 K.
For eclipsing binary stars – Eclipse mapping produces images and maps of spots on both stars.
For giant binary stars - Very-long-baseline interferometry
For stars with transiting extrasolar planets – Light curve variations.
Temperature
Observed starspots have a temperature which is in general 500–2000 kelvin cooler than the stellar photosphere. This temperature difference can give rise to a brightness variation of up to 0.6 magnitudes between the spot and the surrounding surface. There also seems to be a relation between the spot temperature and the temperature of the stellar photosphere, indicating that starspots behave similarly across different types of stars (observed in G–K dwarfs).
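The quoted brightness variation follows from black-body scaling: surface flux goes as the fourth power of temperature, and a magnitude difference is -2.5 log10 of a flux ratio. A rough Python sketch (the coverage fractions and temperatures are illustrative, not taken from the text):

import math

def spot_dimming_mag(coverage, t_spot, t_phot):
    # Disk-averaged flux with a fraction `coverage` at the cooler spot temperature.
    flux_ratio = (1 - coverage) + coverage * (t_spot / t_phot) ** 4
    return -2.5 * math.log10(flux_ratio)

print(spot_dimming_mag(0.30, 4000, 5800))  # ~0.29 mag for 30% coverage
print(spot_dimming_mag(0.45, 3800, 5800))  # ~0.50 mag: more/cooler spots dim more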
Lifetimes
The lifetime for a starspot depends on its size.
For small spots the lifetim
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the most common way to classify stars?
A. by color
B. by size
C. by age
D. by distance
Answer:
|
|
ai2_arc-912
|
multiple_choice
|
The outer layers of astronauts' space suits are reflective in order to protect them from
|
[
"the vacuum of space.",
"intense sunlight.",
"micrometeoroids.",
"water loss."
] |
B
|
Relevant Documents:
Document 0:::
Mission specialist (MS) was a specific position held by certain NASA astronauts who were tasked with conducting a range of scientific, medical, or engineering experiments during a spaceflight mission. These specialists were usually assigned to a specific field of expertise that was related to the goals of the particular mission they were assigned to.
Mission specialists were highly trained individuals who underwent extensive training in preparation for their missions. They were required to have a broad range of skills, including knowledge of science and engineering, as well as experience in operating complex equipment in a zero-gravity environment.
During a mission, mission specialists were responsible for conducting experiments, operating equipment, and performing spacewalks to repair or maintain equipment outside the spacecraft. They also played a critical role in ensuring the safety of the crew by monitoring the spacecraft's systems and responding to emergencies as needed.
The role of mission specialist was an important one in the Space Shuttle program, as they were instrumental in the success of the program's many scientific and engineering missions. Many of the advances in science and technology that were made during this period were made possible by the hard work and dedication of the mission specialists who worked tirelessly to push the boundaries of what was possible in space.
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college careers at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
Bioastronautics is a specialty area of biological and astronautical research which encompasses numerous aspects of biological, behavioral, and medical concern governing humans and other living organisms in a space flight environment; and includes design of payloads, space habitats, and life-support systems. In short, it spans the study and support of life in space.
Bioastronautics shares many similarities with its sister discipline, astronautical hygiene; both study the hazards that humans may encounter during spaceflight. Astronautical hygiene differs in many respects, however: in that discipline, once a hazard is identified, the exposure risks are assessed and the most effective measures determined to prevent or control exposure and thereby protect the health of the astronaut. Astronautical hygiene is an applied scientific discipline that requires knowledge and experience of many fields, including bioastronautics, space medicine, and ergonomics. Its skills are already being applied, for example, to characterise Moon dust and design measures to mitigate exposure during lunar exploration, and to develop accurate chemical monitoring techniques whose results feed into the setting of SMACs.
Of particular interest from a biological perspective are the effects of reduced gravitational force felt by inhabitants of spacecraft. Often referred to as "microgravity", the lack of sedimentation, buoyancy, or convective flows in fluids results in a more quiescent cellular and intercellular environment primarily driven by chemical gradients. Certain functions of organisms are mediated by gravity, such as gravitropism in plant roots and negative gravitropism in plant stems, and without this stimulus growth patterns of organisms onboard spacecraft often diverge from their terrestrial counterparts. Additionally, metabolic energy normally expended in overcoming the force of gravity remains available for other functions. This may take
Document 3:::
Biolab (Biological Experiment Laboratory) is a single-rack multi-user science payload designed for use in the Columbus laboratory of the International Space Station. Biolab supports biological research on small plants, small invertebrates, microorganisms, animal cells, and tissue cultures. It includes an incubator equipped with centrifuges in which these experimental subjects can be subjected to controlled levels of acceleration.
These experiments help to identify "the role that microgravity plays at all levels of an organism, from the effects on a single cell up to a complex organism including humans."
Description
Summary:
BioLab provides an on-orbit biology laboratory that enables scientists to study the effects of microgravity and space radiation on unicellular and multicellular organisms, including bacteria, insects, protists (simple eukaryotic organisms), seeds, and cells.
The BioLab facility includes an incubator, microscope, spectrophotometer (instrument used to measure the spectrum of light absorbed by a sample), and two centrifuges to provide artificial gravity. BioLab allows researchers to illuminate and observe individual experiment containers (ECs), and BioLab's life support system can regulate the content of the atmosphere (including humidity).
BioLab is integrated into a single International Standard Payload Rack (ISPR) within the European Columbus laboratory, which was launched on space shuttle mission STS-122.
Results from BioLab experiments could affect biomedical research in areas such as immunology, pharmacology, bone demineralization, cellular signal transduction (the processing of electrochemical stimuli in cells), cellular repair, and biotechnology.
The BioLab facility, which has been integrated into a single International Standard Payload Rack (ISPR) in the European Columbus laboratory, is divided into two sections: the automated section, or core unit, and the manual section, designed for crew interaction with the experiments. The
Document 4:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of all feasible knowledge states forms the knowledge space.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The outer layers of astronauts' space suits are reflective in order to protect them from
A. the vacuum of space.
B. intense sunlight.
C. micrometeoroids.
D. water loss.
Answer:
|
|
sciq-6958
|
multiple_choice
|
What process typically occurs to metal exposed to outside elements?
|
[
"shrinkage",
"extraction",
"corrosion",
"explosion"
] |
C
|
Relevant Documents:
Document 0:::
Material is a substance or mixture of substances that constitutes an object. Materials can be pure or impure, living or non-living matter. Materials can be classified on the basis of their physical and chemical properties, or on their geological origin or biological function. Materials science is the study of materials, their properties and their applications.
Raw materials can be processed in different ways to influence their properties, by purification, shaping or the introduction of other materials. New materials can be produced from raw materials by synthesis.
In industry, materials are inputs to manufacturing processes to produce products or more complex materials.
Historical elements
Materials chart the history of humanity. The system of the three prehistoric ages (Stone Age, Bronze Age, Iron Age) was succeeded by historical ages: the steel age in the 19th century, the polymer age in the middle of the following century (the plastic age) and the silicon age in the second half of the 20th century.
Classification by use
Materials can be broadly categorized in terms of their use, for example:
Building materials are used for construction
Building insulation materials are used to retain heat within buildings
Refractory materials are used for high-temperature applications
Nuclear materials are used for nuclear power and weapons
Aerospace materials are used in aircraft and other aerospace applications
Biomaterials are used for applications interacting with living systems
Material selection is a process to determine which material should be used for a given application.
Classification by structure
The relevant structure of materials has a different length scale depending on the material. The structure and composition of a material can be determined by microscopy or spectroscopy.
Microstructure
In engineering, materials can be categorised according to their microscopic structure:
Plastics: a wide range of synthetic or semi-synthetic materials that use polymers as a main ingred
Document 1:::
In metallurgy and materials science, annealing is a heat treatment that alters the physical and sometimes chemical properties of a material to increase its ductility and reduce its hardness, making it more workable. It involves heating a material above its recrystallization temperature, maintaining a suitable temperature for an appropriate amount of time and then cooling.
In annealing, atoms migrate in the crystal lattice and the number of dislocations decreases, leading to a change in ductility and hardness. As the material cools it recrystallizes. For many alloys, including carbon steel, the crystal grain size and phase composition, which ultimately determine the material properties, are dependent on the heating rate and cooling rate. Hot working or cold working after the annealing process alters the metal structure, so further heat treatments may be used to achieve the properties required. With knowledge of the composition and phase diagram, heat treatment can be used to adjust from harder and more brittle to softer and more ductile.
In the case of ferrous metals, such as steel, annealing is performed by heating the material (generally until glowing) for a while and then slowly letting it cool to room temperature in still air. Copper, silver and brass can be either cooled slowly in air, or quickly by quenching in water. In this fashion, the metal is softened and prepared for further work such as shaping, stamping, or forming.
Many other materials, including glass and plastic films, use annealing to improve the finished properties.
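As the Thermodynamics section below notes, annealing proceeds by the diffusion of atoms; the temperature dependence that makes heating effective is commonly summarized by an Arrhenius-type law for the diffusion coefficient (a standard relation in diffusion kinetics, stated here for context rather than drawn from this article):

D = D_0 \exp(-Q / (RT)),

where D_0 is a material constant, Q the activation energy for diffusion, R the gas constant, and T the absolute temperature. Because D grows exponentially with T, holding a metal above its recrystallization temperature lets atoms and dislocations rearrange on practical timescales.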
Thermodynamics
Annealing occurs by the diffusion of atoms within a solid material, so that the material progresses towards its equilibrium state. Heat increases the rate of diffusion by providing the energy needed to break bonds. The movement of atoms has the effect of redistributing and eradicating the dislocations in metals and (to a lesser extent) in ceramics. This alteration to existing dislocations allows a metal object to def
Document 2:::
Physical metallurgy is one of the two main branches of the scientific approach to metallurgy, which considers in a systematic way the physical properties of metals and alloys. It is, in essence, the fundamentals and applications of the theory of phase transformations in metals and alloys, as reflected in the title of the classic, challenging monograph on the subject. While chemical metallurgy involves the domain of reduction/oxidation of metals, physical metallurgy deals mainly with the mechanical and magnetic/electric/thermal properties of metals, treated by the discipline of solid-state physics. The Calphad methodology, which produces the phase diagrams that are the basis for evaluating or estimating the physical properties of metals, relies on computational thermodynamics, i.e. on chemical thermodynamics, and can be considered a field common and useful to both sub-disciplines.
See also
Extractive metallurgy
Metallurgical (and Materials) Transactions, a peer-reviewed journal covering physical metallurgy and materials science
Scientific journals
Metallurgical and Materials Transactions A – open access articles
Metallurgical and Materials Transactions B – open access articles
Acta Materialia – open access articles
Journal of Alloys and Compounds – open access articles
External links
MIT Ocw (MIT OpenCourseWare) Course on Physical Metallurgy
The classic, extensive single-authored book on the subject
A concise, yet not simplified single authored textbook on Physical Metallurgy
A series of Lectures by Prof. "Harry" Harshad Bhadeshia, University of Cambridge on the Physical Metallurgy of Steels
Additional teaching materials by Prof. "Harry" Harshad Bhadeshia, University of Cambridge, at the Phase Transformations & Complex
Materials science
Metallurgy
Document 3:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but cannot usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 4:::
Metalworking is the process of shaping and reshaping metals to create useful objects, parts, assemblies, and large scale structures. As a term it covers a wide and diverse range of processes, skills, and tools for producing objects on every scale: from huge ships, buildings, and bridges down to precise engine parts and delicate jewelry.
The historical roots of metalworking predate recorded history; its use spans cultures, civilizations and millennia. It has evolved from shaping soft, native metals like gold with simple hand tools, through the smelting of ores and hot forging of harder metals like iron, up to highly technical modern processes such as machining and welding. It has been used as an industry, a driver of trade, individual hobbies, and in the creation of art; it can be regarded as both a science and a craft.
Modern metalworking processes, though diverse and specialized, can be categorized into one of three broad areas known as forming, cutting, or joining processes. Modern metalworking workshops, typically known as machine shops, hold a wide variety of specialized or general-use machine tools capable of creating highly precise, useful products. Many simpler metalworking techniques, such as blacksmithing, are no longer economically competitive on a large scale in developed countries; some of them are still in use in less developed countries, for artisanal or hobby work, or for historical reenactment.
Prehistory
The oldest archaeological evidence of copper mining and working was the discovery of a copper pendant in northern Iraq from 8,700 BCE. The earliest substantiated and dated evidence of metalworking in the Americas was the processing of copper in Wisconsin, near Lake Michigan. Copper was hammered until it became brittle, then heated so it could be worked further. In America, this technology is dated to about 4000–5000 BCE. The oldest gold artifacts in the world come from the Bulgarian Varna Necropolis and date from 4450 BCE.
Not all metal required
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What process typically occurs to metal exposed to outside elements?
A. shrinkage
B. extraction
C. corrosion
D. explosion
Answer:
|
|
sciq-10079
|
multiple_choice
|
Polychaetes make up a large and diverse group. which category do the majority of these worms fall into?
|
[
"marine",
"carnivorous",
"terrestrial",
"amphibian"
] |
A
|
Relevant Documents:
Document 0:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad
Document 1:::
A polyp in zoology is one of two forms found in the phylum Cnidaria, the other being the medusa. Polyps are roughly cylindrical in shape and elongated at the axis of the vase-shaped body. In solitary polyps, the aboral (opposite to oral) end is attached to the substrate by means of a disc-like holdfast called a pedal disc, while in colonies of polyps it is connected to other polyps, either directly or indirectly. The oral end contains the mouth, and is surrounded by a circlet of tentacles.
Classes
In the class Anthozoa, comprising the sea anemones and corals, the individual is always a polyp; in the class Hydrozoa, however, the individual may be either a polyp or a medusa, with most species undergoing a life cycle with both a polyp stage and a medusa stage. In class Scyphozoa, the medusa stage is dominant, and the polyp stage may or may not be present, depending on the family. In those scyphozoans that have the larval planula metamorphose into a polyp, the polyp, also called a "scyphistoma," grows until it develops a stack of plate-like medusae that pinch off and swim away in a process known as strobilation. Once strobilation is complete, the polyp may die, or regenerate itself to repeat the process again later. With Cubozoans, the planula settles onto a suitable surface, and develops into a polyp. The cubozoan polyp then eventually metamorphoses directly into a Medusa.
Anatomy
The body of the polyp may be roughly compared in a structure to a sac, the wall of which is composed of two layers of cells. The outer layer is known technically as the ectoderm, the inner layer as the endoderm (or gastroderm). Between ectoderm and endoderm is a supporting layer of structureless gelatinous substance termed mesoglea, secreted by the cell layers of the body wall. The mesoglea can be thinner than the endoderm or ectoderm or comprise the bulk of the body as in larger jellyfish. The mesoglea can contain skeletal elements derived from cells migrated from ectoderm.
Th
Document 2:::
The nematodes, also called roundworms or eelworms, constitute the phylum Nematoda. They are a diverse animal phylum inhabiting a broad range of environments. Most species are free-living, feeding on microorganisms, but there are many that are parasitic. The parasitic worms (helminths) are the cause of soil-transmitted helminthiases.
They are taxonomically classified along with arthropods, tardigrades and other moulting animals in the clade Ecdysozoa. Unlike the vaguely similar flatworms, nematodes have a tubular digestive system, with openings at both ends. Like tardigrades, they have a reduced number of Hox genes, but their sister phylum Nematomorpha has kept the ancestral protostome Hox genotype, which shows that the reduction has occurred within the nematode phylum.
Nematode species can be difficult to distinguish from one another. Consequently, estimates of the number of nematode species are uncertain. A 2013 survey of animal biodiversity published in the mega journal Zootaxa puts this figure at over 25,000. Estimates of the total number of extant species are subject to even greater variation. A widely referenced article published in 1993 estimated there may be over 1 million species of nematode. A subsequent publication challenged this claim, estimating the figure to be at least 40,000 species. Although the highest estimates (up to 100 million species) have since been deprecated, estimates supported by rarefaction curves, together with the use of DNA barcoding and the increasing acknowledgment of widespread cryptic species among nematodes, have placed the figure closer to 1 million species.
Nematodes have successfully adapted to nearly every ecosystem: from marine (salt) to fresh water, soils, from the polar regions to the tropics, as well as the highest to the lowest of elevations. They are ubiquitous in freshwater, marine, and terrestrial environments, where they often outnumber other animals in both individual and species counts, and are found in locations
Document 3:::
Endoparasites
Protozoan organisms
Helminths (worms)
Helminth organisms (also called helminths or intestinal worms) include:
Tapeworms
Flukes
Roundworms
Other organisms
Ectoparasites
Document 4:::
Worms are many distantly related bilateral animals that typically have a long cylindrical tube-like body, no limbs, and (usually) no eyes.
Worms vary in size from microscopic to very large: marine polychaete worms (bristle worms), the African giant earthworm, Microchaetus rappi, and the marine nemertean worm (bootlace worm), Lineus longissimus, can all exceed a metre in length. Various types of worm occupy a small variety of parasitic niches, living inside the bodies of other animals. Free-living worm species do not live on land but instead live in marine or freshwater environments or underground by burrowing.
In biology, "worm" refers to an obsolete taxon, vermes, used by Carolus Linnaeus and Jean-Baptiste Lamarck for all non-arthropod invertebrate animals, now seen to be paraphyletic. The name stems from the Old English word wyrm. Most animals called "worms" are invertebrates, but the term is also used for the amphibian caecilians and the slowworm Anguis, a legless burrowing lizard. Invertebrate animals commonly called "worms" include annelids (earthworms and marine polychaete or bristle worms), nematodes (roundworms), platyhelminthes (flatworms), marine nemertean worms ("bootlace worms"), marine Chaetognatha (arrow worms), priapulid worms, and insect larvae such as grubs and maggots.
Worms may also be called helminths—particularly in medical terminology—when referring to parasitic worms, especially the Nematoda (roundworms) and Cestoda (tapeworms) which reside in the intestines of their host. When an animal or human is said to "have worms", it means that it is infested with parasitic worms, typically roundworms or tapeworms. Lungworm is also a common parasitic worm found in various animal species such as fish and cats.
History
In taxonomy, "worm" refers to an obsolete grouping, Vermes, used by Carl Linnaeus and Jean-Baptiste Lamarck for all non-arthropod invertebrate animals, now seen to be polyphyletic. In 1758, Linnaeus created the first hierarchical
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Polychaetes make up a large and diverse group. which category do the majority of these worms fall into?
A. marine
B. carnivorous
C. terrestrial
D. amphibian
Answer:
|
|
sciq-2756
|
multiple_choice
|
What common ailment is typically caused by tense muscles in the shoulders, head and neck?
|
[
"fever",
"pollution",
"headache",
"disturbances"
] |
C
|
Relevant Documents:
Document 0:::
Cluster headache (CH) is a neurological disorder characterized by recurrent severe headaches on one side of the head, typically around the eye(s). There is often accompanying eye watering, nasal congestion, or swelling around the eye on the affected side. These symptoms typically last 15 minutes to 3 hours. Attacks often occur in clusters which typically last for weeks or months and occasionally more than a year.
The cause is unknown. Risk factors include a history of exposure to tobacco smoke and a family history of the condition. Exposures which may trigger attacks include alcohol, nitroglycerin, and histamine. They are a primary headache disorder of the trigeminal autonomic cephalalgias type. Diagnosis is based on symptoms.
Recommended management includes lifestyle adaptations such as avoiding potential triggers. Treatments for acute attacks include oxygen or a fast-acting triptan. Measures recommended to decrease the frequency of attacks include steroid injections, civamide, or verapamil. Nerve stimulation or surgery may occasionally be used if other measures are not effective.
The condition affects about 0.1% of the general population at some point in their life and 0.05% in any given year. The condition usually first occurs between 20 and 40 years of age. Men are affected about four times more often than women. Cluster headaches are named for the occurrence of groups of headache attacks (clusters). They have also been referred to as "suicide headaches".
Signs and symptoms
Cluster headaches are recurring bouts of severe unilateral headache attacks. The duration of a typical CH attack ranges from about 15 to 180 minutes. About 75% of untreated attacks last less than 60 minutes. However, women may have longer and more severe CH.
The onset of an attack is rapid and typically without an aura. Preliminary sensations of pain in the general area of attack, referred to as "shadows", may signal an imminent CH, or these symptoms may linger after an attack has passed
Document 1:::
The posterior triangle (or lateral cervical region) is a region of the neck.
Boundaries
The posterior triangle has the following boundaries:
Apex: Union of the sternocleidomastoid and the trapezius muscles at the superior nuchal line of the occipital bone
Anteriorly: Posterior border of the sternocleidomastoideus
Posteriorly: Anterior border of the trapezius
Inferiorly: Middle one third of the clavicle
Roof: Investing layer of the deep cervical fascia
Floor: (From superior to inferior)
1) M. semispinalis capitis
2) M. splenius capitis
3) M. levator scapulae
4) M. scalenus posterior
5) M. scalenus medius
Divisions
The posterior triangle is crossed, about 2.5 cm above the clavicle, by the inferior belly of the omohyoid muscle, which divides the space into two triangles:
an upper or occipital triangle
a lower or subclavian triangle (or supraclavicular triangle)
Contents
A) Nerves and plexuses:
Spinal accessory nerve (Cranial Nerve XI)
Branches of cervical plexus
Roots and trunks of brachial plexus
Phrenic nerve (C3,4,5)
B) Vessels:
Subclavian artery (Third part)
Transverse cervical artery
Suprascapular artery
Terminal part of external jugular vein
C) Lymph nodes:
Occipital
Supraclavicular
D) Muscles:
Inferior belly of omohyoid muscle
Anterior Scalene
Middle Scalene
Posterior Scalene
Levator Scapulae Muscle
Splenius
Clinical significance
The accessory nerve (CN XI) is particularly vulnerable to damage during lymph node biopsy. Damage results in an inability to shrug the shoulders or raise the arm above the head, particularly due to compromised trapezius muscle innervation.
The external jugular vein's superficial location within the posterior triangle also makes it vulnerable to injury.
See also
Anterior triangle of the neck
Document 2:::
Myofascial pain syndrome (MPS), also known as chronic myofascial pain (CMP), is a syndrome characterized by chronic pain in multiple myofascial trigger points ("knots") and fascial (connective tissue) constrictions. It can appear in any body part. Symptoms of a myofascial trigger point include: focal point tenderness, reproduction of pain upon trigger point palpation, hardening of the muscle upon trigger point palpation, pseudo-weakness of the involved muscle, referred pain, and limited range of motion following approximately 5 seconds of sustained trigger point pressure.
The cause is believed to be muscle tension or spasms within the affected musculature. Diagnosis is based on the symptoms and possible sleep studies.
Treatment may include pain medication, physical therapy, mouth guards, and occasionally benzodiazepine. It is a relatively common cause of temporomandibular pain.
Signs and symptoms
Primary symptoms include:
Localized muscle pain
Trigger points that activate the pain (MTrPs)
Generally speaking, the muscular pain is steady, aching, and deep. Depending on the case and location the intensity can range from mild discomfort to excruciating and "lightning-like". Knots may be visible or felt beneath the skin. The pain does not resolve on its own, even after typical first-aid self-care such as ice, heat, and rest. Electromyography (EMG) has been used to identify abnormal motor neuron activity in the affected region.
A physical exam usually reveals palpable trigger points in affected muscles and taut bands corresponding to the contracted muscles. The trigger points are exquisitely tender spots on the taut bands.
Causes
The causes of MPS are not fully documented or understood. At least one study rules out trigger points: "The theory of myofascial pain syndrome (MPS) caused by trigger points (TrPs) ... has been refuted. This is not to deny the existence of the clinical phenomena themselves, for which scientifically sound and logically plausible explanat
Document 3:::
Neuromuscular medicine is a subspecialty of neurology and physiatry that focuses on the diagnosis and management of neuromuscular diseases, including rehabilitation interventions to optimize the quality of life of individuals with these conditions. The field covers disorders that affect both adults and children and that can be inherited or acquired, the latter typically from an autoimmune disease. A neurologist or physiatrist can diagnose these diseases through a clinical history, examination, and electromyography including nerve conduction studies. Many recent drug therapies have been developed to address the acquired neuromuscular diseases, including but not limited to immune suppression and drugs that increase neurotransmitter levels at the neuromuscular junction. Gene-modifying therapies are also a recent branch of neuromuscular medicine, with advances in disorders such as spinal muscular atrophy and Duchenne muscular dystrophy.
See also
List of neuromuscular disorders
Muscle
Motor neuron diseases
Document 4:::
The splenius capitis is a broad, straplike muscle in the back of the neck. It pulls on the base of the skull from the vertebrae in the neck and upper thorax. It is involved in movements such as shaking the head.
Structure
It arises from the lower half of the nuchal ligament, from the spinous process of the seventh cervical vertebra, and from the spinous processes of the upper three or four thoracic vertebrae.
The fibers of the muscle are directed upward and laterally and are inserted, under cover of the sternocleidomastoideus, into the mastoid process of the temporal bone, and into the rough surface on the occipital bone just below the lateral third of the superior nuchal line. The splenius capitis is deep to sternocleidomastoideus at the mastoid process, and to the trapezius for its lower portion. It is one of the muscles that forms the floor of the posterior triangle of the neck.
The splenius capitis muscle is innervated by the posterior ramus of spinal nerves C3 and C4.
Function
The splenius capitis muscle is a prime mover for head extension. The splenius capitis can also allow lateral flexion and rotation of the cervical spine.
Additional images
See also
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What common ailment is typically caused by tense muscles in the shoulders, head and neck?
A. fever
B. pollution
C. headache
D. disturbances
Answer:
|
|
sciq-1971
|
multiple_choice
|
Organisms changing over time is called?
|
[
"variation",
"heterogenicity",
"evolution",
"generation"
] |
C
|
Relavent Documents:
Document 0:::
Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology.
The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis.
Subfields
Evolution is the central unifying concept in biology. Biology can be divided in various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution
Document 1:::
The theory of facilitated variation demonstrates how seemingly complex biological systems can arise through a limited number of regulatory genetic changes, through the differential re-use of pre-existing developmental components. The theory was presented in 2005 by Marc W. Kirschner (a professor and chair at the Department of Systems Biology, Harvard Medical School) and John C. Gerhart (a professor at the Graduate School, University of California, Berkeley).
The theory of facilitated variation addresses the nature and function of phenotypic variation in evolution. Recent advances in cellular and evolutionary developmental biology shed light on a number of mechanisms for generating novelty. Most anatomical and physiological traits that have evolved since the Cambrian are, according to Kirschner and Gerhart, the result of regulatory changes in the usage of various conserved core components that function in development and physiology. Novel traits arise as novel packages of modular core components, which requires modest genetic change in regulatory elements. The modularity and adaptability of developmental systems reduces the number of regulatory changes needed to generate adaptive phenotypic variation, increases the probability that genetic mutation will be viable, and allows organisms to respond flexibly to novel environments. In this manner, the conserved core processes facilitate the generation of adaptive phenotypic variation, which natural selection subsequently propagates.
Description of the theory
The theory of facilitated variation consists of several elements. Organisms are built from a set of highly conserved modules called "core processes" that function in development and physiology, and have remained largely unchanged for millions (in some instances billions) of years. Genetic mutation leads to regulatory changes in the package of core components (i.e. new combinations, amounts, and functional states of those components) exhibited by an organism. Finall
Document 2:::
In biology, evolution is the process of change in all forms of life over generations, and evolutionary biology is the study of how evolution occurs. Biological populations evolve through genetic changes that correspond to changes in the organisms' observable traits. Genetic changes include mutations, which are caused by damage or replication errors in organisms' DNA. As the genetic variation of a population drifts randomly over generations, natural selection gradually leads traits to become more or less common based on the relative reproductive success of organisms with those traits.
The age of the Earth is about 4.5 billion years. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago. Evolution does not attempt to explain the origin of life (covered instead by abiogenesis), but it does explain how early lifeforms evolved into the complex ecosystem that we see today. Based on the similarities between all present-day organisms, all life on Earth is assumed to have originated through common descent from a last universal ancestor from which all known species have diverged through the process of evolution.
All individuals have hereditary material in the form of genes received from their parents, which they pass on to any offspring. Among offspring there are variations of genes due to the introduction of new genes via random changes called mutations or via reshuffling of existing genes during sexual reproduction. The offspring differs from the parent in minor random ways. If those differences are helpful, the offspring is more likely to survive and reproduce. This means that more offspring in the next generation will have that helpful difference and individuals will not have equal chances of reproductive success. In this way, traits that result in organisms being better adapted to their living conditions become more common in descendant populations. These differences accumulate resulting in changes within the population. This proce
Document 3:::
Adaptive type, in evolutionary biology, is any population or taxon with the potential to occupy, in part or in whole, a free or underutilized habitat or position in the general economy of nature. In an evolutionary sense, the emergence of a new adaptive type is usually the result of adaptive radiation in certain groups of organisms, giving rise to categories that can effectively exploit temporary or new environmental conditions.
Such evolutionary units, with their distinctive morphological, anatomical, physiological, and other genetically grounded characteristics and adjustments, are predisposed to occupy certain habitats or positions in the general economy of nature.
Simply put, an adaptive type is a group of organisms whose general biological properties represent a key that opens the entrance to a given adaptive zone in a given natural ecological complex.
Adaptive types are spatially and temporally specific. Because the general biological properties of these types are substantially genetically defined, the emergence of a new adaptive type in effect requires a corresponding change in population genetic structure, reflecting the perpetual tension between optimal adaptation to current living conditions and the maintenance of genetic variation for survival under possible new circumstances.
For example, the specific place in the economy of nature now occupied by humans existed millions of years before the human type appeared. Only when primate evolution (order Primates) reached a level capable of occupying that position was it opened, after which the new type spread through the living world with unprecedented acceleration. Culture, in the broadest sense, is the key adaptation of the adaptive type Homo sapiens for occupying this existing adaptive zone through work, also in the broadest sense of the term.
Document 4:::
Phenotypic plasticity refers to some of the changes in an organism's behavior, morphology and physiology in response to a unique environment. Fundamental to the way in which organisms cope with environmental variation, phenotypic plasticity encompasses all types of environmentally induced changes (e.g. morphological, physiological, behavioural, phenological) that may or may not be permanent throughout an individual's lifespan.
The term was originally used to describe developmental effects on morphological characters, but is now more broadly used to describe all phenotypic responses to environmental change, such as acclimation (acclimatization), as well as learning. The special case when differences in environment induce discrete phenotypes is termed polyphenism.
Generally, phenotypic plasticity is more important for immobile organisms (e.g. plants) than mobile organisms (e.g. most animals), as mobile organisms can often move away from unfavourable environments. Nevertheless, mobile organisms also have at least some degree of plasticity in at least some aspects of the phenotype.
One mobile organism with substantial phenotypic plasticity is Acyrthosiphon pisum of the aphid family, which exhibits the ability to interchange between asexual and sexual reproduction, as well as growing wings between generations when plants become too populated.
Water fleas (Daphnia magna) have shown both phenotypic plasticity and the ability to genetically evolve to deal with the heat stress of warmer, urban pond waters.
Examples
Plants
Phenotypic plasticity in plants includes the timing of transition from vegetative to reproductive growth stage, the allocation of more resources to the roots in soils that contain low concentrations of nutrients, the size of the seeds an individual produces depending on the environment, and the alteration of leaf shape, size, and thickness. Leaves are particularly plastic, and their growth may be altered by light levels. Leaves grown in the light ten
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Organisms changing over time is called?
A. variation
B. heterogenicity
C. evolution
D. generation
Answer:
|
|
sciq-2006
|
multiple_choice
|
Which theory explains how populations of organisms can change over time?
|
[
"intelligent selection",
"changes by natural selection",
"free by natural selection",
"evolution by natural selection"
] |
D
|
Relavent Documents:
Document 0:::
Tinbergen's four questions, named after 20th-century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour, also commonly referred to as levels of analysis. The framework suggests that an integrative understanding of behaviour must include both ultimate (evolutionary) explanations, in particular:
behavioural adaptive functions
phylogenetic history;
and proximate explanations, in particular:
underlying physiological mechanisms
ontogenetic/developmental history.
Four categories of questions and explanations
When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny).
This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem.
Evolutionary (ultimate) explanations
First question: Function (adaptation)
Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive.
The literature conceptualizes the relationship between function and evolution in two ways. On the one hand, function
Document 1:::
Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology.
The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis.
Subfields
Evolution is the central unifying concept in biology. Biology can be divided in various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution
Document 2:::
The history of life on Earth seems to show a clear trend; for example, it seems intuitive that there is a trend towards increasing complexity in living organisms. More recently evolved organisms, such as mammals, appear to be much more complex than organisms, such as bacteria, which have existed for a much longer period of time. However, there are theoretical and empirical problems with this claim. From a theoretical perspective, it appears that there is no reason to expect evolution to result in any largest-scale trends, although small-scale trends, limited in time and space, are expected (Gould, 1997). From an empirical perspective, it is difficult to measure complexity and, when it has been measured, the evidence does not support a largest-scale trend (McShea, 1996).
History
Many of the founding figures of evolution supported the idea of evolutionary progress, which has since fallen from favour, but the work of Francisco J. Ayala and Michael Ruse suggests it is still influential.
Hypothetical largest-scale trends
McShea (1998) discusses eight features of organisms that might indicate largest-scale trends in evolution: entropy, energy intensiveness, evolutionary versatility, developmental depth, structural depth, adaptedness, size, complexity. He calls these "live hypotheses", meaning that trends in these features are currently being considered by evolutionary biologists. McShea observes that the most popular hypothesis, among scientists, is that there is a largest-scale trend towards increasing complexity.
Evolutionary theorists agree that there are local trends in evolution, such as increasing brain size in hominids, but these directional changes do not persist indefinitely, and trends in opposite directions also occur (Gould, 1997). Evolution causes organisms to adapt to their local environment; when the environment changes, the direction of the trend may change. The question of whether there is evolutionary progress is better formulated as the question of whether
Document 3:::
The Altenberg Workshops in Theoretical Biology are expert meetings focused on a key issue of biological theory, hosted by the Konrad Lorenz Institute for Evolution and Cognition Research (KLI) since 1996. The workshops are organized by leading experts in their field, who invite a group of international top level scientists as participants for a 3-day working meeting in the Lorenz Mansion at Altenberg near Vienna, Austria. By this procedure the KLI intends to generate new conceptual advances and research initiatives in the biosciences, which, due to their explicit interdisciplinary nature, are attractive to a wide variety of scientists from practically all fields of biology and the neighboring disciplines.
Workshops and their topics
Cultural Niche Construction. Organized by Kevin Laland and Mike O'Brien. September 2011
Strategic Interaction in Humans and Other Animals. Organized by Simon Huttegger and Brian Skyrms. September 2011
The Meaning of "Theory" in Biology. Organized by Massimo Pigliucci, Kim Sterelny, and Werner Callebaut. June 2011
Biological and Physical Constraints on the Evolution of Form in Plants and Animals. Organized by Jeffrey H. Schwartz and Bruno Maresca. September 2010
Scaffolding in Evolution, Culture, and Cognition. Organized by Linnda Caporael, James Griesemer, and William Wimsatt. July 2010
Models of Man for Evolutionary Economics. Organized by Werner Callebaut, Christophe Heintz, and Luigi Marengo. September 2009
Human EvoDevo: The Role of Development in Human Evolution. Organized by Philipp Gunz and Philipp Mitteroecker. September 2009
Origins of EvoDevo - A tribute to Pere Alberch. Organized by Gerd B. Müller and Diego Rasskin-Gutman. September 2008
Measuring Biology - Quantitative Methods: Past and Future. Organized by Fred L. Bookstein and Katrin Schäfer. September 2008
Toward an Extended Evolutionary Synthesis. Organized by Massimo Pigliucci and Gerd B. Müller. July 2008
Innovation in Cultural Systems - Contributions from Evolutionary A
Document 4:::
This is a list of topics in evolutionary biology.
A
abiogenesis – adaptation – adaptive mutation – adaptive radiation – allele – allele frequency – allochronic speciation – allopatric speciation – altruism – anagenesis – anti-predator adaptation – applications of evolution – aposematism – Archaeopteryx – aquatic adaptation – artificial selection – atavism
B
Henry Walter Bates – biological organisation – Brassica oleracea – breed
C
Cambrian explosion – camouflage – Sean B. Carroll – catagenesis – gene-centered view of evolution – cephalization – Sergei Chetverikov – chronobiology – chronospecies – clade – cladistics – climatic adaptation – coalescent theory – co-evolution – co-operation – coefficient of relationship – common descent – convergent evolution – creation–evolution controversy – cultivar – conspecific song preference
D
Darwin (unit) – Charles Darwin – Darwinism – Darwin's finches – Richard Dawkins – directed mutagenesis – Directed evolution – directional selection – Theodosius Dobzhansky – dog breeding – domestication – domestication of the horse
E
E. coli long-term evolution experiment – ecological genetics – ecological selection – ecological speciation – Endless Forms Most Beautiful – endosymbiosis – error threshold (evolution) – evidence of common descent – evolution – evolutionary arms race – evolutionary capacitance
Evolution: of ageing – of the brain – of cetaceans – of complexity – of dinosaurs – of the eye – of fish – of the horse – of insects – of human intelligence – of mammalian auditory ossicles – of mammals – of monogamy – of sex – of sirenians – of tetrapods – of the wolf
evolutionary developmental biology – evolutionary dynamics – evolutionary game theory – evolutionary history of life – evolutionary history of plants – evolutionary medicine – evolutionary neuroscience – evolutionary psychology – evolutionary radiation – evolutionarily stable strategy – evolutionary taxonomy – evolutionary tree – evolvability – experimental evol
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which theory explains how populations of organisms can change over time?
A. intelligent selection
B. changes by natural selection
C. free by natural selection
D. evolution by natural selection
Answer:
|
|
sciq-10366
|
multiple_choice
|
Where are the taste buds located in humans?
|
[
"mouth",
"teeth",
"tongue",
"lips"
] |
C
|
Relavent Documents:
Document 0:::
Taste buds are clusters of taste receptor cells, which are also known as gustatory cells. The taste receptors are located around the small structures known as papillae found on the upper surface of the tongue, soft palate, upper esophagus, the cheek, and epiglottis. These structures are involved in detecting the five elements of taste perception: saltiness, sourness, bitterness, sweetness and savoriness (umami). A popular myth assigns these different tastes to different regions of the tongue; in fact, these tastes can be detected by any area of the tongue. Via small openings in the tongue epithelium, called taste pores, parts of the food dissolved in saliva come into contact with the taste receptors. These are located on top of the taste receptor cells that constitute the taste buds. The taste receptor cells send information detected by clusters of various receptors and ion channels to the gustatory areas of the brain via the seventh, ninth and tenth cranial nerves.
On average, the human tongue has 2,000-8,000 taste buds. The average lifespan of these is estimated to be 10 days.
Types of papillae
The taste buds on the tongue sit on raised protrusions of the tongue surface called papillae. There are four types of lingual papillae; all except one contain taste buds:
Fungiform papillae - as the name suggests, these are slightly mushroom-shaped if looked at in longitudinal section. These are present mostly at the dorsal surface of the tongue, as well as at the sides. Innervated by facial nerve.
Foliate papillae - these are ridges and grooves towards the posterior part of the tongue found at the lateral borders. Innervated by facial nerve (anterior papillae) and glossopharyngeal nerve (posterior papillae).
Circumvallate papillae - there are only about 10 to 14 of these papillae on most people, and they are present at the back of the oral part of the tongue. They are arranged in a circular-shaped row just in front of the sulcus terminalis of the tongue. They are ass
Document 1:::
A taste receptor is a type of cellular receptor which facilitates the sensation of taste. When food or other substances enter the mouth, molecules interact with saliva and are bound to taste receptors in the oral cavity and other locations. Molecules which give a sensation of taste (tastants) are considered "sapid".
Vertebrate taste receptors are divided into two families:
Type 1, sweet, first characterized in 2001.
Type 2, bitter, first characterized in 2000. In humans there are 25 known different bitter receptors, in cats there are 12, in chickens there are three, and in mice there are 35 known different bitter receptors.
Visual, olfactory, "sapictive" (the perception of tastes), trigeminal (hot, cool), and mechanical senses all contribute to the perception of taste. Of these, transient receptor potential cation channel subfamily V member 1 (TRPV1) vanilloid receptors are responsible for the perception of heat from some molecules such as capsaicin, and a CMR1 receptor is responsible for the perception of cold from molecules such as menthol, eucalyptol, and icilin.
Tissue distribution
The gustatory system consists of taste receptor cells in taste buds. Taste buds, in turn, are contained in structures called papillae. There are three types of papillae involved in taste: fungiform papillae, foliate papillae, and circumvallate papillae. (The fourth type, filiform papillae, does not contain taste buds.) Beyond the papillae, taste receptors are also in the palate and early parts of the digestive system like the larynx and upper esophagus. There are three cranial nerves that innervate the tongue: the vagus nerve, the glossopharyngeal nerve, and the facial nerve. The glossopharyngeal nerve and the chorda tympani branch of the facial nerve innervate the TAS1R and TAS2R taste receptors. Next to the taste receptors on the tongue, the gut epithelium is also equipped with a subtle chemosensory system that communicates the sensory information to several effector systems involved
Document 2:::
The primary gustatory cortex (GC) is a brain structure responsible for the perception of taste. It consists of two substructures: the anterior insula on the insular lobe and the frontal operculum on the inferior frontal gyrus of the frontal lobe. Because of its composition the primary gustatory cortex is sometimes referred to in literature as the AI/FO(Anterior Insula/Frontal Operculum). By using extracellular unit recording techniques, scientists have elucidated that neurons in the AI/FO respond to sweetness, saltiness, bitterness, and sourness, and they code the intensity of the taste stimulus.
Role in the taste pathway
Like the olfactory system, the taste system is defined by its specialized peripheral receptors and central pathways that relay and process taste information. Peripheral taste receptors are found on the upper surface of the tongue, soft palate, pharynx, and the upper part of the esophagus. Taste cells synapse with primary sensory axons that run in the chorda tympani and greater superficial petrosal branches of the facial nerve (cranial nerve VII), the lingual branch of the glossopharyngeal nerve (cranial nerve IX), and the superior laryngeal branch of the vagus nerve (Cranial nerve X) to innervate the taste buds in the tongue, palate, epiglottis, and esophagus respectively. The central axons of these primary sensory neurons in the respective cranial nerve ganglia project to rostral and lateral regions of the nucleus of the solitary tract in the medulla, which is also known as the gustatory nucleus of the solitary tract complex. Axons from the rostral (gustatory) part of the solitary nucleus project to the ventral posterior complex of the thalamus, where they terminate in the medial half of the ventral posterior medial nucleus. This nucleus projects in turn to several regions of the neocortex which includes the gustatory cortex (the frontal operculum and the insula), which becomes activated when the subject is consuming and experiencing t
Document 3:::
The gustatory system or sense of taste is the sensory system that is partially responsible for the perception of taste (flavor). Taste is the perception stimulated when a substance in the mouth reacts chemically with taste receptor cells located on taste buds in the oral cavity, mostly on the tongue. Taste, along with the sense of smell and trigeminal nerve stimulation (registering texture, pain, and temperature), determines flavors of food and other substances. Humans have taste receptors on taste buds and other areas, including the upper surface of the tongue and the epiglottis. The gustatory cortex is responsible for the perception of taste.
The tongue is covered with thousands of small bumps called papillae, which are visible to the naked eye. Within each papilla are hundreds of taste buds. The exception to this is the filiform papillae that do not contain taste buds. There are between 2000 and 5000 taste buds that are located on the back and front of the tongue. Others are located on the roof, sides and back of the mouth, and in the throat. Each taste bud contains 50 to 100 taste receptor cells.
Taste receptors in the mouth sense the five basic tastes: sweetness, sourness, saltiness, bitterness, and savoriness (also known as savory or umami). Scientific experiments have demonstrated that these five tastes exist and are distinct from one another. Taste buds are able to tell different tastes apart when they interact with different molecules or ions. Sweetness, savoriness, and bitter tastes are triggered by the binding of molecules to G protein-coupled receptors on the cell membranes of taste buds. Saltiness and sourness are perceived when alkali metals or hydrogen ions meet taste buds, respectively.
The basic tastes contribute only partially to the sensation and flavor of food in the mouth—other factors include smell, detected by the olfactory epithelium of the nose; texture, detected through a variety of mechanoreceptors, muscle nerves, etc.; temperature, det
Document 4:::
The tongue map or taste map is a common misconception that different sections of the tongue are exclusively responsible for different basic tastes. It is illustrated with a schematic map of the tongue, with certain parts of the tongue labeled for each taste. Although widely taught in schools, this is incorrect; all taste sensations come from all regions of the tongue, although certain parts are more sensitive to certain tastes.
History
The theory behind this map originated from a paper written by Harvard psychologist Edwin Boring, which was a translation of a German paper, Zur Psychophysik des Geschmackssinnes, written by David P. Hänig in 1901.
The paper showed minute differences in threshold detection levels across the tongue, but these differences were later taken out of context and the minute difference in threshold sensitivity was misconstrued in textbooks as a difference in sensation.
While some parts of the tongue may be able to detect a taste before the others do, all parts are equally capable of conveying the qualia of all tastes. Threshold sensitivity may differ across the tongue, but intensity of sensation does not.
The same paper included a taste bud distribution diagram that showed a "taste belt".
In 1974, Virginia Collings investigated the topic again, and confirmed that all the tastes exist on all parts of the tongue.
Into the late 1990s, tongue map experiments were a teaching tool in high school biology classes. Students were given strips of paper with different tastes on them and told where each sweet, salty, etc. taste should be more noticeable. They were then instructed to touch those taste strips to different areas of their lab partner's tongue and record the (expected) sensation.
Taste belt
The misinterpreted diagram that sparked this myth shows human taste buds distributed in a "taste belt" along the inside of th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where are the taste buds located in humans?
A. mouth
B. teeth
C. tongue
D. lips
Answer:
|
|
sciq-911
|
multiple_choice
|
What do fungi use to penetrate deep into decaying matter?
|
[
"flagella",
"hyphae",
"cilia",
"cytoplasm"
] |
B
|
Relavent Documents:
Document 0:::
Mold control and prevention is a conservation activity that is performed in libraries and archives to protect books, documents and other materials from deterioration caused by mold growth. Mold prevention consists of different methods, such as chemical treatments, careful environmental control, and manual cleaning. Preservationists use one or a combination of these methods to combat mold spores in library and archival collections.
Due to the resilient nature of mold and its potential for damage to library collections, mold prevention has become an important activity among preservation librarians. Although mold is naturally present in both indoor and outdoor environments, under the right circumstances it can become active after being in a dormant state. Mold growth responds to increased moisture, high humidity, and warm temperatures. Library collections are particularly vulnerable to mold since mold thrives on organic, cellulose-based materials such as paper, wood, and textiles made of natural fibers. Changes in the moisture in the atmosphere can lead to mold growth and irreparable damage to library collections.
Mold
Mold is a generic term for a specific type of fungi. Mildew may also refer to types of mold. Since there are so many species of mold, their appearance varies in color and growth habit. In general, active mold has a musty odor and appears fuzzy, slimy, or damp. Inactive mold looks dry and powdery.
Mold propagates via spores, which are always present in the environment. Mold spores can be transferred to an object by mechanical instruments or air circulation. When spores attach to another organism, and the environment is favorable, they begin to germinate. Mold produces mycelium, whose growth pattern resembles cobwebs. Mycelium allows the mold to obtain food and nutrients through the host. Inevitably, the mycelium produces spore sacs and releases new spores into the air. Eventually the spores land on new material, and the reproductive cycle begins again.
Document 1:::
Mycelium, the fungal equivalent of roots in plants, has been identified as an ecologically friendly substitute to a litany of materials throughout different industries, including but not limited to packaging, fashion and building materials. Such substitutes present a biodegradable alternative (also known as a "Living Building Material") to conventional materials.
Mycelium was most notably first examined as an ecologically friendly material alternative in 2007. It was widely popularized by Eben Bayer and Gavin McIntyre through their work developing mycelium packaging and founding their company, Ecovative, during their time at Rensselaer Polytechnic Institute. Since its inception, the material's function has diversified into many niches.
Species and biological structures
Mycelium-based composites require a fungus and substrate. “Mycelium” is a term referring to the network of branching fibers, called hyphae, that are created by a fungus to grow and feed. When introduced to a substrate, the fungi will penetrate using their mycelium network, which then breaks down the substrate into basic nutrients for the fungi. By this method, the fungi can grow. For mycelium-based composites, the substrate is not fully broken down during this process and is instead kept intertwined with the mycelium.
The main components of fungi are chitin, polysaccharides, lipids, and proteins. Different compositional amounts of these molecules change the properties of the composites, as do different substrates. Substrates with higher amounts of chitin are harder for the mycelium to break down and lead to a stiffer composite.
Commonly used species of fungi to grow mycelium are aerobic basidiomycetes, which include Ganoderma sp., Pleurotus sp., and Trametes sp. Basidiomycetes have favorable properties as fungi for creating mycelium based composites because they grow at a relatively steady and quick pace, and can use many different types of organic waste as subs
Document 2:::
Mycelial cords are linear aggregations of parallel-oriented hyphae. The mature cords are composed of wide, empty vessel hyphae surrounded by narrower sheathing hyphae. Cords may look similar to plant roots, and also frequently have similar functions; hence they are also called rhizomorphs (literally, "root-forms"). As well as growing underground or on the surface of trees and other plants, some fungi make mycelial cords which hang in the air from vegetation.
Mycelial cords are capable of conducting nutrients over long distances. For instance, they can transfer nutrients to a developing fruiting body, or enable wood-rotting fungi to grow through soil from an established food base in search of new food sources. For parasitic fungi, they can help spread infection by growing from established clusters to uninfected parts. The cords of some wood-rotting fungi (like Serpula lacrymans) may be capable of penetrating masonry.
The mechanism of the cord formation is not yet precisely understood. Mathematical models suggest that some fields or gradients of signalling chemicals, parallel to the cord axis, may be involved.
Rhizomorphs can grow up to in length and in diameter.
Rhizomorph
Rhizomorphs are a special morphological adaptation: root-like structures found in fungi. These structures are composed of parallel-oriented hyphae and occur in several species of wood-decay and ectomycorrhizal basidiomycete as well as ascomycete fungi. Rhizomorphs facilitate colonization by some dry-rot fungi, such as Serpula lacrymans and Meruliporia incrassata, which cause damage to homes in Europe and North America, respectively, by decaying wood. Another genus very well studied for its abundant rhizomorph production is Armillaria, with some species being pathogens and others saprotrophs of trees and shrubs.
Known for their role in facilitating the spread and colonization of fungi in the environment, rhizomorphs are the most complex organs produced b
Document 3:::
Entangled Life: How fungi make our worlds, change our minds and shape our futures is a 2020 non-fiction book on mycology by British biologist Merlin Sheldrake. His first book, it was published by Random House on 12 May 2020.
Summary
The book looks at fungi from a number of angles, including decomposition, fermentation, nutrient distribution, psilocybin production, the evolutionary role fungi play in plants, and the ways in which humans relate to the fungal kingdom. It uses music and philosophy to illustrate its thesis, and introduces readers to a number of central strands of research on mycology. It is also a personal account of Sheldrake's experiences with fungi.
Sheldrake is an expert in mycorrhizal fungi, holds a PhD in tropical ecology from the University of Cambridge for his work on underground fungal networks in tropical forests in Panama, where he was a predoctoral research fellow of the Smithsonian Tropical Research Institute, and his research is primarily in the fields of fungal biology and the history of Amazonian ethnobotany. He is the son of Rupert Sheldrake, a biologist, and Jill Purce, an author and therapist, and the brother of musician Cosmo Sheldrake.
Reception
The book was published to largely positive reviews. Jennifer Szalai of The New York Times called the book an "ebullient and ambitious exploration" of fungi, adding, "reading it left me not just moved but altered, eager to disseminate its message of what fungi can do." Eugenia Bone of The Wall Street Journal called it "a gorgeous book of literary nature writing in the tradition of [Robert] Macfarlane and John Fowles, ripe with insight and erudition." Rachel Cooke of The Observer called it "an astonishing book that could alter our perceptions of fungi forever." Richard Kerridge, reviewing the book in The Guardian, wrote that "when we look closely [at fungi], we meet large, unsettling questions... [Sheldrake] carries us easily into these questions with ebullience and precision."
The book was
Document 4:::
A fungus (plural: fungi or funguses) is any member of the group of eukaryotic organisms that includes microorganisms such as yeasts and molds, as well as the more familiar mushrooms. These organisms are classified as one of the traditional eukaryotic kingdoms, along with Animalia, Plantae and either Protista or Protozoa and Chromista.
A characteristic that places fungi in a different kingdom from plants, bacteria, and some protists is chitin in their cell walls. Fungi, like animals, are heterotrophs; they acquire their food by absorbing dissolved molecules, typically by secreting digestive enzymes into their environment. Fungi do not photosynthesize. Growth is their means of mobility, except for spores (a few of which are flagellated), which may travel through the air or water. Fungi are the principal decomposers in ecological systems. These and other differences place fungi in a single group of related organisms, named the Eumycota (true fungi or Eumycetes), that share a common ancestor (i.e. they form a monophyletic group), an interpretation that is also strongly supported by molecular phylogenetics. This fungal group is distinct from the structurally similar myxomycetes (slime molds) and oomycetes (water molds). The discipline of biology devoted to the study of fungi is known as mycology (from the Greek mykes, mushroom). In the past mycology was regarded as a branch of botany, although it is now known that fungi are genetically more closely related to animals than to plants.
Abundant worldwide, most fungi are inconspicuous because of the small size of their structures, and their cryptic lifestyles in soil or on dead matter. Fungi include symbionts of plants, animals, or other fungi and also parasites. They may become noticeable when fruiting, either as mushrooms or as molds. Fungi perform an essential role in the decomposition of organic matter and have fundamental roles in nutrient cycling and exchange in the environment. They have long been used as a direct source of h
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do fungi use to penetrate deep into decaying matter?
A. flagella
B. hyphae
C. cilia
D. cytoplasm
Answer:
|
|
ai2_arc-323
|
multiple_choice
|
Which of the following is an acquired human characteristic?
|
[
"Eye color",
"Hair color",
"Height",
"Verbal accent"
] |
D
|
Relavent Documents:
Document 0:::
Research on the heritability of IQ inquires into the degree of variation in IQ within a population that is due to genetic variation between individuals in that population. There has been significant controversy in the academic community about the heritability of IQ since research on the issue began in the late nineteenth century. Intelligence in the normal range is a polygenic trait, meaning that it is influenced by more than one gene; in the case of intelligence, at least 500 genes are involved. Further, explaining the similarity in IQ of closely related persons requires careful study because environmental factors may be correlated with genetic factors.
Early twin studies of adult individuals have found a heritability of IQ between 57% and 73%, with some recent studies showing heritability for IQ as high as 80%. IQ goes from being weakly correlated with genetics for children, to being strongly correlated with genetics for late teens and adults. The heritability of IQ increases with the child's age and reaches a plateau at 14-16 years old, continuing at that level well into adulthood. However, poor prenatal environment, malnutrition and disease are known to have lifelong deleterious effects.
Although IQ differences between individuals have been shown to have a large hereditary component, it does not follow that disparities in IQ between groups have a genetic basis. The scientific consensus is that genetics does not explain average differences in IQ test performance between racial groups.
Heritability and caveats
Heritability is a statistic used in the fields of breeding and genetics that estimates the degree of variation in a phenotypic trait in a population that is due to genetic variation between individuals in that population. The concept of heritability can be expressed in the form of the following question: "What is the proportion of the variation in a given trait within a population that is not explained by the environment or random chance?"
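A minimal illustration of how twin correlations like those cited above can be converted into a heritability estimate is Falconer's classical formula, h^2 = 2(r_MZ - r_DZ). The formula is a textbook simplification not discussed in this passage, and the correlation values in the sketch below are hypothetical (Python):

# Falconer's formula: broad-sense heritability estimated from identical (MZ)
# and fraternal (DZ) twin correlations. A classical approximation only.
def falconer_heritability(r_mz: float, r_dz: float) -> float:
    return 2.0 * (r_mz - r_dz)

# Hypothetical correlations for an IQ-like trait, chosen for illustration:
print(falconer_heritability(0.85, 0.60))  # -> 0.5, i.e. roughly 50% heritable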
Estimates of heritabi
Document 1:::
Evolutionary educational psychology is the study of the relation between inherent folk knowledge and abilities and accompanying inferential and attributional biases as these influence academic learning in evolutionarily novel cultural contexts, such as schools and the industrial workplace. The fundamental premises and principles of this discipline are presented below.
Premises
The premises of evolutionary educational psychology state there are:
(a) aspects of mind and brain that have evolved to draw the individuals’ attention to and facilitate the processing of social (folk psychology), biological (folk biology), physical (folk physics) information patterns that facilitated survival or reproductive outcomes during human evolution (Cosmides & Tooby, 1994; Geary, 2005; Gelman, 1990; Pinker, 1997; Shepard, 1994; Simon, 1956);
(b) although plastic to some degree, these primary abilities are inherently constrained to the extent associated information patterns tended to be consistent across generations and within lifetimes (e.g., Caramazza & Shelton, 1998; Geary & Huffman, 2002);
(c) other aspects of mind and brain evolved to enable the mental generation of potential future social, ecological, or climatic conditions and enable rehearsal of behaviors to cope with variation in these conditions, and are now known as general fluid intelligence, or gF (including skill at everyday reasoning/problem solving; Chiappe & MacDonald, 2005; Geary, 2005; Mithen, 1996); and
(d) children are inherently motivated to learn in folk domains, with the associated attentional and behavioral biases resulting in experiences that automatically and implicitly flesh out and adapt these systems to local conditions (Gelman, 1990; Gelman & Williams, 1998; Gelman, 2003).
Principles
The principles of evolutionary educational psychology represent the foundational assumptions for an evolutionary educational psychology. The gist is knowledge and expertise that is useful in the cultural milieu or ecolo
Document 2:::
An acquired characteristic is a non-heritable change in a function or structure of a living organism caused after birth by disease, injury, accident, deliberate modification, variation, repeated use, disuse, misuse, or other environmental influence. Acquired traits are synonymous with acquired characteristics. They are not passed on to offspring through reproduction.
The changes that constitute acquired characteristics can have many manifestations and degrees of visibility, but they all have one thing in common. They change a facet of a living organism's function or structure after birth.
For example:
The muscles acquired by a bodybuilder through physical training and diet.
The loss of a limb due to an injury.
The miniaturization of bonsai plants through careful cultivation techniques.
Acquired characteristics can be minor and temporary like bruises, blisters, or shaving body hair. Permanent but inconspicuous or invisible ones are corrective eye surgery and organ transplant or removal.
Semi-permanent but inconspicuous or invisible traits are vaccination and laser hair removal. Perms, tattoos, scars, and amputations are semi-permanent and highly visible.
Applying makeup, nail polish, dyeing one's hair, applying henna to the skin, and tooth whitening are not examples of acquired traits. They change the appearance of a facet of an organism, but do not change the structure or functionality.
Inheritance of acquired characteristics was historically proposed by renowned theorists such as Hippocrates, Aristotle, and French naturalist Jean-Baptiste Lamarck. Conversely, this hypothesis was denounced by other renowned theorists such as Charles Darwin.
Today, although Lamarckism is generally discredited, there is still debate on whether some acquired characteristics in organisms are actually inheritable.
Disputes
Acquired characteristics, by definition, are characteristics that are gained by an organism after birth as a result of external influences or the organism's ow
Document 3:::
The Generalist Genes hypothesis of learning abilities and disabilities was originally coined in an article by Plomin & Kovas (2005).
The Generalist Genes hypothesis suggests that most genes associated with common learning disabilities and abilities are generalist in three ways.
Firstly, the same genes that influence common learning abilities (e.g., high reading aptitude) are also responsible for common learning disabilities (e.g., reading disability): they are strongly genetically correlated.
Secondly, many of the genes associated with one aspect of a learning disability (e.g., vocabulary problems) also influence other aspects of this learning disability (e.g., grammar problems).
Thirdly, genes that influence one learning disability (e.g., reading disability) are largely the same as those that influence other learning disabilities (e.g., mathematics disability).
The Generalist Genes hypothesis has important implications for education, cognitive sciences and molecular genetics.
Document 4:::
Personality traits are patterns of thoughts, feelings and behaviors that reflect the tendency to respond in certain ways under certain circumstances.
Personality is influenced by genetic and environmental factors and is associated with mental health. Besides environmental factors, genetic variants can be detected for personality traits; these traits are polygenic. Significant genetic variants are present for most behavioral traits, and there is consistency in the detection of genetic variants and genomic associations for traits derived from pedigree studies.
Trait theory
The Big Five personality traits, also known as the five-factor model (FFM) or the OCEAN model, is the prevailing model for personality traits. When factor analysis (a statistical technique) is applied to personality survey data, some words or questionnaire items used to describe aspects of personality are often applied to the same person. For example, someone described as conscientious is more likely to be described as "always prepared" rather than "messy". This theory uses descriptors of common language and therefore suggests five broad dimensions commonly used to describe the human personality and psyche.
The five factors are:
Openness to experience (inventive/curious vs. consistent/cautious)
Conscientiousness (efficient/organized vs. easy-going/careless)
Extraversion (outgoing/energetic vs. solitary/reserved)
Agreeableness (friendly/compassionate vs. challenging/detached)
Neuroticism (sensitive/nervous vs. secure/confident).
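As a rough sketch of the factor-analysis step described above, the following Python fragment fits a five-factor model to synthetic survey data; the random data, sample size, and item count are purely illustrative, not a real personality questionnaire:

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))       # 200 respondents x 20 survey items (synthetic)
fa = FactorAnalysis(n_components=5)  # extract five latent factors (cf. the Big Five)
scores = fa.fit_transform(X)         # per-respondent factor scores, shape (200, 5)
print(fa.components_.shape)          # (5, 20): each factor's loading on each item

In a real study the loadings, not random noise, would cluster related questionnaire items under each of the five factors.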
Methods
The methods mostly used in studies of the genomics of personality traits are of two kinds: analytic methods and non-analytic ones (such as questionnaires).
Analytic
Analytical techniques that can be used to measure genomics of personality include:
GWAS, or genome-wide association study, is a method used to identify markers (single nucleotide polymorphisms, SNPs) across the genome in order to better understand the contribution of genetics to personality traits. Since
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which of the following is an acquired human characteristic?
A. Eye color
B. Hair color
C. Height
D. Verbal accent
Answer:
|
|
sciq-6827
|
multiple_choice
|
Ions can be formed when atoms lose what other particles?
|
[
"electrons",
"protons",
"neutrons",
"shells"
] |
A
|
Relavent Documents:
Document 0:::
An ion is an atom or molecule with a net electrical charge. The charge of an electron is considered to be negative by convention and this charge is equal and opposite to the charge of a proton, which is considered to be positive by convention. The net charge of an ion is not zero because its total number of electrons is unequal to its total number of protons.
A cation is a positively charged ion with fewer electrons than protons while an anion is a negatively charged ion with more electrons than protons. Opposite electric charges are pulled towards one another by electrostatic force, so cations and anions attract each other and readily form ionic compounds.
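As a small illustration of the proton/electron bookkeeping above, a hypothetical helper (the function name and example species are illustrative) that classifies a monatomic species by its net charge:

```python
def classify(protons: int, electrons: int) -> str:
    """Label a species as a cation, anion, or neutral atom."""
    charge = protons - electrons  # net charge in units of the elementary charge
    if charge > 0:
        return f"cation (charge +{charge})"
    if charge < 0:
        return f"anion (charge {charge})"
    return "neutral atom"

print(classify(11, 10))  # sodium that lost one electron   -> cation (charge +1)
print(classify(17, 18))  # chlorine that gained one electron -> anion (charge -1)
```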
Ions consisting of only a single atom are termed atomic or monatomic ions, while two or more atoms form molecular ions or polyatomic ions. In the case of physical ionization in a fluid (gas or liquid), "ion pairs" are created by spontaneous molecule collisions, where each generated pair consists of a free electron and a positive ion. Ions are also created by chemical interactions, such as the dissolution of a salt in liquids, or by other means, such as passing a direct current through a conducting solution, dissolving an anode via ionization.
History of discovery
The word ion was coined from the Greek neuter present participle of ienai (ἰέναι), meaning "to go". A cation is something that moves down (κάτω, pronounced kato, meaning "down") and an anion is something that moves up (ἄνω, pronounced ano, meaning "up"). They are so called because ions move toward the electrode of opposite charge. This term was introduced (after a suggestion by the English polymath William Whewell) by English physicist and chemist Michael Faraday in 1834 for the then-unknown species that goes from one electrode to the other through an aqueous medium. Faraday did not know the nature of these species, but he knew that since metals dissolved into and entered a solution at one electrode and new metal came forth from a solution at the other electrode; that some kind of
Document 1:::
An atom is a particle that consists of a nucleus of protons and neutrons surrounded by an electromagnetically-bound cloud of electrons. The atom is the basic particle of the chemical elements, and the chemical elements are distinguished from each other by the number of protons that are in their atoms. For example, any atom that contains 11 protons is sodium, and any atom that contains 29 protons is copper. The number of neutrons defines the isotope of the element.
Atoms are extremely small, typically around 100 picometers across. A human hair is about a million carbon atoms wide. This is smaller than the shortest wavelength of visible light, which means humans cannot see atoms with conventional microscopes. Atoms are so small that accurately predicting their behavior using classical physics is not possible due to quantum effects.
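A quick back-of-envelope check of the hair-width comparison above; both input values are rough assumed figures, not numbers from the text.

```python
hair_width_m = 1e-4          # ~100 micrometres, a typical human hair
carbon_diameter_m = 1.5e-10  # ~150 pm, roughly a carbon atom's covalent diameter
atoms_across = hair_width_m / carbon_diameter_m
print(f"{atoms_across:.1e} atoms across")  # ~6.7e5, i.e. on the order of a million
```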
More than 99.94% of an atom's mass is in the nucleus. Each proton has a positive electric charge, while each electron has a negative charge, and the neutrons, if any are present, have no electric charge. If the numbers of protons and electrons are equal, as they normally are, then the atom is electrically neutral. If an atom has more electrons than protons, then it has an overall negative charge, and is called a negative ion (or anion). Conversely, if it has more protons than electrons, it has a positive charge, and is called a positive ion (or cation).
The electrons of an atom are attracted to the protons in an atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by the nuclear force. This force is usually stronger than the electromagnetic force that repels the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force. In this case, the nucleus splits and leaves behind different elements. This is a form of nuclear decay.
Atoms can attach to one or more other atoms by chemical bonds to
Document 2:::
In physics, a charge carrier is a particle or quasiparticle that is free to move, carrying an electric charge, especially the particles that carry electric charges in electrical conductors. Examples are electrons, ions and holes. The term is used most commonly in solid state physics. In a conducting medium, an electric field can exert force on these free particles, causing a net motion of the particles through the medium; this is what constitutes an electric current.
The electron and the proton are the elementary charge carriers, each carrying one elementary charge (e), of the same magnitude and opposite sign.
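To give a sense of scale for this carrier motion, a short calculation of how many elementary charges pass a point each second in a 1 A current; the 1 A figure is an arbitrary illustrative choice.

```python
ELEMENTARY_CHARGE_C = 1.602176634e-19  # exact SI value, in coulombs
current_a = 1.0                        # amperes = coulombs per second
carriers_per_second = current_a / ELEMENTARY_CHARGE_C
print(f"{carriers_per_second:.2e} carriers/s")  # ~6.24e18
```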
In conductors
In conducting media, particles serve to carry charge:
In many metals, the charge carriers are electrons. One or two of the valence electrons from each atom are able to move about freely within the crystal structure of the metal. The free electrons are referred to as conduction electrons, and the cloud of free electrons is called a Fermi gas. Many metals have electron and hole bands. In some, the majority carriers are holes.
In electrolytes, such as salt water, the charge carriers are ions, which are atoms or molecules that have gained or lost electrons so they are electrically charged. Atoms that have gained electrons so they are negatively charged are called anions; atoms that have lost electrons so they are positively charged are called cations. Cations and anions of the dissociated liquid also serve as charge carriers in melted ionic solids (see e.g. the Hall–Héroult process for an example of electrolysis of a melted ionic solid). Proton conductors are electrolytic conductors employing positive hydrogen ions as carriers.
In a plasma, an electrically charged gas which is found in electric arcs through air, neon signs, and the sun and stars, the electrons and cations of ionized gas act as charge carriers.
In a vacuum, free electrons can act as charge carriers. In the electronic component known as the vacuum tube (also called valve), the mobil
Document 3:::
In physics and chemistry, ionization energy (IE; spelled ionisation energy in British English) is the minimum energy required to remove the most loosely bound electron of an isolated gaseous atom, positive ion, or molecule. The first ionization energy is quantitatively expressed as
X(g) + energy ⟶ X+(g) + e−
where X is any atom or molecule, X+ is the resultant ion when the original atom was stripped of a single electron, and e− is the removed electron. Ionization energy is positive for neutral atoms, meaning that the ionization is an endothermic process. Roughly speaking, the closer the outermost electrons are to the nucleus of the atom, the higher the atom's ionization energy.
In physics, ionization energy is usually expressed in electronvolts (eV) or joules (J). In chemistry, it is expressed as the energy to ionize a mole of atoms or molecules, usually as kilojoules per mole (kJ/mol) or kilocalories per mole (kcal/mol).
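As a worked conversion between the two conventions (the hydrogen first ionization energy of 13.6 eV is a standard reference value used here only as an example):

```latex
1\ \mathrm{eV} \times N_A
  = 1.602\times10^{-19}\ \mathrm{J} \times 6.022\times10^{23}\ \mathrm{mol^{-1}}
  \approx 96.5\ \mathrm{kJ/mol},
\qquad
IE_1(\mathrm{H}) = 13.6\ \mathrm{eV} \approx 13.6 \times 96.5 \approx 1312\ \mathrm{kJ/mol}.
```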
Comparison of ionization energies of atoms in the periodic table reveals two periodic trends which follow the rules of Coulombic attraction:
Ionization energy generally increases from left to right within a given period (that is, row).
Ionization energy generally decreases from top to bottom in a given group (that is, column).
The latter trend results from the outer electron shell being progressively farther from the nucleus, with the addition of one inner shell per row as one moves down the column.
The nth ionization energy refers to the amount of energy required to remove the most loosely bound electron from the species having a positive charge of (n − 1). For example, the first three ionization energies are defined as follows:
1st ionization energy is the energy that enables the reaction X ⟶ X+ + e−
2nd ionization energy is the energy that enables the reaction X+ ⟶ X2+ + e−
3rd ionization energy is the energy that enables the reaction X2+ ⟶ X3+ + e−
The most notable influences that determine ionization ener
Document 4:::
The protonosphere is a layer of the Earth's atmosphere (or any planet with a similar atmosphere) where the dominant components are atomic hydrogen and ionic hydrogen (protons). It is the outer part of the ionosphere, and extends to the interplanetary medium. Hydrogen dominates in the outermost layers because it is the lightest gas, and in the heterosphere, mixing is not strong enough to overcome differences in constituent gas densities. Charged particles are created by incoming ionizing radiation, mostly from solar radiation.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Ions can be formed when atoms lose what other particles?
A. electrons
B. protons
C. neutrons
D. shells
Answer:
|
|
ai2_arc-180
|
multiple_choice
|
Which of these events occurs about every three months?
|
[
"high tide",
"new moon",
"new season",
"solar eclipse"
] |
C
|
Relavent Documents:
Document 0:::
Named meteor showers recur at approximately the same dates each year. They appear to radiate from a certain point in the sky, known as the radiant, and vary in the speed, frequency and brightness of the meteors. As of November 2019, there are 112 established meteor showers.
Table of meteor showers
Dates are given for 2023. The dates will vary from year to year due to the leap year cycle. This list includes showers with radiants in both the northern and southern hemispheres. There is some overlap, but generally showers whose radiants have positive declinations are best seen from the northern hemisphere, and those with negative declinations are best observed from the southern hemisphere.
See also
Lists of astronomical objects
Sources
This list of meteor streams and peak activity times is based on data from the International Meteor Organization, while most of the parent body associations are from Gary W. Kronk's book Meteor Showers: A Descriptive Catalog (Enslow Publishers, New Jersey) and from Peter Jenniskens's book Meteor Showers and Their Parent Comets (Cambridge University Press, Cambridge, UK).
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 2:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just as in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both the undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 3:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 4:::
A transient lunar phenomenon (TLP) or lunar transient phenomenon (LTP) is a short-lived change in light, color or appearance on the surface of the Moon. The term was created by Patrick Moore in his co-authorship of NASA Technical Report R-277 Chronological Catalog of Reported Lunar Events, published in 1968.
Claims of short-lived lunar phenomena go back at least 1,000 years, with some having been observed independently by multiple witnesses or reputable scientists. Nevertheless, the majority of transient lunar phenomenon reports are irreproducible and do not possess adequate control experiments that could be used to distinguish among alternative hypotheses to explain their origins.
Most lunar scientists will acknowledge that transient events such as outgassing and impact cratering do occur over geologic time. The controversy lies in the frequency of such events.
Description of events
Reports of transient lunar phenomena range from foggy patches to permanent changes of the lunar landscape. Cameron classifies these as (1) gaseous, involving mists and other forms of obscuration, (2) reddish colorations, (3) green, blue or violet colorations, (4) brightenings, and (5) darkening. Two extensive catalogs of transient lunar phenomena exist, with the most recent tallying 2,254 events going back to the 6th century. Of the most reliable of these events, at least one-third come from the vicinity of the Aristarchus plateau.
An overview of the more famous historical accounts of transient phenomena include the following:
Pre 1700
On June 18, 1178, five or more monks from Canterbury reported an upheaval on the Moon shortly after sunset. This description appears outlandish, perhaps due to the writer's and viewers' lack of understanding of astronomical phenomena. In 1976, Jack Hartung proposed that this described the formation of the Giordano Bruno crater. However, more recent studies suggest that it appears very unlikely the 1178 event was related to the formation of Crater
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which of these events occurs about every three months?
A. high tide
B. new moon
C. new season
D. solar eclipse
Answer:
|
|
sciq-1486
|
multiple_choice
|
What is at the center of our solar system?
|
[
"the sun",
"the Kuiper Belt",
"the moon",
"earth"
] |
A
|
Relavent Documents:
Document 0:::
This article is a list of notable unsolved problems in astronomy. Some of these problems are theoretical, meaning that existing theories may be incapable of explaining certain observed phenomena or experimental results. Others are experimental, meaning that experiments necessary to test proposed theory or investigate a phenomenon in greater detail have not yet been performed. Some pertain to unique events or occurrences that have not repeated themselves and whose causes remain unclear.
Planetary astronomy
Our solar system
Orbiting bodies and rotation:
Are there any non-dwarf planets beyond Neptune?
Why do extreme trans-Neptunian objects have elongated orbits?
Rotation rate of Saturn:
Why does the magnetosphere of Saturn rotate at a rate close to that at which the planet's clouds rotate?
What is the rotation rate of Saturn's deep interior?
Satellite geomorphology:
What is the origin of the chain of high mountains that closely follows the equator of Saturn's moon, Iapetus?
Are the mountains the remnant of hot and fast-rotating young Iapetus?
Are the mountains the result of material (either from the rings of Saturn or its own ring) that over time collected upon the surface?
Extra-solar
How common are Solar System-like planetary systems? Some observed planetary systems contain Super-Earths and Hot Jupiters that orbit very close to their stars. Systems with Jupiter-like planets in Jupiter-like orbits appear to be rare. There are several possibilities why Jupiter-like orbits are rare, including that data is lacking or the grand tack hypothesis.
Stellar astronomy and astrophysics
Solar cycle:
How does the Sun generate its periodically reversing large-scale magnetic field?
How do other Sol-like stars generate their magnetic fields, and what are the similarities and differences between stellar activity cycles and that of the Sun?
What caused the Maunder Minimum and other grand minima, and how does the solar cycle recover from a minimum state?
Coronal heat
Document 1:::
A planetary system is a set of gravitationally bound non-stellar objects in or out of orbit around a star or star system. Generally speaking, systems with one or more planets constitute a planetary system, although such systems may also consist of bodies such as dwarf planets, asteroids, natural satellites, meteoroids, comets, planetesimals and circumstellar disks. The Sun together with the planetary system revolving around it, including Earth, forms the Solar System. The term exoplanetary system is sometimes used in reference to other planetary systems.
Debris disks are also known to be common, though other objects are more difficult to observe.
Of particular interest to astrobiology is the habitable zone of planetary systems where planets could have surface liquid water, and thus the capacity to support Earth-like life.
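As a rough quantitative handle on the habitable zone, a commonly used first-order scaling (an approximation, not something stated in the excerpt above) places the distance of Earth-equivalent stellar flux at the square root of the star's luminosity in solar units:

```python
def earth_equivalent_distance_au(luminosity_solar: float) -> float:
    """Distance at which a planet receives Earth-like flux, in AU (first-order)."""
    return luminosity_solar ** 0.5

print(earth_equivalent_distance_au(1.0))   # Sun-like star   -> 1.0 AU
print(earth_equivalent_distance_au(0.01))  # faint red dwarf -> 0.1 AU
```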
History
Heliocentrism
Historically, heliocentrism (the doctrine that the Sun is at the centre of the universe) was opposed to geocentrism (placing Earth at the centre of the universe).
The notion of a heliocentric Solar System with the Sun at its centre is possibly first suggested in the Vedic literature of ancient India, which often refers to the Sun as the "centre of spheres". Some interpret Aryabhata's writings in the Āryabhaṭīya as implicitly heliocentric.
The idea was first proposed in Western philosophy and Greek astronomy as early as the 3rd century BC by Aristarchus of Samos, but received no support from most other ancient astronomers.
Discovery of the Solar System
De revolutionibus orbium coelestium by Nicolaus Copernicus, published in 1543, presented the first mathematically predictive heliocentric model of a planetary system. 17th-century successors Galileo Galilei, Johannes Kepler, and Sir Isaac Newton developed an understanding of physics which led to the gradual acceptance of the idea that the Earth moves around the Sun and that the planets are governed by the same physical laws that governed Earth.
Speculation on extrasolar pla
Document 2:::
The Sweden Solar System is the world's largest permanent scale model of the Solar System. The Sun is represented by the Avicii Arena in Stockholm, the second-largest hemispherical building in the world. The inner planets can also be found in Stockholm but the outer planets are situated northward in other cities along the Baltic Sea. The system was started by Nils Brenning, professor at the Royal Institute of Technology in Stockholm, and Gösta Gahm, professor at Stockholm University. The model represents the Solar System on the scale of 1:20 million.
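To see what the 1:20 million scale implies for the model sizes described below, a short calculation using standard reference diameters (the real diameters are well-known values, not figures taken from this text):

```python
SCALE = 20e6  # 1:20,000,000
real_diameters_km = {"Sun": 1_391_000, "Earth": 12_742, "Mercury": 4_879}
for body, d_km in real_diameters_km.items():
    print(f"{body}: {d_km * 1000 / SCALE:.2f} m")
# Sun: ~69.6 m (the Avicii Arena globe is larger because the model includes
# the corona), Earth: ~0.64 m, Mercury: ~0.24 m
```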
The system
The bodies represented in this model include the Sun, the planets (and some of their moons), dwarf planets and many types of small bodies (comets, asteroids, trans-Neptunians, etc.), as well as some abstract concepts (like the Termination Shock zone). Because of the existence of many small bodies in the real Solar System, the model can always be further increased.
The Sun is represented by the Avicii Arena (Globen) in Stockholm, which is the second-largest hemispherical building in the world. To respect the scale, the globe represents the Sun including its corona.
Inner planets
Mercury is placed at Stockholm City Museum, at its scaled distance from the Globe. The small metallic sphere was built by the artist Peter Varhelyi.
Venus is placed at Vetenskapens Hus at KTH (Royal Institute of Technology), at its scaled distance from the Globe. The previous model, made by the United States artist Daniel Oberti, was inaugurated on 8 June 2004, during a Venus transit, and placed at KTH. It fell and shattered around 11 June 2011. Due to construction work at the location of the previous model of Venus, it was removed and as of October 2012 cannot be seen. The current model, now at Vetenskapens Hus, was previously located at the Observatory Museum in Stockholm (now closed).
Earth is located at the Swedish Museum of Natural History (Cosmonova), at its scaled distance from the Globe. Satellite images of the Earth are exhibited
Document 3:::
Astronomy education or astronomy education research (AER) refers both to the methods currently used to teach the science of astronomy and to an area of pedagogical research that seeks to improve those methods. Specifically, AER includes systematic techniques honed in science and physics education to understand what and how students learn about astronomy and determine how teachers can create more effective learning environments.
Education is important to astronomy as it impacts both the recruitment of future astronomers and the appreciation of astronomy by citizens and politicians who support astronomical research. Astronomy has been taught throughout much of recorded human history, and has practical application in timekeeping and navigation. Teaching astronomy contributes to an understanding of physics and the origin of the world around us, a shared cultural background, and a sense of wonder and exploration. It includes education of the general public through planetariums, books, and instructive presentations, plus programs and tools for amateur astronomy, and University-level degree programs for professional astronomers. Astronomy organizations provide educational functions and societies in about 100 nation states around the world.
In schools, particularly at the collegiate level, astronomy is aligned with physics and the two are often combined to form a Department of Physics and Astronomy. Some parts of astronomy education overlap with physics education, however, astronomy education has its own arenas, practitioners, journals, and research. This can be demonstrated in the identified 20-year lag between the emergence of AER and physics education research. The body of research in this field are available through electronic sources such as the Searchable Annotated Bibliography of Education Research (SABER) and the American Astronomical Society's database of the contents of their journal "Astronomy Education Review" (see link below).
The National Aeronautics and
Document 4:::
Knowledge of the location of Earth has been shaped by 400 years of telescopic observations, and has expanded radically since the start of the 20th century. Initially, Earth was believed to be the center of the Universe,
which consisted only of those planets visible with the naked eye and an outlying sphere of fixed stars. After the acceptance of the heliocentric model in the 17th century, observations by William Herschel and others showed that the Sun lay within a vast, disc-shaped galaxy of stars. By the 20th century, observations of spiral nebulae revealed that the Milky Way galaxy was one of billions in an expanding universe, grouped into clusters and superclusters. By the end of the 20th century, the overall structure of the visible universe was becoming clearer, with superclusters forming into a vast web of filaments and voids. Superclusters, filaments and voids are the largest coherent structures in the Universe that we can observe. At still larger scales (over 1000 megaparsecs) the Universe becomes homogeneous, meaning that all its parts have on average the same density, composition and structure.
Since there is believed to be no "center" or "edge" of the Universe, there is no particular reference point with which to plot the overall location of the Earth in the universe. Because the observable universe is defined as that region of the Universe visible to terrestrial observers, Earth is, because of the constancy of the speed of light, the center of Earth's observable universe. Reference can be made to the Earth's position with respect to specific structures, which exist at various scales. It is still undetermined whether the Universe is infinite. There have been numerous hypotheses that the known universe may be only one such example within a higher multiverse; however, no direct evidence of any sort of multiverse has been observed, and some have argued that the hypothesis is not falsifiable.
Details
Earth is the third planet from the Sun with an approximat
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is at the center of our solar system?
A. the sun
B. the Kuiper Belt
C. the moon
D. earth
Answer:
|
|
sciq-1171
|
multiple_choice
|
Bacterial STIs, including chlamydia, gonorrhea, and syphilis, are diseases that can usually be cured with what?
|
[
"antivirals",
"antioxidants",
"antihistamines",
"antibiotics"
] |
D
|
Relavent Documents:
Document 0:::
Sexually transmitted infections (STIs), also referred to as sexually transmitted diseases (STDs), are infections that are commonly spread by sexual activity, especially vaginal intercourse, anal sex and oral sex. The most prevalent STIs may be carried by a significant fraction of the human population.
Document 1:::
Infectious diseases or ID, also known as infectiology, is a medical specialty dealing with the diagnosis and treatment of infections. An infectious diseases specialist's practice consists of managing nosocomial (healthcare-acquired) infections or community-acquired infections. An ID specialist investigates the cause of a disease to determine what kind of bacteria, viruses, parasites, or fungi the disease is caused by. Once the pathogen is known, an ID specialist can then run various tests to determine the best antimicrobial drug to kill the pathogen and treat the disease. While infectious diseases have always been around, the infectious disease specialty did not exist until the late 1900s, after scientists and physicians in the 19th century paved the way with research on the sources of infectious disease and the development of vaccines.
Scope
Infectious diseases specialists typically serve as consultants to other physicians in cases of complex infections, and often manage patients with HIV/AIDS and other forms of immunodeficiency. Although many common infections are treated by physicians without formal expertise in infectious diseases, specialists may be consulted for cases where an infection is difficult to diagnose or manage. They may also be asked to help determine the cause of a fever of unknown origin.
Specialists in infectious diseases can practice both in hospitals (inpatient) and clinics (outpatient). In hospitals, specialists in infectious diseases help ensure the timely diagnosis and treatment of acute infections by recommending the appropriate diagnostic tests to identify the source of the infection and by recommending appropriate management such as prescribing antibiotics to treat bacterial infections. For certain types of infections, involvement of specialists in infectious diseases may improve patient outcomes. In clinics, specialists in infectious diseases can provide long-term care to patients with chronic infections such as HIV/AIDS.
History
Inf
Document 2:::
Those involved in the care of athletes should be alert to the possibility of getting an infectious disease for the following reasons:
There is the chance, or even the expectation, of contact or collision with another player, or the playing surface, which may be a mat or artificial turf.
The opportunities for skin breaks, obvious or subtle, are present and compromise skin defenses.
Young people congregate in dormitories, locker rooms, showers, etc.
There is the possibility of sharing personal toilet articles.
Equipment, such as gloves, pads, and protective gear, is difficult to sanitize and can become contaminated.
However, in many cases, the chance of infection can be reduced by relatively simple measures.
Herpes gladiatorum
Wrestlers use mats which are abrasive and the potential for a true contagion (Latin contagion-, contagio, from contingere to have contact with) is very real. The herpes simplex virus, type I, is very infectious and large outbreaks have been documented. A major epidemic threatened the 2007 Minnesota high school wrestling season, but was largely contained by instituting an eight-day isolation period during which time competition was suspended. Practices, such as 'weight cutting', which can at least theoretically reduce immunity, might potentiate the risk. In non-epidemic circumstances, herpes gladiatorum affects about 3% of high school wrestlers and 8% of collegiate wrestlers. There is the potential for prevention of infection, or at least containment, with antiviral agents which are effective in reducing the spread to other athletes when given to those who are herpes positive, or who have recurrent herpes gladiatorum.
The NCAA specifies that a wrestler must:
- be free of systemic symptoms (fever, malaise, etc.).
- have developed no new blisters for 72 hours before the examination.
- have no moist lesions; all lesions must be dried and have progressed to a FIRM ADHERENT CRUST.
- have been on appropriate systemic antiviral therapy for at lea
Document 3:::
Risk of infection is a nursing diagnosis which is defined as "the state in which an individual is at risk to be invaded by an opportunistic or pathogenic agent (virus, fungus, bacteria, protozoa, or other parasite) from endogenous or exogenous sources" and was approved by NANDA in 1986. Although anyone can become infected by a pathogen, patients with this diagnosis are at an elevated risk and extra infection controls should be considered.
Endogenous sources
The risk of infection depends on a number of endogenous sources.
Skin damage from incisions, as well as very young or old age, can increase a patient's risk of infection. Examples of risk factors include decreased immune function secondary to disease, compromised circulation secondary to peripheral vascular disease, compromised skin integrity secondary to surgery, or repeated contact with contagious agents.
Assessment
The patient should be asked about a history of repeated infections, symptoms of infection, recent travel to high-risk areas, and their immunization history. They should also be assessed for objective signs such as the presence of wounds, fever, or signs of nutritional deficiency.
Intervention
The specific nursing interventions will depend on the nature and severity of the risk. Patients should be taught how to recognize the signs of infection and how to reduce their risk. Surgery is a frequent risk factor for infection and a physician may prescribe antibiotics prophylactically. Immunization is another common medical intervention for those who are at high risk for infection.
Hand washing is the best way to break the chain of infection.
Document 4:::
An infection rate (or incident rate) is the probability or risk of an infection in a population. It is used to measure the frequency of occurrence of new instances of infection within a population during a specific time period.
The number of infections equals the cases identified or observed in the study; an example would be HIV infections during a specific time period in a defined population. The population at risk is the number of people in the defined population during the same time period, for example all the people in a city. The constant K is assigned a value of 100 so the rate can be expressed as a percentage. For example, to find the percentage of people in a city who are infected with HIV: 6,000 cases in March divided by the population of the city (one million), multiplied by the constant K, gives an infection rate of 0.6%.
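The worked example above, written as a small helper function:

```python
def infection_rate(cases: int, population: int, k: float = 100.0) -> float:
    """Cases divided by the population at risk, scaled by the constant K."""
    return cases / population * k

print(infection_rate(6_000, 1_000_000))  # 0.6 (percent), matching the text
```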
Calculating the infection rate is used to analyze trends for the purpose of infection and disease control. An online infection rate calculator has been developed by the Centers for Disease Control and Prevention that allows the determination of the Streptococcal A infection rate in a population.
Clinical applications
Health care facilities routinely track their infection rates according to the guidelines issued by the Joint Commission. The healthcare-associated infection (HAI) rates measure infection of patients in a particular hospital. This allows rates to compared with other hospitals. These infections can often be prevented when healthcare facilities follow guidelines for safe care. To get payment from Medicare, hospitals are required to report data about some infections to the Centers for Disease Control and Prevention's (CDC's) National Healthcare Safety Network (NHSN). Hospitals currently submit information on central line-associated bloodstream infections (CLABSIs), catheter-associated urinary tract infections (CAUTIs), surgical site infections (SSIs), MRSA Bacteremia, and C. difficile laboratory-i
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Bacterial STIs, including chlamydia, gonorrhea, and syphilis, are diseases that can usually be cured with what?
A. antivirals
B. antioxidants
C. antihistamines
D. antibiotics
Answer:
|
|
sciq-2964
|
multiple_choice
|
What is the term for behaviors that are closely controlled by genes with little or no environmental influence?
|
[
"learned behavior",
"innate behaviors",
"reflex behaviors",
"observational behaviors"
] |
B
|
Relavent Documents:
Document 0:::
Behavior (American English) or behaviour (British English) is the range of actions and mannerisms made by individuals, organisms, systems or artificial entities in some environment. These systems can include other systems or organisms as well as the inanimate physical environment. It is the computed response of the system or organism to various stimuli or inputs, whether internal or external, conscious or subconscious, overt or covert, and voluntary or involuntary.
Taking a behavior informatics perspective, a behavior consists of actor, operation, interactions, and their properties. This can be represented as a behavior vector.
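One possible rendering of that behavior vector as a data structure; the field names follow the text (actor, operation, interactions, properties), while the concrete types and example values are assumptions.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class BehaviorVector:
    actor: str
    operation: str
    interactions: list[str] = field(default_factory=list)
    properties: dict[str, Any] = field(default_factory=dict)

b = BehaviorVector(actor="customer", operation="purchase",
                   interactions=["browse", "add_to_cart"],
                   properties={"timestamp": "2024-01-01T12:00"})
print(b)
```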
Models
Biology
Although disagreement exists as to how to precisely define behavior in a biological context, one common interpretation based on a meta-analysis of scientific literature states that "behavior is the internally coordinated responses (actions or inactions) of whole living organisms (individuals or groups) to internal or external stimuli".
A broader definition of behavior, applicable to plants and other organisms, is similar to the concept of phenotypic plasticity. It describes behavior as a response to an event or environment change during the course of the lifetime of an individual, differing from other physiological or biochemical changes that occur more rapidly, and excluding changes that are a result of development (ontogeny).
Behaviors can be either innate or learned from the environment.
Behavior can be regarded as any action of an organism that changes its relationship to its environment. Behavior provides outputs from the organism to the environment.
Human behavior
The endocrine system and the nervous system likely influence human behavior. Complexity in the behavior of an organism may be correlated to the complexity of its nervous system. Generally, organisms with more complex nervous systems have a greater capacity to learn new responses and thus adjust their behavior.
Animal behavior
Ethology is the scientifi
Document 1:::
Observational learning is learning that occurs through observing the behavior of others. It is a form of social learning which takes various forms, based on various processes. In humans, this form of learning seems to not need reinforcement to occur, but instead, requires a social model such as a parent, sibling, friend, or teacher with surroundings. Particularly in childhood, a model is someone of authority or higher status in an environment. In animals, observational learning is often based on classical conditioning, in which an instinctive behavior is elicited by observing the behavior of another (e.g. mobbing in birds), but other processes may be involved as well.
Human observational learning
Many behaviors that a learner observes, remembers, and imitates are actions that models display and display modeling, even though the model may not intentionally try to instill a particular behavior. A child may learn to swear, smack, smoke, and deem other inappropriate behavior acceptable through poor modeling. Albert Bandura claims that children continually learn desirable and undesirable behavior through observational learning. Observational learning suggests that an individual's environment, cognition, and behavior all incorporate and ultimately determine how the individual functions and models.
Through observational learning, individual behaviors can spread across a culture through a process called diffusion chain. This basically occurs when an individual first learns a behavior by observing another individual and that individual serves as a model through whom other individuals learn the behavior, and so on.
Culture plays a role in whether observational learning is the dominant learning style in a person or community. Some cultures expect children to actively participate in their communities and are therefore exposed to different trades and roles on a daily basis. This exposure allows children to observe and learn the different skills and practices that are valued i
Document 2:::
Instinct is the inherent inclination of a living organism towards a particular complex behaviour, containing innate (inborn) elements. The simplest example of an instinctive behaviour is a fixed action pattern (FAP), in which a very short to medium length sequence of actions, without variation, are carried out in response to a corresponding clearly defined stimulus.
Any behaviour is instinctive if it is performed without being based upon prior experience (that is, in the absence of learning), and is therefore an expression of innate biological factors. Sea turtles, newly hatched on a beach, will instinctively move toward the ocean. A marsupial climbs into its mother's pouch upon being born. Other examples include animal fighting, animal courtship behaviour, internal escape functions, and the building of nests. Though an instinct is defined by its invariant innate characteristics, details of its performance can be changed by experience; for example, a dog can improve its listening skills by practice.
Instincts are inborn complex patterns of behaviour that exist in most members of the species, and should be distinguished from reflexes, which are simple responses of an organism to a specific stimulus, such as the contraction of the pupil in response to bright light or the spasmodic movement of the lower leg when the knee is tapped. The absence of volitional capacity must not be confused with an inability to modify fixed action patterns. For example, people may be able to modify a stimulated fixed action pattern by consciously recognizing the point of its activation and simply stop doing it, whereas animals without a sufficiently strong volitional capacity may not be able to disengage from their fixed action patterns, once activated.
Instinctual behaviour in humans has been studied.
Early theorists
Jean Henri Fabre
Jean Henri Fabre (1823–1915) is said to be the first person to study small animals (that weren't birds) and insects, and he specifically specialized i
Document 3:::
Behavioral plasticity refers to a change in an organism's behavior that results from exposure to stimuli, such as changing environmental conditions. Behavior can change more rapidly in response to changes in internal or external stimuli than is the case for most morphological traits and many physiological traits. As a result, when organisms are confronted by new conditions, behavioral changes often occur in advance of physiological or morphological changes. For instance, larval amphibians changed their antipredator behavior within an hour after a change in cues from predators, but morphological changes in body and tail shape in response to the same cues required a week to complete.
Background
For many years, ethologists have studied the ways that behavior can change in response to changes in external stimuli or changes in the internal state of an organism. In a parallel literature, psychologists studying learning and cognition have spent years documenting the many ways that experiences in the past can affect the behavior an individual expresses at the current time. Interest in behavioral plasticity gained prominence more recently as an example of a type of phenotypic plasticity with major consequences for evolutionary biology.
Types
Behavioral plasticity can be broadly organized into two types: exogenous and endogenous. Exogenous plasticity refers to the changes in behavioral phenotype (i.e., observable behaviors) caused by an external stimulus, experience, or environment. Endogenous plasticity encompasses plastic responses that result from changes in internal cues, such as genotype, circadian rhythms, and menstruation.
These two broad categories can be further broken down into two other important classifications. When an external stimulus elicits or "activates" an immediate response (an immediate effect on behavior), then the organism is demonstrating contextual plasticity. This form of plasticity highlights the concept that external stimuli in a given context
Document 4:::
Learning is the process of acquiring new understanding, knowledge, behaviors, skills, values, attitudes, and preferences. The ability to learn is possessed by humans, animals, and some machines; there is also evidence for some kind of learning in certain plants. Some learning is immediate, induced by a single event (e.g. being burned by a hot stove), but much skill and knowledge accumulate from repeated experiences. The changes induced by learning often last a lifetime, and it is hard to distinguish learned material that seems to be "lost" from that which cannot be retrieved.
Human learning starts at birth (it may even begin earlier, given an embryo's need for both interaction with, and freedom within, its environment in the womb) and continues until death as a consequence of ongoing interactions between people and their environment. The nature and processes involved in learning are studied in many established fields (including educational psychology, neuropsychology, experimental psychology, cognitive sciences, and pedagogy), as well as emerging fields of knowledge (e.g. with a shared interest in the topic of learning from safety events such as incidents/accidents, or in collaborative learning health systems). Research in such fields has led to the identification of various sorts of learning. For example, learning may occur as a result of habituation, classical conditioning, operant conditioning, or as a result of more complex activities such as play, seen only in relatively intelligent animals. Learning may occur consciously or without conscious awareness. Learning that an aversive event cannot be avoided or escaped may result in a condition called learned helplessness. There is evidence for human behavioral learning prenatally, in which habituation has been observed as early as 32 weeks into gestation, indicating that the central nervous system is sufficiently developed and primed for learning and memory to occur very early on in development.
Play h
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for behaviors that are closely controlled by genes with little or no environmental influence?
A. learned behavior
B. innate behaviors
C. reflex behaviors
D. observational behaviors
Answer:
|
|
sciq-3791
|
multiple_choice
|
Organisms that "love" acids are known as what?
|
[
"acidic",
"acidophobes",
"acidophiles",
"acid heads"
] |
C
|
Relavent Documents:
Document 0:::
Acidophiles or acidophilic organisms are those that thrive under highly acidic conditions (usually at pH 5.0 or below). These organisms can be found in different branches of the tree of life, including Archaea, Bacteria, and Eukarya.
Examples
A list of these organisms includes:
Archaea
Sulfolobales, an order in the Thermoproteota branch of Archaea
Thermoplasmatales, an order in the Euryarchaeota branch of Archaea
ARMAN, in the Euryarchaeota branch of Archaea
Acidianus brierleyi, A. infernus, facultatively anaerobic thermoacidophilic archaebacteria
Halarchaeum acidiphilum, acidophilic member of the Halobacteriacaeae
Metallosphaera sedula, thermoacidophilic
Bacteria
Acidobacteriota, a phylum of Bacteria
Acidithiobacillales, an order of Pseudomonadota e.g. A. ferrooxidans, A. thiooxidans
Thiobacillus prosperus, T. acidophilus, T. organovorus, T. cuprinus
Acetobacter aceti, a bacterium that produces acetic acid (vinegar) from the oxidation of ethanol.
Alicyclobacillus, a genus of bacteria that can contaminate fruit juices.
Eukarya
Mucor racemosus
Urotricha
Dunaliella acidophila
Members of the algal class Cyanidiophyceae, including Cyanidioschyzon merolae
Mechanisms of adaptation to acidic environments
Most acidophile organisms have evolved extremely efficient mechanisms to pump protons out of the intracellular space in order to keep the cytoplasm at or near neutral pH. Therefore, intracellular proteins do not need to develop acid stability through evolution. However, other acidophiles, such as Acetobacter aceti, have an acidified cytoplasm which forces nearly all proteins in the genome to evolve acid stability. For this reason, Acetobacter aceti has become a valuable resource for understanding the mechanisms by which proteins can attain acid stability.
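A short calculation of the proton gradient implied by keeping the cytoplasm near neutral, using the definition pH = -log10[H+]; the pH 2 environment is an illustrative assumption consistent with the definition of acidophiles above.

```python
external_ph, cytoplasmic_ph = 2.0, 7.0
fold_gradient = 10 ** (cytoplasmic_ph - external_ph)
print(f"[H+] outside is {fold_gradient:.0e}x higher than inside")  # 1e+05x
```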
Studies of proteins adapted to low pH have revealed a few general mechanisms by which proteins can achieve acid stability. In most acid stable proteins (such as pepsin and the soxF protein from Sulfol
Document 1:::
An acidophobe is an organism that is intolerant of acidic environments. The terms acidophobia, acidophoby and acidophobic are also used. The term acidophobe is variously applied to plants, bacteria, protozoa, animals, chemical compounds, etc. The antonymous term is acidophile.
Plants are known to be well-defined with respect to their pH tolerance, and only a small number of species thrive well under a broad range of acidity. Therefore the categorization acidophile/acidophobe is well-defined. Sometimes a complementary classification is used (calcicole/calcifuge, with calcicoles being "lime-loving" plants). In gardening, soil pH is a measure of acidity or alkalinity of soil, with pH = 7 indicating neutral soil. Therefore acidophobes would prefer a pH above 7. Acid intolerance of plants may be mitigated by lime addition and by calcium and nitrogen fertilizers.
Acidophobic species are used as a natural instrument of monitoring the degree of acidifying contamination of soil and watercourses. For example, when monitoring vegetation, a decrease of acidophobic species would be indicative of acid rain increase in the area. A similar approach is used with aquatic species.
Acidophobes
Whiteworms (Enchytraeus albidus), a popular live food for aquarists, are acidophobes.
Acidophobic compounds are those that are unstable in acidic media.
Acidophobic crops: alfalfa, clover
Document 2:::
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States.
Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. Currently the service is suspended with the message:
"Please check back with us in 2017".
External links
MicrobeLibrary
Microbiology
Document 3:::
The Investigative Biology Teaching Laboratories are located at Cornell University on the first floor of Comstock Hall. They are well-equipped biology teaching laboratories used to provide hands-on laboratory experience to Cornell undergraduate students. Currently, they are the home of the Investigative Biology Laboratory Course (BioG1500) and are frequently used by the Cornell Institute for Biology Teachers, the Disturbance Ecology course, and Insectapalooza. In the past, the Investigative Biology Teaching Laboratories hosted the laboratory portion of the Introductory Biology Course under the course number Bio103-104 (renumbered to BioG1103-1104).
The Investigative Biology Teaching Laboratories house the Science Communication and Public Engagement Undergraduate Minor.
History
Bio103-104
The BioG1103-1104 Biological Sciences Laboratory course was a two-semester, two-credit course. BioG1103 was offered in the spring, while BioG1104 was offered in the fall.
BioG1500
This course was first offered in Fall 2010. It is a one-semester, two-credit course, offered in the fall, spring, and summer. One credit is awarded for the lecture and one for the three-hour-long lab, following the SUNY system.
Document 4:::
A monogastric organism has a simple single-chambered stomach (one stomach). Examples of monogastric herbivores are horses and rabbits. Examples of monogastric omnivores include humans, pigs, hamsters and rats. Furthermore, there are monogastric carnivores such as cats. A monogastric organism is comparable to ruminant organisms (which have a four-chambered complex stomach), such as cattle, goats, or sheep. Herbivores with monogastric digestion can digest cellulose in their diets by way of symbiotic gut bacteria. However, their ability to extract energy from cellulose digestion is less efficient than in ruminants.
Herbivores digest cellulose by microbial fermentation. Monogastric herbivores which can digest cellulose nearly as well as ruminants are called hindgut fermenters, while ruminants are called foregut fermenters. These are subdivided into two groups based on the relative size of various digestive organs in relationship to the rest of the system: colonic fermenters tend to be larger species such as horses and rhinos, and cecal fermenters are smaller animals such as rabbits and rodents. Great apes derive significant amounts of phytanic acid from the hindgut fermentation of plant materials.
Monogastrics cannot digest the fiber molecule cellulose as efficiently as ruminants, though the ability to digest cellulose varies amongst species.
A monogastric digestive system works as soon as the food enters the mouth. Saliva moistens the food and begins the digestive process. (Note that horses have no (or negligible amounts of) amylase in their saliva). After being swallowed, the food passes from the esophagus into the stomach, where stomach acid and enzymes help to break down the food. Once food leaves the stomach and enters the small intestine, the pancreas secretes enzymes and alkali to neutralize the stomach acid.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Organisms that "love" acids are known as what?
A. acidic
B. acidophobes
C. acidophiles
D. acid heads
Answer:
|
|
ai2_arc-1093
|
multiple_choice
|
A lake has been used for more than a century to irrigate crops. How has this practice most likely affected this resource?
|
[
"It decreased the salt content of the water.",
"It increased the evaporation rate of the water.",
"It increased the number of fish in the lake.",
"It decreased the volume of the lake."
] |
D
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
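A compact justification of the intended answer ("decreases"), added here for reference and assuming a reversible (quasi-static) expansion rather than a free expansion, follows from the adiabatic relation for an ideal gas:

```latex
% Reversible adiabatic process for an ideal gas (heat capacity ratio \gamma > 1):
T_1 V_1^{\gamma - 1} = T_2 V_2^{\gamma - 1}
\quad\Longrightarrow\quad
T_2 = T_1 \left(\frac{V_1}{V_2}\right)^{\gamma - 1} < T_1
\quad \text{whenever } V_2 > V_1 .
```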
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The SAT Subject Test in Biology was the name of a one-hour multiple-choice test given on biology by the College Board. A student chose whether to take the test depending upon the college entrance requirements of the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests, and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology, and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average score for Molecular was 630, while that for Ecological was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
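A minimal sketch of the raw-score rule described above (the conversion from raw score to the 200–800 scale used official tables that are not reproduced here):

```python
def raw_score(correct: int, incorrect: int, blank: int) -> float:
    """Formula scoring: +1 per correct answer, -1/4 per incorrect answer,
    0 for blanks. The Biology E/M test had 80 questions in total."""
    assert correct + incorrect + blank == 80
    return correct - 0.25 * incorrect

# Example: 60 correct, 12 incorrect, 8 blank -> raw score 57.0
print(raw_score(60, 12, 8))
```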
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college careers at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods included
Document 3:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 4:::
Lake 226 is one lake in Canada's Experimental Lakes Area (ELA) in Ontario. The ELA is a freshwater and fisheries research facility that conducted these experiments alongside Fisheries and Oceans Canada and Environment Canada. In 1968 this area in northwest Ontario was set aside for limnological research, aiming to study the watersheds of the 58 small lakes in this area. The ELA projects began as a response to the claim that carbon, rather than phosphorus, was the limiting agent causing eutrophication of lakes, and that monitoring phosphorus in the water would be a waste of money. This claim was made by soap and detergent companies, whose products do not biodegrade and can cause a buildup of phosphates in water supplies, leading to eutrophication. The theory that carbon was the limiting agent was quickly debunked by the ELA Lake 227 experiment that began in 1969, which found that carbon could be drawn from the atmosphere to remain proportional to the input of phosphorus in the water. Experimental Lake 226 was then created to test phosphorus' impact on eutrophication by itself.
Lake ecosystem
Geography
The ELA lakes were far from human activities, therefore allowing the study of environmental conditions without human interaction. Lake 226 was specifically studied over a four-year period, from 1973 to 1977, to test eutrophication. Lake 226 itself is a 16.2 ha double-basin lake located on highly metamorphosed granite known as Precambrian granite. The depth of the lake was measured in 1994 to be 14.7 m for the northeast basin and 11.6 m for the southeast basin. Lake 226 had a total lake volume of 9.6 × 10^5 m3, prior to the lake being additionally studied for drawdown alongside other ELA lakes. Due to the relatively small fetch of Lake 226, wind action is minimized, preventing resuspension of epilimnetic sediments.
Eutrophication experiment
To test the effects of fertilization on water quality and algae blooms, Lake 226 was split in half with a curtain. This curtain divi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A lake has been used for more than a century to irrigate crops. How has this practice most likely affected this resource?
A. It decreased the salt content of the water.
B. It increased the evaporation rate of the water.
C. It increased the number of fish in the lake.
D. It decreased the volume of the lake.
Answer:
|
|
sciq-4285
|
multiple_choice
|
While in the lungs, blood gives up carbon dioxide and picks up what element before returning to the heart?
|
[
"oxygen",
"hydrogen",
"nitrogen",
"methane"
] |
A
|
Relevant Documents:
Document 0:::
In acid–base physiology, the Davenport diagram is a graphical tool, developed by Horace W. Davenport, that allows a clinician or investigator to describe blood bicarbonate concentrations and blood pH following a respiratory and/or metabolic acid–base disturbance. The diagram depicts a three-dimensional surface describing all possible states of chemical equilibria between gaseous carbon dioxide, aqueous bicarbonate and aqueous protons at the physiologically complex interface of the alveoli of the lungs and the alveolar capillaries. Although the surface represented in the diagram is experimentally determined, the Davenport diagram is rarely used in the clinical setting, but it allows the investigator to envision the effects of physiological changes on blood acid–base chemistry. For clinical use there are two recent innovations: an acid–base diagram that provides text descriptions of the abnormalities, and a high-altitude version that provides text descriptions appropriate for the altitude.
Derivation
When a sample of blood is exposed to air, either in the alveoli of the lung or in an in vitro laboratory experiment, carbon dioxide in the air rapidly enters into equilibrium with carbon dioxide derivatives and other species in the aqueous solution. Figure 1 illustrates the most important equilibrium reactions of carbon dioxide in blood relating to acid-base physiology:
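The equilibrium reactions themselves are not reproduced in this excerpt; a standard reconstruction, consistent with the note below about the HB/B− buffer system, is:

```latex
\mathrm{CO_2 + H_2O \;\rightleftharpoons\; H_2CO_3 \;\rightleftharpoons\; H^+ + HCO_3^-},
\qquad
\mathrm{H^+ + B^- \;\rightleftharpoons\; HB}
```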
Note that in this equation, the HB/B- buffer system represents all non-bicarbonate buffers present in the blood, such as hemoglobin in its various protonated and deprotonated states. Because many different non-bicarbonate buffers are present in human blood, the final equilibrium state reached at any given pCO2 is highly complex and cannot be readily predicted using theory alone. By depicting experimental results, the Davenport diagram provides a simple approach to describing the behavior of this complex system.
Figure 2 shows a Davenport diagram as commonly depicted in textbooks and the literature. To un
Document 1:::
The control of ventilation is the physiological mechanisms involved in the control of breathing, which is the movement of air into and out of the lungs. Ventilation facilitates respiration. Respiration refers to the utilization of oxygen and balancing of carbon dioxide by the body as a whole, or by individual cells in cellular respiration.
The most important function of breathing is the supplying of oxygen to the body and balancing of the carbon dioxide levels. Under most conditions, the partial pressure of carbon dioxide (PCO2), or concentration of carbon dioxide, controls the respiratory rate.
The peripheral chemoreceptors that detect changes in the levels of oxygen and carbon dioxide are located in the arterial aortic bodies and the carotid bodies. Central chemoreceptors are primarily sensitive to changes in the pH of the blood, (resulting from changes in the levels of carbon dioxide) and they are located on the medulla oblongata near to the medullar respiratory groups of the respiratory center.
Information from the peripheral chemoreceptors is conveyed along nerves to the respiratory groups of the respiratory center. There are four respiratory groups, two in the medulla and two in the pons. The two groups in the pons are known as the pontine respiratory group.
Dorsal respiratory group – in the medulla
Ventral respiratory group – in the medulla
Pneumotaxic center – various nuclei of the pons
Apneustic center – nucleus of the pons
From the respiratory center, the muscles of respiration, in particular the diaphragm, are activated to cause air to move in and out of the lungs.
Control of respiratory rhythm
Ventilatory pattern
Breathing is normally an unconscious, involuntary, automatic process. The pattern of motor stimuli during breathing can be divided into an inhalation stage and an exhalation stage. Inhalation shows a sudden, ramped increase in motor discharge to the respiratory muscles (and the pharyngeal constrictor muscles). Before the end of inh
Document 2:::
Chloride shift (also known as the Hamburger phenomenon or lineas phenomenon, named after Hartog Jakob Hamburger) is a process which occurs in a cardiovascular system and refers to the exchange of bicarbonate (HCO3−) and chloride (Cl−) across the membrane of red blood cells (RBCs).
Mechanism
Carbon dioxide (CO2) is produced in tissues as a byproduct of normal metabolism. It dissolves in the solution of blood plasma and into red blood cells (RBC), where carbonic anhydrase catalyzes its hydration to carbonic acid (H2CO3). Carbonic acid then spontaneously dissociates to form bicarbonate Ions (HCO3−) and a hydrogen ion (H+). In response to the decrease in intracellular pCO2, more CO2 passively diffuses into the cell.
Cell membranes are generally impermeable to charged ions (i.e. H+, HCO3− ) but RBCs are able to exchange bicarbonate for chloride using the anion exchanger protein Band 3. Thus, the rise in intracellular bicarbonate leads to bicarbonate export and chloride intake. The term "chloride shift" refers to this exchange. Consequently, chloride concentration is lower in systemic venous blood than in systemic arterial blood: high venous pCO2 leads to bicarbonate production in RBCs, which then leaves the RBC in exchange for chloride coming in.
The opposite process occurs in the pulmonary capillaries of the lungs when the PO2 rises and PCO2 falls, and the Haldane effect occurs (release of CO2 from hemoglobin during oxygenation). This releases hydrogen ions from hemoglobin, increases free H+ concentration within RBCs, and shifts the equilibrium towards CO2 and water formation from bicarbonate. The subsequent decrease in intracellular bicarbonate concentration reverses chloride-bicarbonate exchange: bicarbonate moves into the cell in exchange for chloride moving out. Inward movement of bicarbonate via the Band 3 exchanger allows carbonic anhydrase to convert it to CO2 for expiration.
The chloride shift may also regulate the affinity of hemoglobin for oxygen through t
Document 3:::
When we sleep, our breathing changes due to normal biological processes that affect both our respiratory and muscular systems.
Physiology
Sleep Onset
Breathing changes as we transition from wakefulness to sleep. These changes arise due to biological changes in the processes that regulate our breathing. When we fall asleep, minute ventilation (the amount of air that we breathe per minute) decreases because of reduced metabolism.
Non-REM (NREM) Sleep
During NREM sleep, we move through three sleep stages, with each progressively deeper than the last. As our sleep deepens, our minute ventilation continues to decrease, reducing by 13% in the second NREM stage and by 15% in the third. For example, a study of 19 healthy adults revealed that the minute ventilation in NREM sleep was 7.18 liters/minute compared to 7.66 liters/minute when awake.
Ribcage & Abdominal Muscle Contributions
Rib cage contribution to ventilation increases during NREM sleep, mostly by lateral movement, and is detected by an increase in EMG amplitude during breathing. Diaphragm activity is little increased or unchanged and abdominal muscle activity is slightly increased during these sleep stages.
Upper Airway Resistance
Airway resistance increases by about 230% during NREM sleep. Elastic and flow resistive properties of the lung do not change during NREM sleep. The increase in resistance comes primarily from the upper airway in the retro-epiglottic region. Tonic activity of the pharyngeal dilator muscles of the upper airway decreases during the NREM sleep, contributing to the increased resistance, which is reflected in increased esophageal pressure swings during sleep. The other ventilatory muscles compensate for the increased resistance, and so the airflow decreases much less than the increase in resistance.
Arterial Blood Gases
The arterial blood gases change: pCO2 increases by 3–7 mmHg, pO2 drops by 3–9 mmHg, and SaO2 drops by 2% or less. These changes occur despite a reduced metabolic rate, reflected by a
Document 4:::
Pulmonary pathology is the subspecialty of surgical pathology which deals with the diagnosis and characterization of neoplastic and non-neoplastic diseases of the lungs and thoracic pleura. Diagnostic specimens are often obtained via bronchoscopic transbronchial biopsy, CT-guided percutaneous biopsy, or video-assisted thoracic surgery (VATS). The diagnosis of inflammatory or fibrotic diseases of the lungs is considered by many pathologists to be particularly challenging.
Anatomical pathology
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
While in the lungs, blood gives up carbon dioxide and picks up what element before returning to the heart?
A. oxygen
B. hydrogen
C. nitrogen
D. methane
Answer:
|
|
sciq-10639
|
multiple_choice
|
What was the first widely used antibiotic?
|
[
"penicillin",
"alcohol",
"benadryl",
"aspirin"
] |
A
|
Relevant Documents:
Document 0:::
1972 – amoxicillin
1972 – cefradine
1972 – minocycline
1972 – pristinamycin
1973 – fosfomycin
1974 – talampicillin
1975 – tobramycin
1975 – bacampicillin
1975 – ticarcillin
1976 – amikacin
1977 – azlocillin
1977 – cefadroxil
1977 – cefamandole
1977 – cefoxitin
1977 – c
Document 1:::
This is the timeline of modern antimicrobial (anti-infective) therapy. The years show when a given drug was released onto the pharmaceutical market. This is not a timeline of the development of the antibiotics themselves.
Document 2:::
An antibiotic is a type of antimicrobial substance active against bacteria. It is the most important type of antibacterial agent for fighting bacterial infections, and antibiotic medications are widely used in the treatment and prevention of such infections. They may either kill or inhibit the growth of bacteria. A limited number of antibiotics also possess antiprotozoal activity. Antibiotics are not effective against viruses such as the common cold or influenza; drugs which inhibit growth of viruses are termed antiviral drugs or antivirals rather than antibiotics. They are also not effective against fungi; drugs which inhibit growth of fungi are called antifungal drugs.
Sometimes, the term antibiotic—literally "opposing life", from the Greek roots ἀντι anti, "against" and βίος bios, "life"—is broadly used to refer to any substance used against microbes, but in the usual medical usage, antibiotics (such as penicillin) are those produced naturally (by one microorganism fighting another), whereas non-antibiotic antibacterials (such as sulfonamides and antiseptics) are fully synthetic. However, both classes have the same goal of killing or preventing the growth of microorganisms, and both are included in antimicrobial chemotherapy. "Antibacterials" include antiseptic drugs, antibacterial soaps, and chemical disinfectants, whereas antibiotics are an important class of antibacterials used more specifically in medicine and sometimes in livestock feed.
Antibiotics have been used since ancient times. Many civilizations used topical application of moldy bread, with many references to its beneficial effects arising from ancient Egypt, Nubia, China, Serbia, Greece, and Rome. The first person to directly document the use of molds to treat infections was John Parkinson (1567–1650). Antibiotics revolutionized medicine in the 20th century. Alexander Fleming (1881–1955) discovered modern day penicillin in 1928, the widespread use of which proved significantly beneficial during wa
Document 3:::
1911 – Arsphenamine a.k.a. Salvarsan
1912 – Neosalvarsan
1935 – Prontosil (an oral precursor to sulfanilamide), the first sulfonamide
1936 – Sulfanilamide
1938 – Sulfapyridine (M&B 693)
1939 – sulfacetamide
1940 – sulfamethizole
1942 – benzylpenicillin, the first penicillin
1942 – gramicidin S, the first peptide antibiotic
1942 – sulfadimidine
1943 – sulfamerazine
1944 – streptomycin, the first aminoglycoside
1947 – sulfadiazine
1948 – chlortetracycline, the first tetracycline
1949 – chloramphenicol, the first amphenicol
1949 – neomycin
1950 – oxytetracycline
1950 – penicillin G procaine
1952 – erythromycin, the first macrolide
1954 – benzathine penicillin
1955 – spiramycin
1955 – tetracycline
1955 – thiamphenicol
1955 – vancomycin, the first glycopeptide
1956 – phenoxymethylpenicillin
1958 – colistin, the first polymyxin
1958 – demeclocycline
1959 – virginiamycin
1960 – methicillin
1960 – metronidazole, the first nitroimidazole
1961 – ampicillin
1961 – spectinomycin
1961 – sulfamethoxazole
1961 – trimethoprim, the first dihydrofolate reductase inhibitor
1962 – oxacillin
1962 – cloxacillin
1962 – fusidic acid
1963 – fusafungine
1963 – lymecycline
1964 – gentamicin
1964 – cefalotin, the first cephalosporin
1966 – doxycycline
1967 – carbenicillin
1967 – rifampicin
1967 – nalidixic acid, the first quinolone
1968 – clindamycin, the second lincosamide
1970 – cefalexin
1971 – cefazolin
1971 – pivampicillin
1971 – tinidazole
1972 – amoxicillin
1972 – cefradine
1972 – minocycline
1972 – pristinamycin
1973 – fosfomycin
1974 – talampicillin
1975 – tobramycin
1975 – bacampicillin
Document 4:::
Production of antibiotics is a naturally occurring event that, thanks to advances in science, can now be replicated and improved upon in laboratory settings. Due to the discovery of penicillin by Alexander Fleming, and the efforts of Florey and Chain in 1938, large-scale pharmaceutical production of antibiotics has been made possible. As with the initial discovery of penicillin, most antibiotics have been discovered as a result of happenstance. Antibiotic production can be grouped into three methods: natural fermentation, semi-synthetic, and synthetic. As more and more bacteria continue to develop resistance to currently produced antibiotics, research and development of new antibiotics continues to be important. In addition to research and development into the production of new antibiotics, repackaging delivery systems is important to improving the efficacy of the antibiotics that are currently produced. Improvements to this field have seen the ability to add antibiotics directly into implanted devices, aerosolization of antibiotics for direct delivery, and combination of antibiotics with non-antibiotics to improve outcomes. The increase of antibiotic-resistant strains of pathogenic bacteria has led to an increased urgency for the funding of research and development of antibiotics and a desire for production of new and better-acting antibiotics.
Identifying useful antibiotics
Despite the wide variety of known antibiotics, less than 1% of antimicrobial agents have medical or commercial value. For example, whereas penicillin has a high therapeutic index as it does not generally affect human cells, this is not so for many antibiotics. Other antibiotics simply lack advantage over those already in use, or have no other practical applications.
Useful antibiotics are often discovered using a screening process. To conduct such a screen, isolates of many different microorganisms are cultured and then tested for production of diffusible products that inhibit the growth of t
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What was the first widely used antibiotic?
A. penicillin
B. alcohol
C. benadryl
D. aspirin
Answer:
|
|
sciq-2149
|
multiple_choice
|
What do scientists believe are the oldest eukaryotes?
|
[
"prokaryotes",
"protists",
"arthropods",
"worms"
] |
B
|
Relevant Documents:
Document 0:::
The eukaryotes () constitute the domain of Eukarya, organisms whose cells have a membrane-bound nucleus. All animals, plants, fungi, and many unicellular organisms are eukaryotes. They constitute a major group of life forms alongside the two groups of prokaryotes: the Bacteria and the Archaea. Eukaryotes represent a small minority of the number of organisms, but due to their generally much larger size, their collective global biomass is much larger than that of prokaryotes.
The eukaryotes seemingly emerged in the Archaea, within the Asgard archaea. This implies that there are only two domains of life, Bacteria and Archaea, with eukaryotes incorporated among the Archaea. Eukaryotes emerged approximately 2.2 billion years ago, during the Proterozoic eon, likely as flagellated cells. The leading evolutionary theory is that they were created by symbiogenesis between an anaerobic Asgard archaean and an aerobic proteobacterium, which formed the mitochondria. A second episode of symbiogenesis with a cyanobacterium created the plants, with chloroplasts. The oldest-known eukaryote fossils, multicellular planktonic organisms belonging to the Gabonionta, were discovered in Gabon in 2023, dating back to 2.1 billion years ago.
Eukaryotic cells contain membrane-bound organelles such as the nucleus, the endoplasmic reticulum, and the Golgi apparatus. Eukaryotes may be either unicellular or multicellular. In comparison, prokaryotes are typically unicellular. Unicellular eukaryotes are sometimes called protists. Eukaryotes can reproduce both asexually through mitosis and sexually through meiosis and gamete fusion (fertilization).
Diversity
Eukaryotes are organisms that range from microscopic single cells, such as picozoans under 3 micrometres across, to animals like the blue whale, weighing up to 190 tonnes and measuring up to long, or plants like the coast redwood, up to tall. Many eukaryotes are unicellular; the informal grouping called protists includes many of these, with some
Document 1:::
Eukaryogenesis, the process which created the eukaryotic cell and lineage, is a milestone in the evolution of life, since eukaryotes include all complex cells and almost all multicellular organisms. The process is widely agreed to have involved symbiogenesis, in which archaea and bacteria came together to create the first eukaryotic common ancestor (FECA). This cell had a new level of complexity and capability, with a nucleus, at least one centriole and cilium, facultatively aerobic mitochondria, sex (meiosis and syngamy), a dormant cyst with a cell wall of chitin and/or cellulose and peroxisomes. It evolved into a population of single-celled organisms that included the last eukaryotic common ancestor (LECA), gaining capabilities along the way, though the sequence of the steps involved has been disputed, and may not have started with symbiogenesis. In turn, the LECA gave rise to the eukaryotes' crown group, containing the ancestors of animals, fungi, plants, and a diverse range of single-celled organisms.
Context
Life arose on Earth once it had cooled enough for oceans to form. The last universal common ancestor (LUCA) was an organism which had ribosomes and the genetic code; it lived some 4 billion years ago. It gave rise to two main branches of prokaryotic life, the bacteria and the archaea. From among these small-celled, rapidly-dividing ancestors arose the Eukaryotes, with much larger cells, nuclei, and distinctive biochemistry. The eukaryotes form a domain that contains all complex cells and most types of multicellular organism, including the animals, plants, and fungi.
Symbiogenesis
According to the theory of symbiogenesis (also known as the endosymbiotic theory) championed by Lynn Margulis, a member of the archaea gained a bacterial cell as a component. The archaeal cell was a member of the Asgard group. The bacterium was one of the Alphaproteobacteria, which had the ability to use oxygen in its respiration. This enabled it – and the archaeal cells that
Document 2:::
Scientists trying to reconstruct evolutionary history have been challenged by the fact that genes can sometimes transfer between distant branches on the tree of life. This movement of genes can occur through horizontal gene transfer (HGT), scrambling the information on which biologists relied to reconstruct the phylogeny of organisms. Conversely, HGT can also help scientists to reconstruct and date the tree of life. Indeed, a gene transfer can be used as a phylogenetic marker, or as the proof of contemporaneity of the donor and recipient organisms, and as a trace of extinct biodiversity.
HGT happens very infrequently – at the individual organism level, it is highly improbable for any such event to take place. However, on the grander scale of evolutionary history, these events occur with some regularity. On one hand, this forces biologists to abandon the use of individual genes as good markers for the history of life. On the other hand, this provides an almost unexploited large source of information about the past.
Three domains of life
The three main early branches of the tree of life have been intensively studied by microbiologists because the first organisms were microorganisms. Microbiologists (led by Carl Woese) have introduced the term domain for the three main branches of this tree, where domain is a phylogenetic term similar in meaning to biological kingdom. To reconstruct this tree of life, the gene sequence encoding the small subunit of ribosomal RNA (SSU rRNA, 16s rRNA) has proven useful, and the tree (as shown in the picture) relies heavily on information from this single gene.
These three domains of life represent the main evolutionary lineages of early cellular life and currently include Bacteria, Archaea (single-celled organisms superficially similar to bacteria), and Eukarya. Eukarya includes only organisms having a well-defined nucleus, such as fungi, protists, and all organisms in the plant and animals kingdoms (see figure).
The gene most com
Document 3:::
The eocyte hypothesis in evolutionary biology proposes that the eukaryotes originated from a group of prokaryotes called eocytes (later classified as Thermoproteota, a group of archaea). After his team at the University of California, Los Angeles discovered eocytes in 1984, James A. Lake formulated the hypothesis as "eocyte tree" that proposed eukaryotes as part of archaea. Lake hypothesised the tree of life as having only two primary branches: Parkaryoates that include Bacteria and Archaea, and karyotes that comprise Eukaryotes and eocytes. Parts of this early hypothesis were revived in a newer two-domain system of biological classification which named the primary domains as Archaea and Bacteria.
Lake's hypothesis was based on an analysis of the structural components of ribosomes. It was largely ignored, being overshadowed by the three-domain system which relied on more precise genetic analysis. In 1990, Carl Woese and his colleagues proposed that cellular life consists of three domains – Eucarya, Bacteria, and Archaea – based on the ribosomal RNA sequences. The three-domain concept was widely accepted in genetics, and became the presumptive classification system for high-level taxonomy, and was promulgated in many textbooks.
Resurgence of archaea research after the 2000s, using advanced genetic techniques, and later discoveries of new groups of archaea revived the eocyte hypothesis; consequently, the two-domain system has found wider acceptance.
Description
In 1984, James A. Lake, Michael W. Clark, Eric Henderson, and Melanie Oakes of the University of California, Los Angeles described a new group of prokaryotic organisms designated as "a group of sulfur-dependent bacteria." Based on the structure and composition of their ribosomal subunits, they found that these organisms were different from other prokaryotes, bacteria and archaea, known at the time. They named them eocytes (for "dawn cells") and proposed a new biological kingdom Eocyta. According to this disc
Document 4:::
The smallest organisms found on Earth can be determined according to various aspects of organism size, including volume, mass, height, length, or genome size.
Given the incomplete nature of scientific knowledge, it is possible that the smallest organism is undiscovered. Furthermore, there is some debate over the definition of life, and what entities qualify as organisms; consequently the smallest known organism (microorganism) is debatable.
Microorganisms
Obligate endosymbiotic bacteria
The genome of Nasuia deltocephalinicola, a symbiont of the European pest leafhopper, Macrosteles quadripunctulatus, consists of a circular chromosome of 112,031 base pairs.
The genome of Nanoarchaeum equitans is 491 kbp long.
Pelagibacter ubique
Pelagibacter ubique is one of the smallest known free-living bacteria, with a length of and an average cell diameter of . They also have the smallest free-living bacterium genome: 1.3 Mbp, 1354 protein genes, 35 RNA genes. They are one of the most common and smallest organisms in the ocean, with their total weight exceeding that of all fish in the sea.
Mycoplasma genitalium
Mycoplasma genitalium, a parasitic bacterium which lives in the primate bladder, waste disposal organs, genital, and respiratory tracts, is thought to be the smallest known organism capable of independent growth and reproduction. With a size of approximately 200 to 300 nm, M. genitalium is an ultramicrobacterium, smaller than other small bacteria, including rickettsia and chlamydia. However, the vast majority of bacterial strains have not been studied, and the marine ultramicrobacterium Sphingomonas sp. strain RB2256 is reported to have passed through an ultrafilter. A complicating factor is nutrient-downsized bacteria, bacteria that become much smaller due to a lack of available nutrients.
Nanoarchaeum
Nanoarchaeum equitans is a species of microbe about 400 nm in diameter. It was discovered in 2002 in a hydrothermal vent off the coast of Iceland by Karl Stet
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do scientists believe are the oldest eukaryotes?
A. prokaryotes
B. protists
C. arthropods
D. worms
Answer:
|
|
sciq-11015
|
multiple_choice
|
Which two states of matter have definite volumes?
|
[
"bacteria and liquids",
"GAS AND LIQUIDS",
"contrasts and liquids",
"solids and liquids"
] |
D
|
Relevant Documents:
Document 0:::
In chemistry and related fields, the molar volume, symbol Vm, of a substance is the ratio of the volume occupied by a substance to the amount of substance, usually at a given temperature and pressure. It is equal to the molar mass (M) divided by the mass density (ρ): Vm = M/ρ.
The molar volume has the SI unit of cubic metres per mole (m3/mol), although it is more typical to use the units cubic decimetres per mole (dm3/mol) for gases, and cubic centimetres per mole (cm3/mol) for liquids and solids.
Definition
The molar volume of a substance i is defined as its molar mass divided by its density ρi0:
For an ideal mixture containing N components, the molar volume of the mixture is the weighted sum of the molar volumes of its individual components. For a real mixture the molar volume cannot be calculated without knowing the density:
There are many liquid–liquid mixtures, for instance mixing pure ethanol and pure water, which may experience contraction or expansion upon mixing. This effect is represented by the quantity excess volume of the mixture, an example of excess property.
Relation to specific volume
Molar volume is related to specific volume by the product with molar mass. This follows from above where the specific volume is the reciprocal of the density of a substance:
Ideal gases
For ideal gases, the molar volume is given by the ideal gas equation; this is a good approximation for many common gases at standard temperature and pressure.
The ideal gas equation can be rearranged to give an expression for the molar volume of an ideal gas:
Hence, for a given temperature and pressure, the molar volume is the same for all ideal gases and is based on the gas constant: R = 8.31446 J⋅K−1⋅mol−1, or about 8.20574 × 10−5 m3⋅atm⋅K−1⋅mol−1.
The molar volume of an ideal gas at 100 kPa (1 bar) is
22.711 dm3/mol at 0 °C,
24.790 dm3/mol at 25 °C.
The molar volume of an ideal gas at 1 atmosphere of pressure is
22.414 dm3/mol at 0 °C,
24.466 dm3/mol at 25 °C.
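A minimal sketch verifying these values from the rearranged ideal gas law, Vm = RT/p:

```python
R = 8.31446  # molar gas constant, J/(mol*K)

def molar_volume_dm3(t_celsius: float, p_pascal: float) -> float:
    """Molar volume of an ideal gas, V_m = RT/p, in dm^3/mol."""
    t_kelvin = t_celsius + 273.15
    return R * t_kelvin / p_pascal * 1000  # m^3/mol -> dm^3/mol

print(round(molar_volume_dm3(0, 100_000), 3))   # 22.711 (0 degC, 1 bar)
print(round(molar_volume_dm3(25, 101_325), 3))  # 24.466 (25 degC, 1 atm)
```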
Crystalline solids
For crystalline solids, the molar volume can be measured by X-ray crystallography.
The unit cell
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
A liquid is a nearly incompressible fluid that conforms to the shape of its container but retains a nearly constant volume independent of pressure. It is one of the four fundamental states of matter (the others being solid, gas, and plasma), and is the only state with a definite volume but no fixed shape.
The density of a liquid is usually close to that of a solid, and much higher than that of a gas. Therefore, liquid and solid are both termed condensed matter. On the other hand, as liquids and gases share the ability to flow, they are both called fluids.
A liquid is made up of tiny vibrating particles of matter, such as atoms, held together by intermolecular bonds. Like a gas, a liquid is able to flow and take the shape of a container. Unlike a gas, a liquid maintains a fairly constant density and does not disperse to fill every space of a container.
Although liquid water is abundant on Earth, this state of matter is actually the least common in the known universe, because liquids require a relatively narrow temperature/pressure range to exist. Most known matter in the universe is either gas (as interstellar clouds) or plasma (as stars).
Introduction
Liquid is one of the four primary states of matter, with the others being solid, gas and plasma. A liquid is a fluid. Unlike a solid, the molecules in a liquid have a much greater freedom to move. The forces that bind the molecules together in a solid are only temporary in a liquid, allowing a liquid to flow while a solid remains rigid.
A liquid, like a gas, displays the properties of a fluid. A liquid can flow, assume the shape of a container, and, if placed in a sealed container, will distribute applied pressure evenly to every surface in the container. If liquid is placed in a bag, it can be squeezed into any shape. Unlike a gas, a liquid is nearly incompressible, meaning that it occupies nearly a constant volume over a wide range of pressures; it does not generally expand to fill available space in a containe
Document 3:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
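A small illustration of this structure (hypothetical skill names, not drawn from the sources above): a knowledge space is conventionally a family of feasible knowledge states closed under union, a property that can be checked directly.

```python
from itertools import combinations

# Hypothetical space over skills {a, b, c}, where skill "c" has "a"
# as a prerequisite: every feasible state containing "c" contains "a".
states = [frozenset(), frozenset("a"), frozenset("b"),
          frozenset("ab"), frozenset("ac"), frozenset("abc")]

def is_union_closed(family) -> bool:
    """Check the union-closure property defining a knowledge space."""
    fam = set(family)
    return all(s | t in fam for s, t in combinations(fam, 2))

print(is_union_closed(states))  # True
```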
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about the subject is then a subset of this set; the set of
Document 4:::
Volume solid is the volume of paint after it has dried. This is different from the weight solid. Paint may contain solvent, resin, pigments, and additives. Many paints do not contain any solvent. After applying the paint, the solid portion will be left on the substrate. Volume solid is the term that indicates the solid proportion of the paint on a volume basis. For example, if the paint is applied in a wet film at a 100 μm thickness and the volume solid of the paint is 50%, then the dry film thickness (DFT) will be 50 μm, as 50% of the wet paint has evaporated. Suppose the volume solid is 100% and the wet film thickness is also 100 μm. Then after complete drying of the paint, the DFT will be 100 μm because no solvent will have evaporated.
This is an important concept when using paint industrially to calculate the cost of painting. It can be said that it is the real volume of paint.
Here is the formula by which one can calculate the volume solid of paint:
volume solids (%) = (total volume of solid ingredients in the paint × 100%) / (total volume of all ingredients in the paint)
A simple method that anyone can do to determine volume solids empirically is to apply paint to a steel surface with an application knife and measure the wet film thickness. Then cure the paint and measure the dry film thickness. The percentage of dry to wet represents the percentage of volume solids.
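A minimal sketch of both calculations described above, using the 100 μm / 50% example from the text:

```python
def dry_film_thickness(wet_um: float, volume_solids_pct: float) -> float:
    """DFT = wet film thickness x volume solids (%)."""
    return wet_um * volume_solids_pct / 100.0

def volume_solids_pct(dry_um: float, wet_um: float) -> float:
    """Empirical volume solids: percentage of dry to wet film thickness."""
    return 100.0 * dry_um / wet_um

print(dry_film_thickness(100.0, 50.0))  # 50.0 um, as in the text
print(volume_solids_pct(50.0, 100.0))   # 50.0 %
```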
In earlier days, the volume solid was measured by a disc method but now a sophisticated instrument is also available which takes only a drop of paint to check the volume solid.
Understanding 'volume solids' allows knowing the true cost of different coatings and how much paint is used to perform its function. Generally, more expensive paints have a higher volume of solids and provide better coverage.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which two states of matter have definite volumes?
A. bacteria and liquids
B. GAS AND LIQUIDS
C. contrasts and liquids
D. solids and liquids
Answer:
|
|
sciq-9829
|
multiple_choice
|
What expresses the mass of substance in terms of the volume occupied by the substance?
|
[
"velocity",
"diameter",
"frequency",
"density"
] |
D
|
Relevant Documents:
Document 0:::
In chemistry and related fields, the molar volume, symbol Vm, of a substance is the ratio of the volume occupied by a substance to the amount of substance, usually at a given temperature and pressure. It is equal to the molar mass (M) divided by the mass density (ρ): Vm = M/ρ.
The molar volume has the SI unit of cubic metres per mole (m3/mol), although it is more typical to use the units cubic decimetres per mole (dm3/mol) for gases, and cubic centimetres per mole (cm3/mol) for liquids and solids.
Definition
The molar volume of a substance i is defined as its molar mass divided by its density ρi0:
For an ideal mixture containing N components, the molar volume of the mixture is the weighted sum of the molar volumes of its individual components. For a real mixture the molar volume cannot be calculated without knowing the density:
There are many liquid–liquid mixtures, for instance mixing pure ethanol and pure water, which may experience contraction or expansion upon mixing. This effect is represented by the quantity excess volume of the mixture, an example of excess property.
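A minimal sketch of the defining relation Vm = M/ρ given above, using approximate handbook values for liquid water near room temperature:

```python
def molar_volume_cm3(molar_mass_g_per_mol: float, density_g_per_cm3: float) -> float:
    """Molar volume V_m = M / rho, in cm^3/mol."""
    return molar_mass_g_per_mol / density_g_per_cm3

# Liquid water near 25 degC: M ~ 18.015 g/mol, rho ~ 0.997 g/cm^3
print(round(molar_volume_cm3(18.015, 0.997), 2))  # ~18.07 cm^3/mol
```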
Relation to specific volume
Molar volume is related to specific volume by the product with molar mass. This follows from above where the specific volume is the reciprocal of the density of a substance:
Ideal gases
For ideal gases, the molar volume is given by the ideal gas equation; this is a good approximation for many common gases at standard temperature and pressure.
The ideal gas equation can be rearranged to give an expression for the molar volume of an ideal gas:
Hence, for a given temperature and pressure, the molar volume is the same for all ideal gases and is based on the gas constant: R = 8.31446 J⋅K−1⋅mol−1, or about 8.20574 × 10−5 m3⋅atm⋅K−1⋅mol−1.
The molar volume of an ideal gas at 100 kPa (1 bar) is
22.711 dm3/mol at 0 °C,
24.790 dm3/mol at 25 °C.
The molar volume of an ideal gas at 1 atmosphere of pressure is
22.414 dm3/mol at 0 °C,
24.466 dm3/mol at 25 °C.
Crystalline solids
For crystalline solids, the molar volume can be measured by X-ray crystallography.
The unit cell
Document 1:::
This article gives a list of conversion factors for several physical quantities. A number of different units (some only of historical interest) are shown and expressed in terms of the corresponding SI unit.
Conversions between units in the metric system are defined by their prefixes (for example, 1 kilogram = 1000 grams, 1 milligram = 0.001 grams) and are thus not listed in this article. Exceptions are made if the unit is commonly known by another name (for example, 1 micron = 10−6 metre). Within each table, the units are listed alphabetically, and the SI units (base or derived) are highlighted.
The following quantities are considered: length, area, volume, plane angle, solid angle, mass, density, time, frequency, velocity, volumetric flow rate, acceleration, force, pressure (or mechanical stress), torque (or moment of force), energy, power (or heat flow rate), action, dynamic viscosity, kinematic viscosity, electric current, electric charge, electric dipole, electromotive force (or electric potential difference), electrical resistance, capacitance, magnetic flux, magnetic flux density, inductance, temperature, information entropy, luminous intensity, luminance, luminous flux, illuminance, radiation.
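A minimal sketch of how such conversion factors are applied in practice (the inch, foot, and mile factors below are exact by definition):

```python
# Length units expressed in the SI base unit (metres)
LENGTH_IN_METRES = {
    "m": 1.0,
    "inch": 0.0254,
    "foot": 0.3048,
    "mile": 1609.344,
}

def convert_length(value: float, src: str, dst: str) -> float:
    """Convert via the SI base unit: value * (src -> m) / (dst -> m)."""
    return value * LENGTH_IN_METRES[src] / LENGTH_IN_METRES[dst]

print(convert_length(1.0, "mile", "foot"))  # 5280.0
```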
Length
Area
Volume
Plane angle
Solid angle
Mass
Notes:
See Weight for detail of mass/weight distinction and conversion.
Avoirdupois is a system of mass based on a pound of 16 ounces, while Troy weight is the system of mass where 12 troy ounces equals one troy pound.
The symbol is used to denote standard gravity in order to avoid confusion with the (upright) g symbol for gram.
Density
Time
Frequency
Speed or velocity
A velocity consists of a speed combined with a direction; the speed part of the velocity takes units of speed.
Flow (volume)
Acceleration
Force
Pressure or mechanical stress
Torque or moment of force
Energy
Power or heat flow rate
Action
Dynamic viscosity
Kinematic viscosity
Electric current
Electric charge
Electric dipole
Elec
Document 2:::
In physics and mechanics, mass distribution is the spatial distribution of mass within a solid body. In principle, it is relevant also for gases or liquids, but on Earth their mass distribution is almost homogeneous.
Astronomy
In astronomy, mass distribution has a decisive influence on the development of, e.g., nebulae, stars, and planets.
The mass distribution of a solid defines its center of gravity and influences its dynamical behaviour, e.g. its oscillations and eventual rotation.
Mathematical modelling
A mass distribution can be modeled as a measure. This allows point masses, line masses, surface masses, as well as masses given by a volume density function. Alternatively the latter can be generalized to a distribution. For example, a point mass is represented by a delta function defined in 3-dimensional space, and a surface mass may be represented by a surface density distribution giving the mass per unit area.
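The formulas elided in this excerpt are standard; a plausible reconstruction is:

```latex
% Total mass of a volume density \rho over a region V:
M = \int_V \rho(\mathbf{r}) \, dV
% A point mass m at \mathbf{r}_0, written as a distribution:
\rho(\mathbf{r}) = m \, \delta^{3}(\mathbf{r} - \mathbf{r}_0)
% A surface mass on z = f(x, y) with surface density \mu(x, y):
\rho(x, y, z) = \mu(x, y) \, \delta\bigl(z - f(x, y)\bigr)
```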
The mathematical modelling can be done by potential theory, by numerical methods (e.g. a great number of mass points), or by theoretical equilibrium figures.
Geology
In geology the aspects of rock density are involved.
Rotating solids
Rotating solids are affected considerably by the mass distribution, either if they are homogeneous or inhomogeneous - see Torque, moment of inertia, wobble, imbalance and stability.
See also
Bouguer plate
Gravity
Mass function
Mass concentration (astronomy)
External links
Mass distribution of the Earth
Mechanics
Celestial mechanics
Geophysics
Mass
Document 3:::
In physics and engineering, mass flow rate is the mass of a substance which passes per unit of time. Its unit is kilogram per second in SI units, and slug per second or pound per second in US customary units. The common symbol is ṁ (pronounced "m-dot"), although sometimes μ (Greek lowercase mu) is used.
Sometimes, mass flow rate is termed mass flux or mass current; see, for example, Schaum's Outline of Fluid Mechanics. In this article, the (more intuitive) definition is used.
Mass flow rate is defined by the limit
$\dot m = \lim_{\Delta t \to 0} \frac{\Delta m}{\Delta t} = \frac{dm}{dt},$
i.e., the flow of mass m through a surface per unit time t.
The overdot on the m is Newton's notation for a time derivative. Since mass is a scalar quantity, the mass flow rate (the time derivative of mass) is also a scalar quantity. The change in mass is the amount that flows after crossing the boundary for some time duration, not the initial amount of mass at the boundary minus the final amount at the boundary, since the change in mass flowing through the area would be zero for steady flow.
Alternative equations
Mass flow rate can also be calculated by
$\dot m = \rho \dot V = \rho \, \mathbf{v} \cdot \mathbf{A} = \mathbf{j}_m \cdot \mathbf{A},$
where $\rho$ is the mass density of the fluid, $\dot V$ the volume flow rate, $\mathbf{v}$ the flow velocity, $\mathbf{A}$ the vector area of the cross-section, and $\mathbf{j}_m$ the mass flux.
The above equation is only true for a flat, plane area.
In general, including cases where the area is curved, the equation becomes a surface integral:
$\dot m = \iint_A \rho \, \mathbf{v} \cdot d\mathbf{A}.$
The reason for the dot product is as follows. The only mass flowing through the cross-section
Document 4:::
A proof mass or test mass is a known quantity of mass used in a measuring instrument as a reference for the measurement of an unknown quantity.
A mass used to calibrate a weighing scale is sometimes called a calibration mass or calibration weight.
A proof mass that deforms a spring in an accelerometer is sometimes called the seismic mass. In a convective accelerometer, a fluid proof mass may be employed.
See also
Calibration, checking or adjustment by comparison with a standard
Control variable, the experimental element that is constant and unchanged throughout the course of a scientific investigation
Test particle, an idealized model of an object in which all physical properties are assumed to be negligible, except for the property being studied
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What expresses the mass of substance in terms of the volume occupied by the substance?
A. velocity
B. diameter
C. frequency
D. density
Answer:
|
|
sciq-8946
|
multiple_choice
|
What term refers to a wall or partition that divides the heart into chambers?
|
[
"septum",
"cysts",
"cartilage",
"valve"
] |
A
|
Relevant Documents:
Document 0:::
The right border of the heart (right margin of heart) is a long border on the surface of the heart, and is formed by the right atrium.
The atrial portion is rounded and almost vertical; it is situated behind the third, fourth, and fifth right costal cartilages, about 1.25 cm from the margin of the sternum.
The ventricular portion, thin and sharp, is named the acute margin; it is nearly horizontal, and extends from the sternal end of the sixth right costal cartilage to the apex of the heart.
Document 1:::
A heart valve is a one-way valve that allows blood to flow in one direction through the chambers of the heart. Four valves are usually present in a mammalian heart and together they determine the pathway of blood flow through the heart. A heart valve opens or closes according to differential blood pressure on each side.
The four valves in the mammalian heart are two atrioventricular valves separating the upper atria from the lower ventricles – the mitral valve in the left heart, and the tricuspid valve in the right heart. The other two valves are at the entrances to the arteries leaving the heart; these are the semilunar valves – the aortic valve at the aorta, and the pulmonary valve at the pulmonary artery.
The heart also has a coronary sinus valve and an inferior vena cava valve, not discussed here.
Structure
The heart valves and the chambers are lined with endocardium. Heart valves separate the atria from the ventricles, or the ventricles from a blood vessel. Heart valves are situated around the fibrous rings of the cardiac skeleton. The valves incorporate flaps called leaflets or cusps, similar to a duckbill valve or flutter valve, which are pushed open to allow blood flow and which then close together to seal and prevent backflow. The mitral valve has two cusps, whereas the others have three. There are nodules at the tips of the cusps that make the seal tighter.
The pulmonary valve has left, right, and anterior cusps. The aortic valve has left, right, and posterior cusps. The tricuspid valve has anterior, posterior, and septal cusps; and the mitral valve has just anterior and posterior cusps.
The valves of the human heart can be grouped in two sets:
Two atrioventricular valves to prevent backflow of blood from the ventricles into the atria:
Tricuspid valve or right atrioventricular valve, between the right atrium and right ventricle
Mitral valve or bicuspid valve, between the left atrium and left ventricle
Two semilunar valves to prevent the backflow o
Document 2:::
Cavities
In the 4th week the coelom divides into pericardial, pleural and peritoneal cavities.
The first partition is the septum transversum, which will later be translocated into the diaphragm and ventral mesentery.
It divides the coelom into primitive pericardial and peritoneal cavities.
Pleuroperic
Document 3:::
The left border of heart (or obtuse margin) is formed from the rounded lateral wall of the left ventricle. It is called the 'obtuse' margin because of the obtuse angle (>90 degrees) created between the anterior part of the heart and its left side. Within this margin can be found the obtuse marginal artery, which is a branch of the left circumflex artery.
It extends from a point in the second left intercostal space, about 2.5 cm from the sternal margin, obliquely downward, with a convexity to the left, to the apex of the heart.
This is contrasted with the acute margin of the heart, which is at the border of the anterior and posterior surface, and in which the acute marginal branch of the right coronary artery is found. The angle formed here is <90 degrees, therefore an acute angle.
Document 4:::
Endocardial cushions, or atrioventricular cushions, refer to a subset of cells in the development of the heart that play a vital role in the proper formation of the heart septa.
They develop on the atrioventricular canal and conotruncal region of the bulbus cordis.
During heart development, the heart starts out as a tube. As heart development continues, this tube undergoes remodeling to eventually form the four-chambered heart. The endocardial cushions are a subset of cells found in the developing heart tube that will give rise to the heart's primitive valves and septa, critical to the proper formation of a four-chambered heart.
Development
The endocardial cushions are thought to arise from a subset of endothelial cells that undergo epithelial-mesenchymal transition, a process whereby these cells break cell-to-cell contacts and migrate into the cardiac jelly (towards the interior of the heart tube). These migrated cells form the "swellings" called the endocardial cushions seen in the heart tube.
Upon sectioning of the heart the atrioventricular endocardial cushions can be observed in the lumen of the atrial canal as two thickenings, one on its dorsal and another on its ventral wall. These thickenings will go on to fuse and remodel to eventually form the valves and septa of the mature adult heart.
Clinical significance
A problem in endocardial cushion development or remodeling is thought to be associated with atrioventricular septal defect.
See also
Endocardial tubes
Heart development
Mesenchyme
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What term refers to a wall or partition that divides the heart into chambers?
A. septum
B. cysts
C. cartilage
D. valve
Answer:
|
|
sciq-8793
|
multiple_choice
|
What is the separation of compounds on the basis of their solubilities in a given solvent?
|
[
"fractional mass",
"solvent law",
"volatile crystallization",
"fractional crystallization"
] |
D
|
Relevant Documents:
Document 0:::
Liquid–liquid extraction (LLE), also known as solvent extraction and partitioning, is a method to separate compounds or metal complexes, based on their relative solubilities in two different immiscible liquids, usually water (polar) and an organic solvent (non-polar). There is a net transfer of one or more species from one liquid into another liquid phase, generally from aqueous to organic. The transfer is driven by chemical potential, i.e. once the transfer is complete, the overall system of chemical components that make up the solutes and the solvents is in a more stable configuration (lower free energy). The solvent that is enriched in solute(s) is called the extract. The feed solution that is depleted in solute(s) is called the raffinate. LLE is a basic technique in chemical laboratories, where it is performed using a variety of apparatus, from separatory funnels to countercurrent distribution equipment called mixer-settlers. This type of process is commonly performed after a chemical reaction as part of the work-up, often including an acidic work-up.
The term partitioning is commonly used to refer to the underlying chemical and physical processes involved in liquid–liquid extraction, but on another reading may be fully synonymous with it. The term solvent extraction can also refer to the separation of a substance from a mixture by preferentially dissolving that substance in a suitable solvent. In that case, a soluble compound is separated from an insoluble compound or a complex matrix.
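Partitioning as described above is usually quantified by a distribution ratio; the relations below are the standard textbook ones (assuming ideal behaviour and the same fresh organic volume in each batch), supplied for reference rather than taken from the passage:

$$D = \frac{[\mathrm{A}]_{\mathrm{org}}}{[\mathrm{A}]_{\mathrm{aq}}}, \qquad f_n = \left(\frac{V_{\mathrm{aq}}}{V_{\mathrm{aq}} + D\,V_{\mathrm{org}}}\right)^{n},$$

where f_n is the fraction of solute A remaining in the aqueous phase after n successive extractions, each with fresh organic volume V_org. The form of f_n explains the common laboratory advice that several small extractions remove more solute than a single large one of the same total volume.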
From a hydrometallurgical perspective, solvent extraction is exclusively used in separation and purification of uranium and plutonium, zirconium and hafnium, separation of cobalt and nickel, separation and purification of rare earth elements etc., its greatest advantage being its ability to selectively separate out even very similar metals. One obtains high-purity single metal streams on 'stripping' out the metal value from the 'loaded' organic wherein one can precipitate or de
Document 1:::
In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution.
The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible").
The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first.
The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy.
Under certain conditions, the concentration of the solute can exceed its usual solubility limit. The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears.
The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property de
Document 2:::
In chemistry, fractional crystallization is a method of refining substances based on differences in their solubility. It fractionates via differences in crystallization (the forming of crystals). If a mixture of two or more substances in solution is allowed to crystallize, for example by allowing the temperature of the solution to decrease or increase, the precipitate will contain more of the least soluble substance. The proportion of components in the precipitate will depend on their solubility products. If the solubility products are very similar, a cascade process will be needed to effect a complete separation.
This technique is often used in chemical engineering to obtain pure substances, or to recover saleable products from waste solutions.
Fractional crystallization can be used to separate solid-solid mixtures. An example of this is separating KNO3 and KClO3.
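As a rough numerical illustration of the KNO3/KClO3 example, here is a minimal Python sketch; the solubility figures are approximate handbook values and, like the chosen amounts, should be treated as assumptions rather than data from the passage:

```python
# Approximate solubilities (g of salt per 100 g of water); assumed values.
SOLUBILITY = {
    "KNO3":  {100: 246.0, 20: 31.6},
    "KClO3": {100: 56.0,  20: 7.3},
}

def mass_crystallized(salt, dissolved_g, water_g, t_hot=100, t_cold=20):
    """Grams of `salt` that precipitate when cooling from t_hot to t_cold (C)."""
    cap_hot = SOLUBILITY[salt][t_hot] * water_g / 100.0
    cap_cold = SOLUBILITY[salt][t_cold] * water_g / 100.0
    assert dissolved_g <= cap_hot, "salt would not fully dissolve when hot"
    return max(0.0, dissolved_g - cap_cold)

# 50 g of each salt dissolved in 100 g of water at 100 C, then cooled to 20 C:
for salt in ("KNO3", "KClO3"):
    print(salt, mass_crystallized(salt, 50.0, 100.0))
# -> KNO3 18.4, KClO3 42.7: the first crop is strongly enriched in the less
#    soluble KClO3, which is the essence of fractional crystallization.
```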
See also
Cold Water Extraction
Fractional crystallization (geology)
Fractional freezing
Laser-heated pedestal growth
Pumpable ice technology
Recrystallization (chemistry)
Seed crystal
Single crystal
Document 3:::
A breakthrough curve in adsorption is the course of the effluent adsorptive concentration at the outlet of a fixed bed adsorber. Breakthrough curves are important for adsorptive separation technologies and for the characterization of porous materials.
Importance
Since almost all adsorptive separation processes are dynamic, meaning that they run under flow, porous materials intended for such applications must also have their separation performance tested under flow. Since separation processes run with mixtures of different components, measuring several breakthrough curves yields thermodynamic mixture equilibria (mixture sorption isotherms) that are hardly accessible with static manometric sorption characterization. This enables the determination of sorption selectivities in the gaseous and liquid phases.
The determination of breakthrough curves is the foundation of many other processes, such as pressure swing adsorption. Within this process, the loading of one adsorber is equivalent to a breakthrough experiment.
Measurement
A fixed bed of porous materials (e.g. activated carbons and zeolites) is pressurized and purged with a carrier gas. After the system becomes stationary, one or more adsorptives are added to the carrier gas, resulting in a step-wise change of the inlet concentration. This is in contrast to chromatographic separation processes, where pulse-wise changes of the inlet concentrations are used. The course of the adsorptive concentrations at the outlet of the fixed bed is monitored.
Results
Integration of the area above the entire breakthrough curve gives the maximum loading of the adsorptive material. Additionally, the duration of the breakthrough experiment until a certain threshold of the adsorptive concentration at the outlet can be measured, which enables the calculation of a technically usable sorption capacity. Up to this time, the quality of the product stream can be maintained. The shape of the breakthrough curves contains informat
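A minimal sketch of that integration step, assuming an idealized S-shaped breakthrough curve; the column parameters (flow rate, inlet concentration, bed mass) are illustrative assumptions, not values from the passage:

```python
import numpy as np

t = np.linspace(0.0, 600.0, 601)                        # time, s
c_over_c0 = 1.0 / (1.0 + np.exp(-(t - 300.0) / 30.0))   # idealized S-shaped curve

F = 1.0e-6     # volumetric flow rate, m^3/s (assumed)
c0 = 0.5       # inlet adsorptive concentration, mol/m^3 (assumed)
m_bed = 0.010  # adsorbent mass, kg (assumed)

# The area above the curve, integrated over time, is proportional to the
# total amount retained by the bed; dividing by bed mass gives the loading.
area_above = np.trapz(1.0 - c_over_c0, t)   # s
q_max = F * c0 * area_above / m_bed         # mol adsorbed per kg adsorbent
print(f"estimated maximum loading: {q_max:.4f} mol/kg")
```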
Document 4:::
In physical chemistry, supersaturation occurs with a solution when the concentration of a solute exceeds the concentration specified by the value of solubility at equilibrium. Most commonly the term is applied to a solution of a solid in a liquid, but it can also be applied to liquids and gases dissolved in a liquid. A supersaturated solution is in a metastable state; it may return to equilibrium by separation of the excess of solute from the solution, by dilution of the solution by adding solvent, or by increasing the solubility of the solute in the solvent.
History
Early studies of the phenomenon were conducted with sodium sulfate, also known as Glauber's Salt because, unusually, the solubility of this salt in water may decrease with increasing temperature. Early studies have been summarised by Tomlinson. It was shown that the crystallization of a supersaturated solution does not simply come from its agitation (the previous belief), but from solid matter entering and acting as a "starting" site for crystals to form, now called "seeds". Expanding upon this, Gay-Lussac brought attention to the kinematics of salt ions and the characteristics of the container having an impact on the supersaturation state. He was also able to expand upon the number of salts with which a supersaturated solution can be obtained. Later, Henri Löwel came to the conclusion that both nuclei of the solution and the walls of the container have a catalyzing effect on the solution that causes crystallization. Explaining and providing a model for this phenomenon has been a task taken on by more recent research. Désiré Gernez contributed to this research by discovering that nuclei must be of the same salt that is being crystallized in order to promote crystallization.
Occurrence and examples
Solid precipitate, liquid solvent
A solution of a chemical compound in a liquid will become supersaturated when the temperature of the saturated solution is changed. In most cases solubility decreases wit
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the separation of compounds on the basis of their solubilities in a given solvent?
A. fractional mass
B. solvent law
C. volatile crystallization
D. fractional crystallization
Answer:
|
|
ai2_arc-943
|
multiple_choice
|
A strong magnet will separate a mixture of
|
[
"clear glass and green glass.",
"paper cups and plastic cups.",
"iron nails and aluminum nails.",
"sand and salt."
] |
C
|
Relevant Documents:
Document 0:::
In chemistry, a mixture is a material made up of two or more different chemical substances which are not chemically bonded. A mixture is the physical combination of two or more substances in which the identities are retained and are mixed in the form of solutions, suspensions and colloids.
Mixtures are one product of mechanically blending or mixing chemical substances such as elements and compounds, without chemical bonding or other chemical change, so that each ingredient substance retains its own chemical properties and makeup. Despite the fact that there are no chemical changes to its constituents, the physical properties of a mixture, such as its melting point, may differ from those of the components. Some mixtures can be separated into their components by using physical (mechanical or thermal) means. Azeotropes are one kind of mixture that usually poses considerable difficulties regarding the separation processes required to obtain their constituents (physical or chemical processes or, even a blend of them).
Characteristics of mixtures
All mixtures can be characterized as being separable by mechanical means (e.g. purification, distillation, electrolysis, chromatography, heat, filtration, gravitational sorting, centrifugation). Mixtures differ from chemical compounds in the following ways:
the substances in a mixture can be separated using physical methods such as filtration, freezing, and distillation.
there is little or no energy change when a mixture forms (see Enthalpy of mixing).
The substances in a mixture keep their separate properties.
In the example of sand and water, neither of the two substances changes in any way when they are mixed. Although the sand is in the water, it still keeps the same properties that it had when it was outside the water.
mixtures have variable compositions, while compounds have a fixed, definite formula.
when mixed, individual substances keep their properties in a mixture, while if they form a compound their properties
Document 1:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
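This sample question illustrates why conceptual items probe understanding: the correct choice depends on conditions the stem leaves open. For a reversible (quasi-static) adiabatic expansion, the standard ideal-gas relation below (supplied here for convenience, not part of the original passage) forces the temperature to drop, while a free (Joule) expansion into vacuum leaves the temperature of an ideal gas unchanged:

$$T V^{\gamma - 1} = \text{const}, \quad \gamma > 1 \;\Rightarrow\; V \uparrow \text{ implies } T \downarrow \quad \text{(reversible case)}.$$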
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of all such feasible states forms the knowledge space.
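A minimal sketch of this model: a family of feasible knowledge states over a finite domain Q forms a knowledge space when it contains the empty set and Q and is closed under union. The domain and states below are invented for illustration:

```python
from itertools import combinations

Q = frozenset({"counting", "addition", "multiplication"})
states = {
    frozenset(),
    frozenset({"counting"}),
    frozenset({"counting", "addition"}),
    Q,
}

def is_knowledge_space(domain, family):
    """Check: contains the empty state and the full domain, closed under union."""
    if frozenset() not in family or frozenset(domain) not in family:
        return False
    return all((a | b) in family for a, b in combinations(family, 2))

print(is_knowledge_space(Q, states))  # True: e.g. learning addition presupposes counting
```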
Document 4:::
In physics, a dynamical system is said to be mixing if the phase space of the system becomes strongly intertwined, according to at least one of several mathematical definitions. For example, a measure-preserving transformation T is said to be strong mixing if
$$\lim_{n \to \infty} \mu\left(T^{-n}A \cap B\right) = \mu(A)\,\mu(B)$$
whenever A and B are any measurable sets and μ is the associated measure. Other definitions are possible, including weak mixing and topological mixing.
The mathematical definition of mixing is meant to capture the notion of physical mixing. A canonical example is the Cuba libre: suppose one is adding rum (the set A) to a glass of cola. After stirring the glass, the bottom half of the glass (the set B) will contain rum, and it will be in equal proportion as it is elsewhere in the glass. The mixing is uniform: no matter which region B one looks at, some of A will be in that region. A far more detailed, but still informal description of mixing can be found in the article on mixing (mathematics).
Every mixing transformation is ergodic, but there are ergodic transformations which are not mixing.
Physical mixing
The mixing of gases or liquids is a complex physical process, governed by a convective diffusion equation that may involve non-Fickian diffusion as in spinodal decomposition. The convective portion of the governing equation contains fluid motion terms that are governed by the Navier–Stokes equations. When fluid properties such as viscosity depend on composition, the governing equations may be coupled. There may also be temperature effects. It is not clear that fluid mixing processes are mixing in the mathematical sense.
Small rigid objects (such as rocks) are sometimes mixed in a rotating drum or tumbler. The 1969 Selective Service draft lottery was carried out by mixing plastic capsules which contained a slip of paper (marked with a day of the year).
See also
Miscibility
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A strong magnet will separate a mixture of
A. clear glass and green glass.
B. paper cups and plastic cups.
C. iron nails and aluminum nails.
D. sand and salt.
Answer:
|
|
scienceQA-971
|
multiple_choice
|
What do these two changes have in common?
boiling an egg
acid rain weathering a marble statue
|
[
"Both are only physical changes.",
"Both are caused by heating.",
"Both are chemical changes.",
"Both are caused by cooling."
] |
C
|
Step 1: Think about each change.
Boiling an egg is a chemical change. The heat causes the matter in the egg to change. Cooked eggs and raw eggs are made of different types of matter.
Acid rain weathering a marble statue is a chemical change. The acid rain reacts with the outside of the statue and breaks it down into a different type of matter. This new matter is then washed away by the rain.
Acid rain is a type of pollution. It forms when automobiles and factories release smoke containing sulfur or nitrogen. Some of these chemicals react with water in the atmosphere. The reaction forms droplets of water that can fall back to the ground as acid rain.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are chemical changes. They are not physical changes.
Both are chemical changes.
Both changes are chemical changes. The type of matter before and after each change is different.
Both are caused by heating.
Cooking is caused by heating. But acid rain weathering a marble statue is not.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relevant Documents:
Document 0:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferromagnetic materials can become magnetic. The process is reve
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
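As an aside (standard ideal-gas thermodynamics, supplied for convenience rather than taken from the passage), the answer hinges on how the expansion is performed. For a reversible adiabatic expansion
$$T V^{\gamma - 1} = \text{const} \quad (\gamma > 1),$$
so the temperature falls as the volume grows, whereas a free (Joule) expansion into vacuum leaves the temperature of an ideal gas unchanged.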
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
In physics, a dynamical system is said to be mixing if the phase space of the system becomes strongly intertwined, according to at least one of several mathematical definitions. For example, a measure-preserving transformation T is said to be strong mixing if
$$\lim_{n \to \infty} \mu\left(T^{-n}A \cap B\right) = \mu(A)\,\mu(B)$$
whenever A and B are any measurable sets and μ is the associated measure. Other definitions are possible, including weak mixing and topological mixing.
The mathematical definition of mixing is meant to capture the notion of physical mixing. A canonical example is the Cuba libre: suppose one is adding rum (the set A) to a glass of cola. After stirring the glass, the bottom half of the glass (the set B) will contain rum, and it will be in equal proportion as it is elsewhere in the glass. The mixing is uniform: no matter which region B one looks at, some of A will be in that region. A far more detailed, but still informal description of mixing can be found in the article on mixing (mathematics).
Every mixing transformation is ergodic, but there are ergodic transformations which are not mixing.
Physical mixing
The mixing of gases or liquids is a complex physical process, governed by a convective diffusion equation that may involve non-Fickian diffusion as in spinodal decomposition. The convective portion of the governing equation contains fluid motion terms that are governed by the Navier–Stokes equations. When fluid properties such as viscosity depend on composition, the governing equations may be coupled. There may also be temperature effects. It is not clear that fluid mixing processes are mixing in the mathematical sense.
Small rigid objects (such as rocks) are sometimes mixed in a rotating drum or tumbler. The 1969 Selective Service draft lottery was carried out by mixing plastic capsules which contained a slip of paper (marked with a day of the year).
See also
Miscibility
Document 3:::
At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm.
For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of the sorption process, the isotherms cannot be determined explicitly by calculation but must be recorded experimentally for each product.
The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system.
BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with monolayer adsorption occurring first, followed by multilayer adsorption and finally capillary condensation, which explains these materials' high moisture capacity at high relative humidity.
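The "calculation" mentioned above is usually written in the linearized BET form; the display below is the standard textbook statement, quoted for reference rather than drawn from the passage (v is the adsorbed quantity, v_m the monolayer capacity, p/p_0 the relative pressure, and c the BET constant):

$$\frac{p/p_0}{v\left(1 - p/p_0\right)} = \frac{1}{v_m c} + \frac{c - 1}{v_m c}\cdot\frac{p}{p_0}.$$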
Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the
Document 4:::
Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory.
In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results.
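As a concrete illustration of equating "units and origins", here is a minimal sketch of the mean-sigma linear transformation, one standard way of doing this in item response theory; the method's choice and all numbers below are supplied for illustration, not drawn from the passage:

```python
def mean_sigma_transform(theta_x, mu_x, sigma_x, mu_y, sigma_y):
    """Map an ability estimate from form X's scale onto form Y's scale."""
    A = sigma_y / sigma_x          # unit (slope) adjustment
    B = mu_y - A * mu_x            # origin (intercept) adjustment
    return A * theta_x + B

# Example: anchor-item parameter means/SDs of 0.0/1.0 on X and 0.5/1.2 on Y.
print(mean_sigma_transform(1.0, 0.0, 1.0, 0.5, 1.2))  # -> 1.7
```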
Purpose
Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible.
Equating in item response theory
In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
boiling an egg
acid rain weathering a marble statue
A. Both are only physical changes.
B. Both are caused by heating.
C. Both are chemical changes.
D. Both are caused by cooling.
Answer:
|
sciq-5431
|
multiple_choice
|
Which organic molecules are among the most abundant in living systems and have the most diverse range of functions of all macromolecules?
|
[
"cells",
"carbons",
"proteins",
"acids"
] |
C
|
Relevant Documents:
Document 0:::
This is a list of topics in molecular biology. See also index of biochemistry articles.
Document 1:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, the double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries one or more chromosomes with a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 2:::
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States.
Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is suspended with the message:
"Please check back with us in 2017".
External links
MicrobeLibrary
Microbiology
Document 3:::
Biochemistry is the study of the chemical processes in living organisms. It deals with the structure and function of cellular components such as proteins, carbohydrates, lipids, nucleic acids and other biomolecules.
Articles related to biochemistry include:
0–9
2-amino-5-phosphonovalerate - 3' end - 5' end
Document 4:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind: neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases, which has overall aided the understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which organic molecules are among the most abundant in living systems and have the most diverse range of functions of all macromolecules?
A. cells
B. carbons
C. proteins
D. acids
Answer:
|
|
sciq-555
|
multiple_choice
|
Where do germline mutations occur in?
|
[
"in gametes",
"in aggregations",
"in spores",
"In blood cells"
] |
A
|
Relevant Documents:
Document 0:::
A germline mutation, or germinal mutation, is any detectable variation within germ cells (cells that, when fully developed, become sperm and ova). Mutations in these cells are the only mutations that can be passed on to offspring, when either a mutated sperm or oocyte come together to form a zygote. After this fertilization event occurs, germ cells divide rapidly to produce all of the cells in the body, causing this mutation to be present in every somatic and germline cell in the offspring; this is also known as a constitutional mutation. Germline mutation is distinct from somatic mutation.
Germline mutations can be caused by a variety of endogenous (internal) and exogenous (external) factors, and can occur throughout zygote development. A mutation that arises only in germ cells can result in offspring with a genetic condition that is not present in either parent; this is because the mutation is not present in the rest of the parents' body, only the germline.
When mutagenesis occurs
Germline mutations can occur before fertilization and during various stages of zygote development. When the mutation arises determines the effect it has on offspring. If the mutation arises in either the sperm or the oocyte before development, then the mutation will be present in every cell in the individual's body. If a mutation arises soon after fertilization, but before germline and somatic cells are determined, then it will be present in a large proportion of the individual's cells with no bias towards germline or somatic cells; this is also called a gonosomal mutation. A mutation that arises later in zygote development will be present in a small subset of either somatic or germline cells, but not both.
Causes
Endogenous factors
A germline mutation often arises due to endogenous factors, like errors in cellular replication and oxidative damage. Such damage is rarely repaired imperfectly, but because of the high rate of germ cell division, even rare repair errors can occur frequently.
Endog
Document 1:::
In mathematics, the notion of a germ of an object in/on a topological space is an equivalence class of that object and others of the same kind that captures their shared local properties. In particular, the objects in question are mostly functions (or maps) and subsets. In specific implementations of this idea, the functions or subsets in question will have some property, such as being analytic or smooth, but in general this is not needed (the functions in question need not even be continuous); it is however necessary that the space on/in which the object is defined is a topological space, in order that the word local has some meaning.
Name
The name is derived from cereal germ in a continuation of the sheaf metaphor, as a germ is (locally) the "heart" of a function, as it is for a grain.
Formal definition
Basic definition
Given a point x of a topological space X, and two maps f, g : X → Y (where Y is any set), then f and g define the same germ at x if there is a neighbourhood U of x such that restricted to U, f and g are equal; meaning that f(u) = g(u) for all u in U.
Similarly, if S and T are any two subsets of X, then they define the same germ at x if there is again a neighbourhood U of x such that S ∩ U = T ∩ U.
It is straightforward to see that defining the same germ at x is an equivalence relation (be it on maps or sets), and the equivalence classes are called germs (map-germs, or set-germs accordingly). The equivalence relation is usually written f ∼_x g.
Given a map f on X, then its germ at x is usually denoted [f]_x. Similarly, the germ at x of a set S is written [S]_x. Thus, [f]_x = { g : g ∼_x f }.
A map germ at x in X that maps the point x in X to the point y in Y is denoted as f : (X, x) → (Y, y).
When using this notation, f is then intended as an entire equivalence class of maps, using the same letter f for any representative map.
Notice that two sets are germ-equivalent at x if and only if their characteristic functions are germ-equivalent at x: [S]_x = [T]_x if and only if [1_S]_x = [1_T]_x.
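A concrete instance may help (an illustrative example, not part of the original passage): on the real line, the maps f(x) = x and g(x) = |x| agree on the neighbourhood (0, ∞) of every point x > 0, yet every neighbourhood of 0 contains negative numbers on which they differ:

$$f(x) = x,\; g(x) = |x| \;\Longrightarrow\; [f]_x = [g]_x \text{ for every } x > 0, \text{ but } [f]_0 \neq [g]_0.$$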
More generally
Maps need not be defined on all of X, and in particular they don't need to
Document 2:::
A somatic mutation is a change in the DNA sequence of a somatic cell of a multicellular organism with dedicated reproductive cells; that is, any mutation that occurs in a cell other than a gamete, germ cell, or gametocyte. Unlike germline mutations, which can be passed on to the descendants of an organism, somatic mutations are not usually transmitted to descendants. This distinction is blurred in plants, which lack a dedicated germline, and in those animals that can reproduce asexually through mechanisms such as budding, as in members of the cnidarian genus Hydra.
While somatic mutations are not passed down to an organism's offspring, somatic mutations will be present in all descendants of a cell within the same organism. Many cancers are the result of accumulated somatic mutations.
Fraction of cells affected
The term somatic generally refers to the cells of the body, in contrast to the reproductive (germline) cells, which give rise to the egg or sperm. For example, in mammals, somatic cells make up the internal organs, skin, bones, blood, and connective tissue.
In most animals, separation of germ cells from somatic cells (germline development) occurs during early stages of development. Once this segregation has occurred in the embryo, any mutation outside of the germline cells can not be passed down to an organism's offspring.
However, somatic mutations are passed down to all the progeny of a mutated cell within the same organism. A major section of an organism therefore might carry the same mutation, especially if that mutation occurs at earlier stages of development. Somatic mutations that occur later in an organism's life can be hard to detect, as they may affect only a single cell - for instance, a post-mitotic neuron; improvements in single cell sequencing are therefore an important tool for the study of somatic mutation. Both the nuclear DNA and mitochondrial DNA of a cell can accumulate mutations; somatic mitochondrial mutations have been implicated i
Document 3:::
In biology and genetics, the germline is the population of a multicellular organism's cells that pass on their genetic material to the progeny (offspring). In other words, they are the cells that form the egg, sperm and the fertilised egg. They are usually differentiated to perform this function and segregated in a specific place away from other bodily cells.
As a rule, this passing-on happens via a process of sexual reproduction; typically it is a process that includes systematic changes to the genetic material, changes that arise during recombination, meiosis and fertilization for example. However, there are many exceptions across multicellular organisms, including processes and concepts such as various forms of apomixis, autogamy, automixis, cloning or parthenogenesis. The cells of the germline are called germ cells. For example, gametes such as a sperm and an egg are germ cells. So are the cells that divide to produce gametes, called gametocytes, the cells that produce those, called gametogonia, and all the way back to the zygote, the cell from which an individual develops.
In sexually reproducing organisms, cells that are not in the germline are called somatic cells. According to this view, mutations, recombinations and other genetic changes in the germline may be passed to offspring, but a change in a somatic cell will not be. This need not apply to somatically reproducing organisms, such as some Porifera and many plants. For example, many varieties of citrus, plants in the Rosaceae and some in the Asteraceae, such as Taraxacum, produce seeds apomictically when somatic diploid cells displace the ovule or early embryo.
In an earlier stage of genetic thinking, there was a clear distinction between germline and somatic cells. For example, August Weismann proposed and pointed out, a germline cell is immortal in the sense that it is part of a lineage that has reproduced indefinitely since the beginning of life and, barring accident, could continue doing so indef
Document 4:::
Genetics (from Ancient Greek , “genite” and that from , “origin”), a discipline of biology, is the science of heredity and variation in living organisms.
Articles (arranged alphabetically) related to genetics are indexed under # and the letters A through Z.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where do germline mutations occur in?
A. in gametes
B. in aggregations
C. in spores
D. In blood cells
Answer:
|
|
sciq-3004
|
multiple_choice
|
What type of cells have chloroplasts?
|
[
"animal cells",
"plant cells",
"human cells",
"simple cells"
] |
B
|
Relevant Documents:
Document 0:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility; they are capable of both specialization and movement.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i
Document 1:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, the double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries one or more chromosomes with a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 2:::
This lecture, named in memory of Keith R. Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year.
Lecturers
Source: ASCB
See also
List of biology awards
Document 3:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than those of the later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
Document 4:::
Organelle biogenesis is the biogenesis, or creation, of cellular organelles in cells. Organelle biogenesis includes the process by which cellular organelles are split between daughter cells during mitosis; this process is called organelle inheritance.
Discovery
Following the discovery of cellular organelles in the nineteenth century, little was known about their function and synthesis until the development of electron microscopy and subcellular fractionation in the twentieth century. This allowed experiments on the function, structure, and biogenesis of these organelles to commence.
Mechanisms of protein sorting and retrieval have been found to give organelles their characteristic composition. It is known that cellular organelles can come from preexisting organelles; however, it is a subject of controversy whether organelles can be created without a preexisting one.
Process
Several processes are known to have developed for organelle biogenesis. These can range from de novo synthesis to the copying of a template organelle; the formation of an organelle 'from scratch' and using a preexisting organelle as a template to manufacture an organelle, respectively. The distinct structures of each organelle are thought to be caused by the different mechanisms of the processes which create them and the proteins that they are made up of. Organelles may also be 'split' between two cells during the process of cellular division (known as organelle inheritance), where the organelle of the parent cell doubles in size and then splits with each half being delivered to their respective daughter cells.
The process of organelle biogenesis is known to be regulated by specialized transcription networks that modulate the expression of the genes that code for specific organellar proteins. In order for organelle biogenesis to be carried out properly, the specific genes coding for the organellar proteins must be transcribed properly and the translation of the resulting mRNA must be succes
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of cells have chloroplasts?
A. animal cells
B. plant cells
C. human cells
D. simple cells
Answer:
|
|
sciq-10804
|
multiple_choice
|
An obstacle or opening that is shorter than the wavelength causes greater diffraction of what?
|
[
"ranges",
"waves",
"particles",
"tides"
] |
B
|
Relevant Documents:
Document 0:::
Diffraction processes affecting waves are amenable to quantitative description and analysis. Such treatments are applied to a wave passing through one or more slits whose width is specified as a proportion of the wavelength. Numerical approximations may be used, including the Fresnel and Fraunhofer approximations.
General diffraction
Because diffraction is the result of addition of all waves (of given wavelength) along all unobstructed paths, the usual procedure is to consider the contribution of an infinitesimally small neighborhood around a certain path (this contribution is usually called a wavelet) and then integrate over all paths (= add all wavelets) from the source to the detector (or given point on a screen).
Thus in order to determine the pattern produced by diffraction, the phase and the amplitude of each of the wavelets are calculated. That is, at each point in space we must determine the distance to each of the simple sources on the incoming wavefront. If the distance to each of the simple sources differs by an integer number of wavelengths, all the wavelets will be in phase, resulting in constructive interference. If the distance to each source differs by an integer number of wavelengths plus one half of a wavelength, there will be complete destructive interference. Usually, it is sufficient to determine these minima and maxima to explain the observed diffraction effects.
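As a concrete illustration of this wavelet-summation procedure, the short Python sketch below samples a single slit with a row of point sources and adds the wavelets' complex phases at each screen point. The slit width, wavelength, and screen geometry are illustrative assumptions, not values from the text.

```python
import numpy as np

# Wavelet summation for a single slit: sample the slit with N point
# sources, add each wavelet's complex phase at a screen point, and take
# |sum|^2 as the intensity. All geometry values are illustrative.
wavelength = 500e-9                 # 500 nm light
a = 5e-6                            # slit width (10 wavelengths)
L = 1.0                             # slit-to-screen distance in metres
N = 1000                            # point sources across the slit
k = 2 * np.pi / wavelength          # wavenumber

sources = np.linspace(-a / 2, a / 2, N)    # source positions in the slit
screen = np.linspace(-0.2, 0.2, 2001)      # screen positions in metres

intensity = np.empty_like(screen)
for i, x in enumerate(screen):
    r = np.sqrt(L**2 + (x - sources) ** 2)  # distance source -> screen point
    intensity[i] = np.abs(np.exp(1j * k * r).sum()) ** 2

intensity /= intensity.max()                # normalize to the central maximum
print("central intensity:", intensity[len(screen) // 2])
```

Minima appear where the summed wavelets largely cancel, matching the half-wavelength path-difference condition described above.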
The simplest descriptions of diffraction are those in which the situation can be reduced to a two-dimensional problem. For water waves, this is already the case, as water waves propagate only on the surface of the water. For light, we can often neglect one dimension if the diffracting object extends in that direction over a distance far greater than the wavelength. In the case of light shining through small circular holes we will have to take into account the full three-dimensional nature of the problem.
Several qualitative observations can be made of diffraction in general:
The angular spacing of the featu
Document 1:::
Quasioptics concerns the propagation of electromagnetic radiation where the wavelength is comparable to the size of the optical components (e.g. lenses, mirrors, and apertures) and hence diffraction effects may become significant. It commonly describes the propagation of Gaussian beams where the beam width is comparable to the wavelength. This is in contrast to geometrical optics, where the wavelength is small compared to the relevant length scales. Quasioptics is so named because it represents an intermediate regime between conventional optics and electronics, and is often relevant to the description of signals in the far-infrared or terahertz region of the electromagnetic spectrum. It represents a simplified version of the more rigorous treatment of physical optics. Quasi-optical systems may also operate at lower frequencies such as millimeter wave, microwave, and even lower.
See also
Optoelectronics
Document 2:::
In physics, ray tracing is a method for calculating the path of waves or particles through a system with regions of varying propagation velocity, absorption characteristics, and reflecting surfaces. Under these circumstances, wavefronts may bend, change direction, or reflect off surfaces, complicating analysis. Ray tracing solves the problem by repeatedly advancing idealized narrow beams called rays through the medium by discrete amounts. Simple problems can be analyzed by propagating a few rays using simple mathematics. More detailed analysis can be performed by using a computer to propagate many rays.
When applied to problems of electromagnetic radiation, ray tracing often relies on approximate solutions to Maxwell's equations that are valid as long as the light waves propagate through and around objects whose dimensions are much greater than the light's wavelength. Ray theory does not describe phenomena such as interference and diffraction, which require wave theory (involving the phase of the wave).
Technique
Ray tracing works by assuming that the particle or wave can be modeled as a large number of very narrow beams (rays), and that there exists some distance, possibly very small, over which such a ray is locally straight. The ray tracer will advance the ray over this distance, and then use a local derivative of the medium to calculate the ray's new direction. From this location, a new ray is sent out and the process is repeated until a complete path is generated. If the simulation includes solid objects, the ray may be tested for intersection with them at each step, making adjustments to the ray's direction if a collision is found. Other properties of the ray may be altered as the simulation advances as well, such as intensity, wavelength, or polarization. This process is repeated with as many rays as are necessary to understand the behavior of the system.
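A minimal two-dimensional sketch of this stepping scheme, assuming a smooth graded-index medium; the index profile, step size, and launch direction are illustrative assumptions, not from the text.

```python
import numpy as np

def n(p):
    # Illustrative graded-index medium: index decreases with height y.
    return 1.5 - 0.1 * p[1]

def grad_n(p, h=1e-6):
    # Central-difference gradient of the index field (the "local
    # derivative of the medium" used to update the ray's direction).
    return np.array([
        (n(p + np.array([h, 0.0])) - n(p - np.array([h, 0.0]))) / (2 * h),
        (n(p + np.array([0.0, h])) - n(p - np.array([0.0, h]))) / (2 * h),
    ])

def trace(p, u, ds=1e-3, steps=5000):
    # Repeatedly advance the ray by ds, then bend its unit direction u
    # using the ray equation d(n*u)/ds = grad(n).
    path = [p.copy()]
    for _ in range(steps):
        g = grad_n(p)
        u = u + ds * (g - np.dot(g, u) * u) / n(p)
        u /= np.linalg.norm(u)      # keep the direction a unit vector
        p = p + ds * u
        path.append(p.copy())
    return np.array(path)

path = trace(np.array([0.0, 0.0]), np.array([1.0, 0.0]))
print("ray endpoint:", path[-1])    # the ray bends toward higher index
```

Each iteration advances the ray a small, locally straight distance and then uses the local derivative of the medium to update its direction, exactly the loop the text describes.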
Uses
Astronomy
Ray tracing is being increasingly used in astronomy to simulate realistic images of
Document 3:::
Rayleigh distance in optics is the axial distance from a radiating aperture to a point at which the path difference between the axial ray and an edge ray is λ/4.
An approximation of the Rayleigh distance is Z ≈ D²/(2λ), in which Z is the Rayleigh distance, D is the aperture of radiation, and λ the wavelength.
This approximation can be derived as follows. Consider a right-angled triangle with adjacent side Z, opposite side D/2, and hypotenuse Z + λ/4. According to the Pythagorean theorem,
(Z + λ/4)² = Z² + (D/2)².
Rearranging and simplifying gives
Z = D²/(2λ) − λ/8.
The constant term λ/8 can be neglected, leaving Z ≈ D²/(2λ).
In antenna applications, the Rayleigh distance is often given as four times this value, i.e. Z = 2D²/λ, which corresponds to the border between the Fresnel and Fraunhofer regions and denotes the distance at which the beam radiated by a reflector antenna is fully formed (although the Rayleigh distance is sometimes still given per the optical convention).
The Rayleigh distance is also the distance beyond which the distribution of the diffracted light energy no longer changes according to the distance Z from the aperture.
It is the reduced Fraunhofer diffraction limitation.
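A quick numeric check of the approximations above, using an illustrative 1 cm aperture and 500 nm light:

```python
# Illustrative values, not from the text.
D = 0.01      # aperture, metres (1 cm)
wl = 500e-9   # wavelength, metres (500 nm)

z_optics = D**2 / (2 * wl)   # lambda/4 path-difference criterion -> 100 m
z_antenna = 2 * D**2 / wl    # four times that: Fresnel/Fraunhofer border -> 400 m

print(f"optical Rayleigh distance: {z_optics:.0f} m")
print(f"antenna (Fraunhofer) distance: {z_antenna:.0f} m")
```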
Lord Rayleigh's paper on the subject was published in 1891.
Optical quantities
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
An obstacle or opening that is shorter than the wavelength causes greater diffraction of what?
A. ranges
B. waves
C. particles
D. tides
Answer:
|
|
scienceQA-1697
|
multiple_choice
|
Select the chemical formula for this molecule.
|
[
"HCl",
"HClN",
"HC",
"H2Cl"
] |
A
|
H is the symbol for hydrogen. Cl is the symbol for chlorine. This ball-and-stick model shows a molecule with one hydrogen atom and one chlorine atom.
The chemical formula will contain the symbols H and Cl. There is one hydrogen atom, so H will not have a subscript. There is one chlorine atom, so Cl will not have a subscript.
The correct formula is HCl.
The diagram below shows how each part of the chemical formula matches with each part of the model above.
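The counting rule used in this explanation is mechanical enough to express in a few lines of Python. The helper below is hypothetical (not part of any standard library); it writes each element symbol and appends a numeric subscript only when the atom count exceeds one.

```python
def formula_from_counts(counts):
    """Build a formula string from a dict of element symbol -> atom count."""
    # A subscript is written only when there is more than one atom.
    return "".join(sym if n == 1 else f"{sym}{n}" for sym, n in counts.items())

print(formula_from_counts({"H": 1, "Cl": 1}))  # HCl
print(formula_from_counts({"H": 2, "O": 1}))   # H2O
```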
|
Relavent Documents:
Document 0:::
In chemical nomenclature, the IUPAC nomenclature of organic chemistry is a method of naming organic chemical compounds as recommended by the International Union of Pure and Applied Chemistry (IUPAC). It is published in the Nomenclature of Organic Chemistry (informally called the Blue Book). Ideally, every possible organic compound should have a name from which an unambiguous structural formula can be created. There is also an IUPAC nomenclature of inorganic chemistry.
To avoid long and tedious names in normal communication, the official IUPAC naming recommendations are not always followed in practice, except when it is necessary to give an unambiguous and absolute definition to a compound. IUPAC names can sometimes be simpler than older names, as with ethanol, instead of ethyl alcohol. For relatively simple molecules they can be more easily understood than non-systematic names, which must be learnt or looked over. However, the common or trivial name is often substantially shorter and clearer, and so preferred. These non-systematic names are often derived from an original source of the compound. Also, very long names may be less clear than structural formulas.
Basic principles
In chemistry, a number of prefixes, suffixes and infixes are used to describe the type and position of the functional groups in the compound.
The steps for naming an organic compound are:
Identification of the parent hydride parent hydrocarbon chain. This chain must obey the following rules, in order of precedence:
It should have the maximum number of substituents of the suffix functional group. By suffix, it is meant that the parent functional group should have a suffix, unlike halogen substituents. If more than one functional group is present, the one with highest group precedence should be used.
It should have the maximum number of multiple bonds.
It should have the maximum length.
It should have the maximum number of substituents or branches cited as prefixes
It should have the ma
Document 1:::
This is a list of topics in molecular biology. See also index of biochemistry articles.
Document 2:::
Chlorthiamide is an organic compound with the chemical formula C7H5Cl2NS used as an herbicide.
Chloroarenes
Herbicides
Thioamides
Document 3:::
E–Z configuration, or the E–Z convention, is the IUPAC preferred method of describing the absolute stereochemistry of double bonds in organic chemistry. It is an extension of cis–trans isomer notation (which only describes relative stereochemistry) that can be used to describe double bonds having two, three or four substituents.
Following the Cahn–Ingold–Prelog priority rules (CIP rules), each substituent on a double bond is assigned a priority, then positions of the higher of the two substituents on each carbon are compared to each other. If the two groups of higher priority are on opposite sides of the double bond (trans to each other), the bond is assigned the configuration E (from entgegen, , the German word for "opposite"). If the two groups of higher priority are on the same side of the double bond (cis to each other), the bond is assigned the configuration Z (from zusammen, , the German word for "together").
The letters E and Z are conventionally printed in italic type, within parentheses, and separated from the rest of the name with a hyphen. They are always printed as full capitals (not in lowercase or small capitals), but do not constitute the first letter of the name for English capitalization rules (as in the example above).
Another example: The CIP rules assign a higher priority to bromine than to chlorine, and a higher priority to chlorine than to hydrogen, hence the following (possibly counterintuitive) nomenclature.
For organic molecules with multiple double bonds, it is sometimes necessary to indicate the alkene location for each E or Z symbol. For example, the chemical name of alitretinoin is (2E,4E,6Z,8E)-3,7-dimethyl-9-(2,6,6-trimethyl-1-cyclohexenyl)nona-2,4,6,8-tetraenoic acid, indicating that the alkenes starting at positions 2, 4, and 8 are E while the one starting at position 6 is Z.
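Once the CIP priorities have been worked out, the final labeling step is a single comparison. The sketch below is a hypothetical helper that encodes only that last step; determining the priorities themselves is assumed to have been done beforehand.

```python
def ez_label(higher_priority_groups_on_same_side: bool) -> str:
    # Same side (cis to each other) -> Z (zusammen, "together");
    # opposite sides (trans)        -> E (entgegen, "opposite").
    return "Z" if higher_priority_groups_on_same_side else "E"

print(ez_label(True))   # Z
print(ez_label(False))  # E
```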
See also
Descriptor (chemistry)
Geometric isomerism
Molecular geometry
Document 4:::
Wiswesser line notation (WLN), invented by William J. Wiswesser in 1949, was the first line notation capable of precisely describing complex molecules. It was the basis of ICI Ltd's CROSSBOW database system developed in the late 1960s. WLN allowed for indexing the Chemical Structure Index (CSI) at the Institute for Scientific Information (ISI). It was also the tool used to develop the CAOCI (Commercially Available Organic Chemical Intermediates) database, the datafile from which Accelrys' (successor to MDL) ACD file was developed. WLN is still being extensively used by BARK Information Services. Descriptions of how to encode molecules as WLN have been published in several books.
Examples
1H : methane
2H : ethane
3H : propane
1Y : isobutane
1X : neopentane
Q1 : methanol
1R : toluene
1V1 : acetone
2O2 : diethyl ether
1VR : acetophenone
ZR CVQ : 3-aminobenzoic acid
QVYZ1R : phenylalanine
QX2&2&2 : 3-ethylpentan-3-ol
QVY3&1VQ : 2-propylbutanedioic acid
L66J BMR& DSWQ IN1&1 : 6-dimethylamino-4-phenylamino-naphthalene-2-sulfonic acid
QVR-/G 5 : pentachlorobenzoic acid
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Select the chemical formula for this molecule.
A. HCl
B. HClN
C. HC
D. H2Cl
Answer:
|
sciq-4996
|
multiple_choice
|
What type of rain dissolves and damages stone buildings and statues?
|
[
"plastic rain",
"morning rain",
"stored rain",
"acid rain"
] |
D
|
Relavent Documents:
Document 0:::
Surface runoff (also known as overland flow or terrestrial runoff) is the unconfined flow of water over the ground surface, in contrast to channel runoff (or stream flow). It occurs when excess rainwater, stormwater, meltwater, or water from other sources can no longer infiltrate the soil sufficiently rapidly. This can occur when the soil is saturated by water to its full capacity, and the rain arrives more quickly than the soil can absorb it. Surface runoff often occurs because impervious areas (such as roofs and pavement) do not allow water to soak into the ground. Furthermore, runoff can occur either through natural or human-made processes.
Surface runoff is a major component of the water cycle. It is the primary agent of soil erosion by water. The land area producing runoff that drains to a common point is called a drainage basin.
Runoff that occurs on the ground surface before reaching a channel can be a nonpoint source of pollution, as it can carry human-made contaminants or natural forms of pollution (such as rotting leaves). Human-made contaminants in runoff include petroleum, pesticides, fertilizers and others. Much agricultural pollution is exacerbated by surface runoff, leading to a number of downstream impacts, including nutrient pollution that causes eutrophication.
In addition to causing water erosion and pollution, surface runoff in urban areas is a primary cause of urban flooding, which can result in property damage, damp and mold in basements, and street flooding.
Generation
Surface runoff is defined as precipitation (rain, snow, sleet, or hail) that reaches a surface stream without ever passing below the soil surface. It is distinct from direct runoff, which is runoff that reaches surface streams immediately after rainfall or melting snowfall and excludes runoff generated by the melting of snowpack or glaciers.
Snow and glacier melt occur only in areas cold enough for these to form permanently. Typically snowmelt will peak in the spring and glacie
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
The Géotechnique lecture is a biennial lecture on the topic of soil mechanics, organised by the British Geotechnical Association and named after its major scientific journal, Géotechnique.
This should not be confused with the annual BGA Rankine Lecture.
List of Géotechnique Lecturers
See also
Named lectures
Rankine Lecture
Terzaghi Lecture
External links
ICE Géotechnique journal
British Geotechnical Association
Document 3:::
Bioretention is the process in which contaminants and sedimentation are removed from stormwater runoff. The main objective of the bioretention cell is to attenuate peak runoff as well as to remove stormwater runoff pollutants.
Construction of a bioretention area
Stormwater is first directed into the designed treatment area, which conventionally consists of a sand bed (which serves as a transition to the actual soil), a filter media layer (which consists of layered materials of various composition), and plants atop the filter media. Various soil amendments, such as water treatment residue (WTR), coconut husk, and biochar, have been proposed over the years; these materials were reported to show enhanced performance in terms of pollutant removal. Runoff passes first over or through a sand bed, which slows the runoff's velocity and distributes it evenly along the length of the ponding area, which consists of a surface organic layer and/or groundcover and the underlying planting soil. Stored water in the bioretention area planting soil exfiltrates over a period of days into the underlying soils.
Filtration
Each of the components of the bioretention area is designed to perform a specific function. The grass buffer strip reduces incoming runoff velocity and filters particulates from the runoff. The sand bed also reduces the velocity, filters particulates, and spreads flow over the length of the bioretention area. Aeration and drainage of the planting soil are provided by the deep sand bed. The ponding area provides a temporary storage location for runoff prior to its evaporation or infiltration. Some particulates not filtered out by the grass filter strip or the sand bed settle within the ponding area.
The organic or mulch layer also filters pollutants and provides an environment conducive to the growth of microorganisms, which degrade petroleum-based products and other organic material. This layer acts in a similar way to the leaf litter in a forest and prevents the e
Document 4:::
A rainout is the process of precipitation causing the removal of radioactive particles from the atmosphere onto the ground, creating nuclear fallout by rain. The rainclouds of the rainout are often formed by the particles of a nuclear explosion itself and because of this, the decontamination of rainout is more difficult than a "dry" fallout.
In atmospheric science, rainout also refers to the removal of soluble species—not necessarily radioactive—from the atmosphere by precipitation.
Factors affecting rainout
A rainout could occur in the vicinity of ground zero or the contamination could be carried aloft before deposition depending on the current atmospheric conditions and how the explosion occurred. The explosion, or burst, can be air, surface, subsurface, or seawater. An air burst will produce less fallout than a comparable explosion near the ground due to less particulate being contaminated. Detonations at the surface will tend to produce more fallout material. In case of water surface bursts, the particles tend to be rather lighter and smaller, producing less local fallout but extending over a greater area. The particles contain mostly sea salts with some water; these can have a cloud seeding effect causing local rainout and areas of high local fallout. Fallout from a seawater burst is difficult to remove once it has soaked into porous surfaces because the fission products are present as metallic ions which become chemically bonded to many surfaces. For subsurface bursts, there is an additional phenomenon present called "base surge". The base surge is a cloud that rolls outward from the bottom of the subsiding column, which is caused by an excessive density of dust or water droplets in the air. This surge is made up of small solid particles, but it still behaves like a fluid. A soil earth medium favors base surge formation in an underground burst. Although the base surge typically contains only about 10% of the total bomb debris in a subsurface burst, it can cr
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of rain dissolves and damages stone buildings and statues?
A. plastic rain
B. morning rain
C. stored rain
D. acid rain
Answer:
|
|
sciq-10612
|
multiple_choice
|
By 180 million years ago, Pangaea began to do what?
|
[
"break up",
"combine",
"freeze",
"grow"
] |
A
|
Relavent Documents:
Document 0:::
The Mesozoic–Cenozoic Radiation is the third major extended increase of biodiversity in the Phanerozoic, after the Cambrian Explosion and the Great Ordovician Biodiversification Event, which appeared to exceed the equilibrium reached after the Ordovician radiation. First recognized through its identification in marine invertebrates, this evolutionary radiation began in the Mesozoic, after the Permian extinctions, and continues to the present day. This spectacular radiation affected both terrestrial and marine flora and fauna, during which the "modern" fauna came to replace much of the Paleozoic fauna. Notably, this radiation event was marked by the rise of angiosperms during the mid-Cretaceous, and the K-Pg extinction, which initiated the rapid increase in mammalian biodiversity.
Causes and significance
The exact causes of this extended increase in biodiversity are still being debated; however, the Mesozoic-Cenozoic radiation has often been related to large-scale paleogeographical changes. The fragmentation of the supercontinent Pangaea has been related to an increase in both marine and terrestrial biodiversity. The link between the fragmentation of supercontinents and biodiversity was first proposed by Valentine and Moores in 1972. They hypothesized that the isolation of terrestrial environments and the partitioning of oceanic water masses, as a result of the breaking up of Pangaea, resulted in an increase in allopatric speciation, which led to increased biodiversity. These smaller landmasses, while individually being less diverse than a supercontinent, contain a high degree of endemic species, resulting in an overall higher biodiversity than a single landmass of equivalent size. It is therefore argued that, similarly to the Ordovician biodiversification, the differentiation of biotas along environmental gradients caused by the fragmentation of a supercontinent was a driving force behind the Mesozoic-Cenozoic radiation.
Part of the dramatic increase in biodiversity during
Document 1:::
The history of life on Earth is closely associated with environmental change on multiple spatial and temporal scales. Climate change is a long-term change in the average weather patterns that have come to define Earth's local, regional and global climates. These changes have a broad range of observed effects that are synonymous with the term. Climate change is any significant long-term change in the expected pattern, whether due to natural variability or as a result of human activity. Predicting the effects that climate change will have on plant biodiversity can be achieved using various models; however, bioclimatic models are most commonly used.
Environmental conditions play a key role in defining the function and geographic distributions of plants, in combination with other factors, thereby modifying patterns of biodiversity. Changes in long-term environmental conditions that can be collectively termed climate change are known to have had enormous impacts on current plant diversity patterns; further impacts are expected in the future. It is predicted that climate change will remain one of the major drivers of biodiversity patterns in the future. Climate change is thought to be one of several factors causing the currently ongoing human-triggered mass extinction, which is changing the distribution and abundance of many plants.
Palaeo context
The Earth has experienced a constantly changing climate in the time since plants first evolved. In comparison to the present day, this history has seen Earth as cooler, warmer, drier and wetter, and CO2 (carbon dioxide) concentrations have been both higher and lower. These changes have been reflected by constantly shifting vegetation, for example forest communities dominating most areas in interglacial periods, and herbaceous communities dominating during glacial periods. It has been shown through fossil records that past climatic change has been a major driver of the processes of speciation and extinction. The best known example
Document 2:::
Megaevolution describes the most dramatic events in evolution. It is no longer suggested that the evolutionary processes involved are necessarily special, although in some cases they might be. Whereas macroevolution can apply to relatively modest changes that produced diversification of species and genera and are readily compared to microevolution, "megaevolution" is used for great changes. Megaevolution has been extensively debated because it has been seen as a possible objection to Charles Darwin's theory of gradual evolution by natural selection.
A list was prepared by John Maynard Smith and Eörs Szathmáry which they called The Major Transitions in Evolution. On the 1999 edition of the list they included:
Replicating molecules: change to populations of molecules in protocells
Independent replicators leading to chromosomes
RNA as gene and enzyme change to DNA genes and protein enzymes
Bacterial cells (prokaryotes) leading to cells (eukaryotes) with nuclei and organelles
Asexual clones leading to sexual populations
Single-celled organisms leading to fungi, plants and animals
Solitary individuals leading to colonies with non-reproducing castes (termites, ants & bees)
Primate societies leading to human societies with language
Some of these topics had been discussed before.
Numbers one to six on the list are events which are of huge importance, but about which we know relatively little. All occurred before (and mostly very much before) the fossil record started, or at least before the Phanerozoic eon.
Numbers seven and eight on the list are of a different kind from the first six, and have generally not been considered by the other authors. Number four is of a type which is not covered by traditional evolutionary theory: the origin of eukaryotic cells is probably due to symbiosis between prokaryotes. This is a kind of evolution which must be a rare event.
The Cambrian radiation example
The Cambrian explosion or Cambrian radiation was the relatively rapid appeara
Document 3:::
Timeline
Paleontology
Paleontology timelines
Document 4:::
The farming/language dispersal hypothesis proposes that many of the largest language families in the world dispersed along with the expansion of agriculture. This hypothesis was proposed by archaeologists Peter Bellwood and Colin Renfrew. It has been widely debated and archaeologists, linguists, and geneticists often disagree with all or part of the hypothesis.
The hypothesis
The farming/language dispersal hypothesis links the spread of farming in pre-historic times with the spread of languages and language families. The hypothesis is that a language family begins when a society with its own language adopts farming as a primary means of subsistence while its neighbors are hunter-gatherers who speak unrelated languages. A sedentary farming society supports a much greater density of population than its neighboring nomadic or semi-nomadic hunter-gatherers. The language of the farming society displaces that of the hunter-gatherer society which may also become agricultural. Farming and the language of the original farmers spread to more and more societies. In some cases the original language, which evolves over time into many different but related languages, has attained world-wide dispersion.
In sum, "the farming/language dispersal hypothesis makes the radical and controversial proposal that the present-day distributions of many of the world's languages and language families can be traced back to the early developments and dispersals of farming..."
Examples
Indo-European
The Anatolian hypothesis states that Proto-Indo-Europeans lived in Anatolia throughout the Neolithic period, and that the spread of the Indo-European language was associated with the Neolithic Revolution of the 7th-6th millennium BC. It claims that the Proto-Indo-European language spread from Asia Minor to Europe around 7000 BC with the Neolithic Revolution and peacefully mixed with indigenous peoples. Therefore, most Neolithic Europeans spoke an Indo-European language, and later migrations replaced
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
By 180 million years ago, Pangaea began to do what?
A. break up
B. combine
C. freeze
D. grow
Answer:
|
|
sciq-9645
|
multiple_choice
|
The major component of what cellular structures is the phospholipid bilayer?
|
[
"nuclei",
"ribosomes",
"cytoplasm",
"cell membranes"
] |
D
|
Relavent Documents:
Document 0:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, the double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries one or more chromosomes with a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 1:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than those of the later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA, along with some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In the cytoplasm, the endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types: rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances taken into the cell through processes such as phagocytosis, a form of endocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
Document 2:::
A bilayer is a double layer of closely packed atoms or molecules.
The properties of bilayers are often studied in condensed matter physics, particularly in the context of semiconductor devices, where two distinct materials are united to form junctions, such as p–n junctions, Schottky junctions, etc. Layered materials, such as graphene, boron nitride, or transition metal dichalcogenides, have unique electronic properties as a bilayer system and are an active area of current research.
In biology a common example is the lipid bilayer, which describes the structure of multiple organic structures, such as the membrane of a cell.
See also
Monolayer
Non-carbon nanotube
Semiconductor
Thin film
Document 3:::
Cell theory has its origins in seventeenth-century microscopy observations, but it was nearly two hundred years before a complete cell membrane theory was developed to explain what separates cells from the outside world. By the 19th century it was accepted that some form of semi-permeable barrier must exist around a cell. Studies of the action of anesthetic molecules led to the theory that this barrier might be made of some sort of fat (lipid), but the structure was still unknown. A series of pioneering experiments in 1925 indicated that this barrier membrane consisted of two molecular layers of lipids, a lipid bilayer. New tools over the next few decades confirmed this theory, but controversy remained regarding the role of proteins in the cell membrane. Eventually the fluid mosaic model was proposed, in which proteins "float" in a fluid lipid bilayer "sea". Although simplistic and incomplete, this model is still widely referenced today.
Early barrier theories
Since the invention of the microscope in the seventeenth century it has been known that plant and animal tissue is composed of cells: the cell was discovered by Robert Hooke. The plant cell wall was easily visible even with these early microscopes, but no similar barrier was visible on animal cells, though it stood to reason that one must exist. By the mid-19th century, this question was being actively investigated, and Moritz Traube noted that this outer layer must be semipermeable to allow transport of ions. Traube had no direct evidence for the composition of this film, though, and incorrectly asserted that it was formed by an interfacial reaction of the cell protoplasm with the extracellular fluid.
The lipid nature of the cell membrane was first correctly intuited by Georg Hermann Quincke in 1888, who noted that a cell generally forms a spherical shape in water and, when broken in half, forms two smaller spheres. The only other known material to exhibit this behavior was oil. He al
Document 4:::
This is a list of articles on biophysics.
0–9
5-HT3 receptor
A
ACCN1
ANO1
AP2 adaptor complex
Aaron Klug
Acid-sensing ion channel
Activating function
Active transport
Adolf Eugen Fick
Afterdepolarization
Aggregate modulus
Aharon Katzir
Alan Lloyd Hodgkin
Alexander Rich
Alexander van Oudenaarden
Allan McLeod Cormack
Alpha-3 beta-4 nicotinic receptor
Alpha-4 beta-2 nicotinic receptor
Alpha-7 nicotinic receptor
Alpha helix
Alwyn Jones (biophysicist)
Amoeboid movement
Andreas Mershin
Andrew Huxley
Animal locomotion
Animal locomotion on the water surface
Anita Goel
Antiporter
Aquaporin 2
Aquaporin 3
Aquaporin 4
Archibald Hill
Ariel Fernandez
Arthropod exoskeleton
Arthropod leg
Avery Gilbert
B
BEST2
BK channel
Bacterial outer membrane
Balance (ability)
Bat
Bat wing development
Bert Sakmann
Bestrophin 1
Biased random walk (biochemistry)
Bioelectrochemical reactor
Bioelectrochemistry
Biofilm
Biological material
Biological membrane
Biomechanics
Biomechanics of sprint running
Biophysical Society
Biophysics
Bird flight
Bird migration
Bisindolylmaleimide
Bleb (cell biology)
Boris Pavlovich Belousov
Brian Matthews (biochemist)
Britton Chance
Brush border
Bulk movement
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The major component of what cellular structures is the phospholipid bilayer?
A. nuclei
B. ribosomes
C. cytoplasm
D. cell membranes
Answer:
|
|
sciq-7002
|
multiple_choice
|
Each lymph organ has a different job in what system?
|
[
"circulatory",
"respiratory",
"immune",
"nervous"
] |
C
|
Relavent Documents:
Document 0:::
The lymphatic system, or lymphoid system, is an organ system in vertebrates that is part of the immune system, and complementary to the circulatory system. It consists of a large network of lymphatic vessels, lymph nodes, lymphoid organs, lymphoid tissues and lymph. Lymph is a clear fluid carried by the lymphatic vessels back to the heart for re-circulation. (The Latin word for lymph, lympha, refers to the deity of fresh water, "Lympha").
Unlike the circulatory system, which is a closed system, the lymphatic system is open. The human circulatory system processes an average of 20 litres of blood per day through capillary filtration, which removes plasma from the blood. Roughly 17 litres of the filtered plasma is reabsorbed directly into the blood vessels, while the remaining three litres are left in the interstitial fluid. One of the main functions of the lymphatic system is to provide an accessory return route to the blood for the surplus three litres.
The other main function is that of immune defense. Lymph is very similar to blood plasma, in that it contains waste products and cellular debris, together with bacteria and proteins. The cells of the lymph are mostly lymphocytes. Associated lymphoid organs are composed of lymphoid tissue, and are the sites either of lymphocyte production or of lymphocyte activation. These include the lymph nodes (where the highest lymphocyte concentration is found), the spleen, the thymus, and the tonsils. Lymphocytes are initially generated in the bone marrow. The lymphoid organs also contain other types of cells such as stromal cells for support. Lymphoid tissue is also associated with mucosas such as mucosa-associated lymphoid tissue (MALT).
Fluid from circulating blood leaks into the tissues of the body by capillary action, carrying nutrients to the cells. The fluid bathes the tissues as interstitial fluid, collecting waste products, bacteria, and damaged cells, and then drains as lymph into the lymphatic capillaries and lymphatic
Document 1:::
In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from cells of the same type that act together to perform a function. Tissues of different types combine to form an organ, which has a specific function. The intestinal wall, for example, is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system.
An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs.
The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body.
Animals
Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam
Document 2:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
Document 3:::
Lymph node stromal cells are essential to the structure and function of the lymph node whose functions include: creating an internal tissue scaffold for the support of hematopoietic cells; the release of small molecule chemical messengers that facilitate interactions between hematopoietic cells; the facilitation of the migration of hematopoietic cells; the presentation of antigens to immune cells at the initiation of the adaptive immune system; and the homeostasis of lymphocyte numbers. Stromal cells originate from multipotent mesenchymal stem cells.
Structure
Lymph nodes are enclosed in an external fibrous capsule, from which thin walls of sinew called trabeculae penetrate into the lymph node, partially dividing it. Beneath the external capsule and along the courses of the trabeculae, are peritrabecular and subcapsular sinuses. These sinuses are cavities containing macrophages (specialised cells which help to keep the extracellular matrix in order).
The interior of the lymph node has two regions: the cortex and the medulla. In the cortex, lymphoid tissue is organized into nodules. In the nodules, T lymphocytes are located in the T cell zone. B lymphocytes are located in the B cell follicle. The primary B cell follicle matures in germinal centers. In the medulla are hematopoietic cells (which contribute to the formation of the blood) and stromal cells.
Near the medulla is the hilum of lymph node. This is the place where blood vessels enter and leave the lymph node and lymphatic vessels leave the lymph node. Lymph vessels entering the node do so along the perimeter (outer surface).
Function
The lymph nodes, the spleen and Peyer's patches, together are known as secondary lymphoid organs. Lymph nodes are found between lymphatic ducts and blood vessels. Afferent lymphatic vessels bring lymph fluid from the peripheral tissues to the lymph nodes. The lymph tissue in the lymph nodes consists of immune cells (95%), for example lymphocytes, and stromal cells (1% to
Document 4:::
A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and is determined by different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism.
Organ and tissue systems
These specific systems are widely studied in human anatomy and are also present in many other animals.
Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm.
Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus.
Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels.
Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine.
Integumentary system: skin, hair, fat, and nails.
Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons.
Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands.
Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. The lymphatic system includes functions including immune responses and development of antibodies.
Immune system: protects the organism from
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Each lymph organ has a different job in what system?
A. circulatory
B. respiratory
C. immune
D. nervous
Answer:
|
|
sciq-7824
|
multiple_choice
|
The location of an object in a frame of reference is called what?
|
[
"position",
"change",
"fact",
"marker"
] |
A
|
Relavent Documents:
Document 0:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
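A minimal sketch of this structure, assuming a toy domain of three skills in which b and c each require a: knowledge states are represented as frozensets, and the closure-under-union property that knowledge spaces (and antimatroids) require is checked directly. The specific states are illustrative assumptions.

```python
from itertools import combinations

# Toy domain {a, b, c}: 'a' has no prerequisites; 'b' and 'c' require 'a'.
states = {
    frozenset(),        # knows nothing yet
    frozenset("a"),
    frozenset("ab"),
    frozenset("ac"),
    frozenset("abc"),   # full mastery
}

def is_union_closed(family):
    # A knowledge space must contain the union of any two of its states.
    return all(s | t in family for s, t in combinations(family, 2))

print(is_union_closed(states))  # True
```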
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
Document 1:::
In physics and astronomy, a frame of reference (or reference frame) is an abstract coordinate system whose origin, orientation, and scale are specified by a set of reference points―geometric points whose position is identified both mathematically (with numerical coordinate values) and physically (signaled by conventional markers).
For n dimensions, n + 1 reference points are sufficient to fully define a reference frame. Using rectangular Cartesian coordinates, a reference frame may be defined with a reference point at the origin and a reference point at one unit distance along each of the n coordinate axes.
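As a small numerical illustration of this definition (in two dimensions, with arbitrary example points): the frame is fixed by an origin plus one point at unit distance along each axis, and any other point's coordinates in that frame follow by solving a linear system.

```python
import numpy as np

# Three reference points (n + 1 = 3 for n = 2): an origin and one point
# at unit distance along each axis. The points are illustrative.
origin = np.array([2.0, 1.0])
x_point = origin + np.array([np.cos(0.3), np.sin(0.3)])   # unit step, axis 1
y_point = origin + np.array([-np.sin(0.3), np.cos(0.3)])  # unit step, axis 2

# Basis vectors of the frame, read off from the reference points.
basis = np.column_stack([x_point - origin, y_point - origin])

def to_frame(p):
    # Coordinates of a world point p expressed in this reference frame.
    return np.linalg.solve(basis, p - origin)

print(to_frame(np.array([3.0, 2.0])))  # coordinates relative to the frame
```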
In Einsteinian relativity, reference frames are used to specify the relationship between a moving observer and the phenomenon under observation. In this context, the term often becomes observational frame of reference (or observational reference frame), which implies that the observer is at rest in the frame, although not necessarily located at its origin. A relativistic reference frame includes (or implies) the coordinate time, which does not equate across different reference frames moving relative to each other. The situation thus differs from Galilean relativity, in which all possible coordinate times are essentially equivalent.
Definition
The need to distinguish between the various meanings of "frame of reference" has led to a variety of terms. For example, sometimes the type of coordinate system is attached as a modifier, as in Cartesian frame of reference. Sometimes the state of motion is emphasized, as in rotating frame of reference. Sometimes the way it transforms to frames considered to be related is emphasized, as in Galilean frame of reference. Sometimes frames are distinguished by the scale of their observations, as in macroscopic and microscopic frames of reference.
In this article, the term observational frame of reference is used when emphasis is upon the state of motion rather than upon the coordinate choice or the character of the observations or o
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
In theoretical physics, a local reference frame (local frame) refers to a coordinate system or frame of reference that is only expected to function over a small region or a restricted region of space or spacetime.
The term is most often used in the context of the application of local inertial frames to small regions of a gravitational field. Although gravitational tidal forces will cause the background geometry to become noticeably non-Euclidean over larger regions, if we restrict ourselves to a sufficiently small region containing a cluster of objects falling together in an effectively uniform gravitational field, their physics can be described as the physics of that cluster in a space free from explicit background gravitational effects.
Equivalence principle
When constructing his general theory of relativity, Einstein made the following observation: a freely falling object in a gravitational field will not be able to detect the existence of the field by making local measurements ("a falling man feels no gravity"). Einstein was then able to complete his general theory by arguing that the physics of curved spacetime must reduce, over small freely falling regions, to the physics of simple inertial mechanics (in this case special relativity).
Einstein referred to this as "the happiest idea of my life".
Laboratory frame
In physics, the laboratory frame of reference, or lab frame for short, is a frame of reference centered on the laboratory in which the experiment (either real or thought experiment) is done. This is the reference frame in which the laboratory is at rest. Also, this is usually the frame of reference in which measurements are made, since they are presumed (unless stated otherwise) to be made by laboratory instruments. An example of instruments in a lab frame, would be the particle detectors at the detection facility of a particle accelerator.
See also
Breit frame
Center-of-mass frame
Frame bundle
Inertial frame of reference
Local coo
Document 4:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The location of an object in a frame of reference is called what?
A. position
B. change
C. fact
D. marker
Answer:
|
|
sciq-9129
|
multiple_choice
|
In angiosperms, flowers and fruits are adaptations essential for what process?
|
[
"photosynthesis",
"reproduction",
"death",
"variation"
] |
B
|
Relevant Documents:
Document 0:::
Phytomorphology is the study of the physical form and external structure of plants. This is usually considered distinct from plant anatomy, which is the study of the internal structure of plants, especially at the microscopic level. Plant morphology is useful in the visual identification of plants. Recent studies in molecular biology have started to investigate the molecular processes involved in determining the conservation and diversification of plant morphologies. In these studies, transcriptome conservation patterns were found to mark crucial ontogenetic transitions during the plant life cycle, which may result in evolutionary constraints limiting diversification.
Scope
Plant morphology "represents a study of the development, form, and structure of plants, and, by implication, an attempt to interpret these on the basis of similarity of plan and origin". There are four major areas of investigation in plant morphology, and each overlaps with another field of the biological sciences.
First of all, morphology is comparative, meaning that the morphologist examines structures in many different plants of the same or different species, then draws comparisons and formulates ideas about similarities. When structures in different species are believed to exist and develop as a result of common, inherited genetic pathways, those structures are termed homologous. For example, the leaves of pine, oak, and cabbage all look very different, but share certain basic structures and arrangement of parts. The homology of leaves is an easy conclusion to make. The plant morphologist goes further, and discovers that the spines of cactus also share the same basic structure and development as leaves in other plants, and therefore cactus spines are homologous to leaves as well. This aspect of plant morphology overlaps with the study of plant evolution and paleobotany.
Secondly, plant morphology observes both the vegetative (somatic) structures of plants, as well as the reproductive str
Document 1:::
Organography (from Greek , organo, "organ"; and , -graphy) is the scientific description of the structure and function of the organs of living things.
History
Organography as a scientific study starts with Aristotle, who considered the parts of plants as "organs" and began to consider the relationship between different organs and different functions. In the 17th century, Joachim Jung clearly articulated that plants are composed of different organ types such as root, stem and leaf, and he went on to define these organ types on the basis of form and position.
In the following century Caspar Friedrich Wolff was able to follow the development of organs from the "growing points" or apical meristems. He noted the commonality of development between foliage leaves and floral leaves (e.g. petals) and wrote: "In the whole plant, whose parts we wonder at as being, at the first glance, so extraordinarily diverse, I finally perceive and recognize nothing beyond leaves and stem (for the root may be regarded as a stem). Consequently all parts of the plant, except the stem, are modified leaves."
Similar views were propounded by Goethe in his well-known treatise. He wrote: "The underlying relationship between the various external parts of the plant, such as the leaves, the calyx, the corolla, the stamens, which develop one after the other and, as it were, out of one another, has long been generally recognized by investigators, and has in fact been specially studied; and the operation by which one and the same organ presents itself to us in various forms has been termed Metamorphosis of Plants."
See also
morphology (biology)
Document 2:::
Plant genetics is the study of genes, genetic variation, and heredity specifically in plants. It is generally considered a field of biology and botany, but intersects frequently with many other life sciences and is strongly linked with the study of information systems. Plant genetics is similar in many ways to animal genetics but differs in a few key areas.
The discoverer of genetics was Gregor Mendel, a late 19th-century scientist and Augustinian friar. Mendel studied "trait inheritance", patterns in the way traits are handed down from parents to offspring. He observed that organisms (most famously pea plants) inherit traits by way of discrete "units of inheritance". This term, still used today, is a somewhat ambiguous definition of what is referred to as a gene. Much of Mendel's work with plants still forms the basis for modern plant genetics.
Plants, like all known organisms, use DNA to pass on their traits. Animal genetics often focuses on parentage and lineage, but this can sometimes be difficult in plant genetics due to the fact that plants can, unlike most animals, be self-fertile. Speciation can be easier in many plants due to unique genetic abilities, such as being well adapted to polyploidy. Plants are unique in that they are able to produce energy-dense carbohydrates via photosynthesis, a process which is achieved by use of chloroplasts. Chloroplasts, like the superficially similar mitochondria, possess their own DNA. Chloroplasts thus provide an additional reservoir for genes and genetic diversity, and an extra layer of genetic complexity not found in animals.
The study of plant genetics has major economic impacts: many staple crops are genetically modified to increase yields, confer pest and disease resistance, provide resistance to herbicides, or to increase their nutritional value.
History
The earliest evidence of plant domestication found has been dated to 11,000 years before present in ancestral wheat. While initially selection may have happene
Document 3:::
Fruit tree propagation is usually carried out vegetatively (non-sexually) by grafting or budding a desired variety onto a suitable rootstock.
Perennial plants can be propagated either by sexual or vegetative means. Sexual reproduction begins when a male germ cell (pollen) from one flower fertilises a female germ cell (ovule, incipient seed) of the same species, initiating the development of a fruit containing seeds. Each seed, when germinated, can grow to become a new specimen tree. However, the new tree inherits characteristics of both its parents, and it will not grow true to the variety of either parent from which it came. That is, it will be a fresh individual with an unpredictable combination of characteristics of its own. Although this is desirable in terms of producing novel combinations from the richness of the gene pool of the two parent plants (such sexual recombination is the source of new cultivars), only rarely will the resulting new fruit tree be directly useful or attractive to the tastes of humankind. Most new plants will have characteristics that lie somewhere between those of the two parents.
Therefore, from the orchard grower or gardener's point of view, it is preferable to propagate fruit cultivars vegetatively in order to ensure reliability. This involves taking a cutting (or scion) of wood from a desirable parent tree which is then grown on to produce a new plant or "clone" of the original. In effect this means that the original Bramley apple tree, for example, was a successful variety grown from a pip, but that every Bramley since then has been propagated by taking cuttings of living matter from that tree, or one of its descendants.
Methods
The simplest method of propagating a tree vegetatively is rooting or taking cuttings. A cutting (usually a piece of stem of the parent plant) is cut off and stuck into soil. Artificial rooting hormones are sometimes used to improve chances of success. If the cutting does not die from rot-inducing fungi o
Document 4:::
Phenomics is the systematic study of traits that make up a phenotype. It was coined by UC Berkeley and LBNL scientist Steven A. Garan. As such, it is a transdisciplinary area of research that involves biology, data sciences, engineering and other fields. Phenomics is concerned with the measurement of the phenotype, where a phenome is a set of traits (physical and biochemical) that can be produced by a given organism over the course of development and in response to genetic mutation and environmental influences. It is also important to remember that an organism's phenotype changes with time. The relationship between phenotype and genotype enables researchers to understand and study pleiotropy. Phenomics concepts are used in functional genomics, pharmaceutical research, metabolic engineering, agricultural research, and increasingly in phylogenetics.
Technical challenges involve improving, both qualitatively and quantitatively, the capacity to measure phenomes.
Applications
Plant sciences
In plant sciences, phenomics research occurs in both field and controlled environments. Field phenomics encompasses the measurement of phenotypes that occur in both cultivated and natural conditions, whereas controlled environment phenomics research involves the use of glass houses, growth chambers, and other systems where growth conditions can be manipulated. The University of Arizona's Field Scanner in Maricopa, Arizona is a platform developed to measure field phenotypes. Controlled environment systems include the Enviratron at Iowa State University, the Plant Cultivation Hall under construction at IPK, and platforms at the Donald Danforth Plant Science Center, the University of Nebraska-Lincoln, and elsewhere.
Standards, methods, tools, and instrumentation
A Minimal Information About a Plant Phenotyping Experiment (MIAPPE) standard is available and in use among many researchers collecting and organizing plant phenomics data. A diverse set of computer vision methods exist
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In angiosperms, flowers and fruits are adaptations essential for what process?
A. photosynthesis
B. reproduction
C. death
D. variation
Answer:
|
|
sciq-1684
|
multiple_choice
|
Nutrients from food are absorbed by the blood for transport around the body as part of what system?
|
[
"respiratory",
"growth",
"circulatory",
"digestive"
] |
D
|
Relevant Documents:
Document 0:::
Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use.
In the human digestive system, food enters the mouth and mechanical digestion of the food starts by the action of mastication (chewing), a form of mechanical digestion, and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food; the saliva also contains mucus, which lubricates the food, and hydrogen carbonate, which provides the ideal conditions of pH (alkaline) for amylase to work, and electrolytes (Na+, K+, Cl−, HCO−3). About 30% of starch is hydrolyzed into disaccharide in the oral cavity (mouth). After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. They provide a slimy layer that acts as a shield against the damag
Document 1:::
Animal nutrition focuses on the dietary nutrients needs of animals, primarily those in agriculture and food production, but also in zoos, aquariums, and wildlife management.
Constituents of diet
Macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built) and energy. Some of the structural material can be used to generate energy internally, though the net energy depends on such factors as absorption and digestive effort, which vary substantially from instance to instance. Vitamins, minerals, fiber, and water do not provide energy, but are required for other reasons. A third class of dietary material, fiber (i.e., non-digestible material such as cellulose), also seems to be required, for both mechanical and biochemical reasons, though the exact reasons remain unclear.
Molecules of carbohydrates and fats consist of carbon, hydrogen, and oxygen atoms. Carbohydrates range from simple monosaccharides (glucose, fructose, galactose) to complex polysaccharides (starch). Fats are triglycerides, made of assorted fatty acid monomers bound to glycerol backbone. Some fatty acids, but not all, are essential in the diet: they cannot be synthesized in the body. Protein molecules contain nitrogen atoms in addition to carbon, oxygen, and hydrogen. The fundamental components of protein are nitrogen-containing amino acids. Essential amino acids cannot be made by the animal. Some of the amino acids are convertible (with the expenditure of energy) to glucose and can be used for energy production just as ordinary glucose. By breaking down existing protein, some glucose can be produced internally; the remaining amino acids are discarded, primarily as urea in urine. This occurs normally only during prolonged starvation.
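As a minimal sketch of this energy bookkeeping, the snippet below assumes the commonly cited Atwater factors of roughly 4 kcal/g for carbohydrate and protein and 9 kcal/g for fat (these factors are an assumption supplied for illustration, not values stated in the passage):

# Estimate dietary energy from macronutrient masses using Atwater factors.
ATWATER_KCAL_PER_G = {"carbohydrate": 4.0, "protein": 4.0, "fat": 9.0}

def dietary_energy_kcal(grams):
    """grams: dict mapping macronutrient name -> grams consumed."""
    return sum(ATWATER_KCAL_PER_G[name] * g for name, g in grams.items())

meal = {"carbohydrate": 60.0, "protein": 20.0, "fat": 10.0}
kcal = dietary_energy_kcal(meal)
print(kcal, kcal * 4.184)   # 410 kcal, roughly 1715 kJ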
Other dietary substances found in plant foods (phytochemicals, polyphenols) are not identified as essential nutrients but appear to impact healt
Document 2:::
The blood circulatory system is a system of organs that includes the heart, blood vessels, and blood which is circulated throughout the entire body of a human or other vertebrate. It includes the cardiovascular system, or vascular system, that consists of the heart and blood vessels (from Greek kardia meaning heart, and from Latin vascula meaning vessels). The circulatory system has two divisions, a systemic circulation or circuit, and a pulmonary circulation or circuit. Some sources use the terms cardiovascular system and vascular system interchangeably with the circulatory system.
The network of blood vessels comprises the great vessels of the heart, including large elastic arteries and large veins; other arteries; smaller arterioles; capillaries that join with venules (small veins); and other veins. The circulatory system is closed in vertebrates, which means that the blood never leaves the network of blood vessels. Some invertebrates such as arthropods have an open circulatory system. Diploblasts such as sponges and comb jellies lack a circulatory system.
Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets; it is circulated around the body carrying oxygen and nutrients to the tissues and collecting and disposing of waste materials. Circulated nutrients include proteins and minerals and other components include hemoglobin, hormones, and gases such as oxygen and carbon dioxide. These substances provide nourishment, help the immune system to fight diseases, and help maintain homeostasis by stabilizing temperature and natural pH.
In vertebrates, the lymphatic system is complementary to the circulatory system. The lymphatic system carries excess plasma (filtered from the circulatory system capillaries as interstitial fluid between cells) away from the body tissues via accessory routes that return excess fluid back to blood circulation as lymph. The lymphatic system is a subsystem that is essential for the functioning of the bloo
Document 3:::
The Joan Mott Prize Lecture is a prize lecture awarded annually by The Physiological Society in honour of Joan Mott.
Laureates
Laureates of the award have included:
- Intestinal absorption of sugars and peptides: from textbook to surprises
See also
Physiological Society Annual Review Prize Lecture
Document 4:::
Human nutrition deals with the provision of essential nutrients in food that are necessary to support human life and good health. Poor nutrition is a chronic problem often linked to poverty, food security, or a poor understanding of nutritional requirements. Malnutrition and its consequences are large contributors to deaths, physical deformities, and disabilities worldwide. Good nutrition is necessary for children to grow physically and mentally, and for normal human biological development.
Overview
The human body contains chemical compounds such as water, carbohydrates, amino acids (found in proteins), fatty acids (found in lipids), and nucleic acids (DNA and RNA). These compounds are composed of elements such as carbon, hydrogen, oxygen, nitrogen, and phosphorus. Any study done to determine nutritional status must take into account the state of the body before and after experiments, as well as the chemical composition of the whole diet and of all the materials excreted and eliminated from the body (including urine and feces).
Nutrients
The seven major classes of nutrients are carbohydrates, fats, fiber, minerals, proteins, vitamins, and water. Nutrients can be grouped as either macronutrients or micronutrients (needed in small quantities). Carbohydrates, fats, and proteins are macronutrients, and provide energy. Water and fiber are macronutrients but do not provide energy. The micronutrients are minerals and vitamins.
The macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built), and energy. Some of the structural material can also be used to generate energy internally, and in either case it is measured in Joules or kilocalories (often called "Calories" and written with a capital 'C' to distinguish them from little 'c' calories). Carbohydrates and proteins provide approximately 17 kJ (4 kcal) of energy per gram, while fats prov
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Nutrients from food are absorbed by the blood for transport around the body as part of what system?
A. respiratory
B. growth
C. circulatory
D. digestive
Answer:
|
|
ai2_arc-26
|
multiple_choice
|
Stars are often classified by their apparent brightness in the nighttime sky. Stars can also be classified in many other ways. Which of these is least useful in classifying stars?
|
[
"visible color",
"composition",
"surface texture",
"temperature"
] |
C
|
Relevant Documents:
Document 0:::
A color–color diagram is a means of comparing the colors of an astronomical object at different wavelengths. Astronomers typically observe at narrow bands around certain wavelengths, and objects observed will have different brightnesses in each band. The difference in brightness between two bands is referred to as color. On color–color diagrams, the color defined by two wavelength bands is plotted on the horizontal axis, and the color defined by another brightness difference will be plotted on the vertical axis.
Background
Although stars are not perfect blackbodies, to first order the spectra of light emitted by stars conform closely to a black-body radiation curve, sometimes also referred to as a thermal radiation curve. The overall shape of a black-body curve is uniquely determined by its temperature, and the wavelength of peak intensity is inversely proportional to temperature, a relation known as Wien's Displacement Law. Thus, observation of a stellar spectrum allows determination of its effective temperature. Obtaining complete spectra for stars through spectrometry is much more involved than simple photometry in a few bands. Thus, by comparing a star's magnitudes across several color indices, the effective temperature of the star can still be determined, as the magnitude differences between colors will be unique for that temperature. As such, color-color diagrams can be used as a means of representing the stellar population, much like a Hertzsprung–Russell diagram, and stars of different spectral classes will inhabit different parts of the diagram. This feature leads to applications within various wavelength bands.
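The inverse relation between peak wavelength and temperature can be sketched directly from Wien's displacement law; the constant and the example temperatures below are standard textbook values used here only for illustration:

# Wien's displacement law: the peak of the blackbody spectrum moves to
# shorter wavelengths as the effective temperature rises.
WIEN_B = 2.898e-3   # Wien displacement constant, m*K

def peak_wavelength_nm(temperature_k):
    return WIEN_B / temperature_k * 1e9

print(peak_wavelength_nm(5778))   # Sun-like star: ~502 nm
print(peak_wavelength_nm(3500))   # cool M-type star: ~828 nm, near-infrared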
In the stellar locus, stars tend to align in a more or less straight feature. If stars were perfect black bodies, the stellar locus would be a pure straight line indeed. The divergences with the straight line are due to the absorptions and emission lines in the stellar spectra. These divergences can be more or less evident depending
Document 1:::
Starspots are stellar phenomena, so-named by analogy with sunspots.
Spots as small as sunspots have not been detected on other stars, as they would cause undetectably small fluctuations in brightness. The commonly observed starspots are in general much larger than those on the Sun: up to about 30% of the stellar surface may be covered, corresponding to starspots 100 times larger than those on the Sun.
Detection and measurements
To detect and measure the extent of starspots one uses several types of methods.
For rapidly rotating stars – Doppler imaging and Zeeman-Doppler imaging. With the Zeeman-Doppler imaging technique the direction of the magnetic field on stars can be determined since spectral lines are split according to the Zeeman effect, revealing the direction and magnitude of the field.
For slowly rotating stars – Line Depth Ratio (LDR). Here one measures two different spectral lines, one sensitive to temperature and one which is not. Since starspots have a lower temperature than their surroundings, the temperature-sensitive line changes its depth. From the difference between these two lines the temperature and size of the spot can be calculated, with a temperature accuracy of 10 K.
For eclipsing binary stars – Eclipse mapping produces images and maps of spots on both stars.
For giant binary stars - Very-long-baseline interferometry
For stars with transiting extrasolar planets – Light curve variations.
Temperature
Observed starspots have a temperature which is in general 500–2000 kelvins cooler than the stellar photosphere. This temperature difference could give rise to a brightness variation up to 0.6 magnitudes between the spot and the surrounding surface. There also seems to be a relation between the spot temperature and the temperature for the stellar photosphere, indicating that starspots behave similarly for different types of stars (observed in G–K dwarfs).
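Under a simple blackbody idealization (an assumption made here for illustration, not stated in the passage), the brightness drop from a given spot coverage and temperature contrast can be estimated as follows:

import math

# Brightness drop (in magnitudes) when a fraction f of the stellar surface is
# covered by spots cooler than the photosphere, using Stefan-Boltzmann T**4
# scaling for the surface flux.
def spot_dimming_mag(t_phot, t_spot, f):
    flux_ratio = 1.0 - f * (1.0 - (t_spot / t_phot) ** 4)
    return -2.5 * math.log10(flux_ratio)

# 30% coverage, spots 1500 K cooler than a 5800 K photosphere:
print(spot_dimming_mag(5800.0, 4300.0, 0.30))   # ~0.26 mag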
Lifetimes
The lifetime for a starspot depends on its size.
For small spots the lifetim
Document 2:::
Astrophysics is a science that employs the methods and principles of physics and chemistry in the study of astronomical objects and phenomena. As one of the founders of the discipline, James Keeler, said, Astrophysics "seeks to ascertain the nature of the heavenly bodies, rather than their positions or motions in space–what they are, rather than where they are." Among the subjects studied are the Sun (solar physics), other stars, galaxies, extrasolar planets, the interstellar medium and the cosmic microwave background. Emissions from these objects are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists apply concepts and methods from many disciplines of physics, including classical mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.
In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include their attempts to determine the properties of dark matter, dark energy, black holes, and other celestial bodies; and the origin and ultimate fate of the universe. Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity, special relativity, quantum and physical cosmology, including string cosmology and astroparticle physics.
History
Astronomy is an ancient science, long separated from the study of terrestrial physics. In the Aristotelian worldview, bodies in the sky appeared to be unchanging spheres whose only motion was uniform motion in a circle, while the earthl
Document 3:::
In astronomy, a spectral atlas is a collection of spectra of one or more objects, intended as a reference work for comparison with spectra of other objects. Several different types of collections are titled spectral atlases: those intended for spectral classification, for key reference, or as a collection of spectra of a general type of object.
In any spectral atlas, generally all the spectra have been taken with the same equipment, or with very similar instruments at different locations, to provide data as uniform as possible in its spectral resolution, wavelength coverage, noise characteristics, etc.
Types
For spectral classification
When assigning a spectral classification, a spectral atlas is a collection of standard spectra of stars with known spectral types, against which a spectrum of an unknown star is compared. It is analogous to an identification key in biology. Originally, such atlases included reproductions of the monochrome spectra as recorded on photographic plates, as in the original Morgan-Keenan-Kellman atlas and other atlases. These atlases include identifications and notations for use of those spectral features to be used as discriminators between close spectral types. With very large surveys of the sky which include automated assignment of spectral classification from the digital spectra data, graphical atlases have been supplanted by libraries of spectra of standard stars which often can be downloaded from VizieR and other sources.
For key reference
A spectral atlas can be a very high-quality spectrum of a key reference object, often made with very high spectral resolution, generally presented in large-format graphical form as a line chart (but normally strictly without markers at specific data points) of intensity or relative intensity (which for a star whose spectrum is dominated by absorption lines runs from zero to a normalized continuum) as a function of wavelength. Such spectral atlases have been made several times for the Sun (e
Document 4:::
Red supergiants (RSGs) are stars with a supergiant luminosity class (Yerkes class I) of spectral type K or M. They are the largest stars in the universe in terms of volume, although they are not the most massive or luminous. Betelgeuse and Antares A are the brightest and best known red supergiants (RSGs), indeed the only first magnitude red supergiant stars.
Classification
Stars are classified as supergiants on the basis of their spectral luminosity class. This system uses certain diagnostic spectral lines to estimate the surface gravity of a star, hence determining its size relative to its mass. Larger stars are more luminous at a given temperature and can now be grouped into bands of differing luminosity.
The luminosity differences between stars are most apparent at low temperatures, where giant stars are much brighter than main-sequence stars. Supergiants have the lowest surface gravities and hence are the largest and brightest at a particular temperature.
The Yerkes or Morgan-Keenan (MK) classification system is almost universal. It groups stars into five main luminosity groups designated by roman numerals:
I supergiant;
II bright giant;
III giant;
IV subgiant;
V dwarf (main sequence).
Specific to supergiants, the luminosity class is further divided into normal supergiants of class Ib and brightest supergiants of class Ia. The intermediate class Iab is also used. Exceptionally bright, low surface gravity, stars with strong indications of mass loss may be designated by luminosity class 0 (zero) although this is rarely seen. More often the designation Ia-0 will be used, and more commonly still Ia+. These hypergiant spectral classifications are very rarely applied to red supergiants, although the term red hypergiant is sometimes used for the most extended and unstable red supergiants like VY Canis Majoris and NML Cygni.
The "red" part of "red supergiant" refers to the cool temperature. Red supergiants are the coolest supergiants, M-type, and at le
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Stars are often classified by their apparent brightness in the nighttime sky. Stars can also be classified in many other ways. Which of these is least useful in classifying stars?
A. visible color
B. composition
C. surface texture
D. temperature
Answer:
|
|
sciq-1891
|
multiple_choice
|
What is found at the top of the stamen?
|
[
"pollen",
"pistil",
"fungi",
"petals"
] |
A
|
Relevant Documents:
Document 0:::
The stamen (: stamina or stamens) is the pollen-producing reproductive organ of a flower. Collectively the stamens form the androecium.
Morphology and terminology
A stamen typically consists of a stalk called the filament and an anther which contains microsporangia. Most commonly anthers are two-lobed (each lobe is termed a locule) and are attached to the filament either at the base or in the middle area of the anther. The sterile tissue between the lobes is called the connective, an extension of the filament containing conducting strands. It can be seen as an extension on the dorsal side of the anther. A pollen grain develops from a microspore in the microsporangium and contains the male gametophyte. The size of anthers differs greatly, from a tiny fraction of a millimeter in Wolffia spp. up to five inches (13 centimeters) in Canna iridiflora and Strelitzia nicolai.
The stamens in a flower are collectively called the androecium. The androecium can consist of as few as one-half stamen (i.e. a single locule) as in Canna species or as many as 3,482 stamens which have been counted in the saguaro (Carnegiea gigantea). The androecium in various species of plants forms a great variety of patterns, some of them highly complex. It generally surrounds the gynoecium and is surrounded by the perianth. A few members of the family Triuridaceae, particularly Lacandonia schismatica and Lacandonia braziliana, along with a few species of Trithuria (family Hydatellaceae) are exceptional in that their gynoecia surround their androecia.
Etymology
Stamen is the Latin word meaning "thread" (originally thread of the warp, in weaving).
Filament derives from classical Latin filum, meaning "thread"
Anther derives from French anthère, from classical Latin anthera, meaning "medicine extracted from the flower" in turn from Ancient Greek ἀνθηρά (), feminine of ἀνθηρός () meaning "flowery", from ἄνθος () meaning "flower"
Androecium (: androecia) derives from Ancient Greek ἀνήρ () meanin
Document 1:::
In botany, a staminode is an often rudimentary, sterile or abortive stamen, which means that it does not produce pollen. Staminodes are frequently inconspicuous and stamen-like, usually occurring at the inner whorl of the flower, but are also sometimes long enough to protrude from the corolla.
Sometimes, the staminodes are modified to produce nectar, as in the witch hazel (Hamamelis).
Staminodes can be a critical characteristic for differentiating between species, for instance in the orchid genus Paphiopedilum, and among the penstemons.
In the case of Cannas, the petals are inconsequential and the staminodes are refined into eye-catching petal-like replacements.
A spectacular example of staminode is given by Couroupita guianensis, a tropical tree growing in South America also known as cannonball tree.
Document 2:::
Pollen is a powdery substance produced by most types of flowers of seed plants for the purpose of sexual reproduction. It consists of pollen grains (highly reduced microgametophytes), which produce male gametes (sperm cells). Pollen grains have a hard coat made of sporopollenin that protects the gametophytes during the process of their movement from the stamens to the pistil of flowering plants, or from the male cone to the female cone of gymnosperms. If pollen lands on a compatible pistil or female cone, it germinates, producing a pollen tube that transfers the sperm to the ovule containing the female gametophyte. Individual pollen grains are small enough to require magnification to see detail. The study of pollen is called palynology and is highly useful in paleoecology, paleontology, archaeology, and forensics.
Pollen in plants is used for transferring haploid male genetic material from the anther of a single flower to the stigma of another in cross-pollination. In a case of self-pollination, this process takes place from the anther of a flower to the stigma of the same flower.
Pollen is infrequently used as food and food supplement. Because of agricultural practices, it is often contaminated by agricultural pesticides.
Structure and formation
Pollen itself is not the male gamete. It is a gametophyte, something that could be considered an entire organism, which then produces the male gamete. Each pollen grain contains vegetative (non-reproductive) cells (only a single cell in most flowering plants but several in other seed plants) and a generative (reproductive) cell. In flowering plants the vegetative tube cell produces the pollen tube, and the generative cell divides to form the two sperm nuclei.
Pollen comes in many different shapes. Some pollen grains are based on geodesic polyhedra like a soccer ball.
Formation
Pollen is produced in the microsporangia in the male cone of a conifer or other gymnosperm or in the anthers of an angiosperm flower. Pollen g
Document 3:::
Gynoecium (; ; : gynoecia) is most commonly used as a collective term for the parts of a flower that produce ovules and ultimately develop into the fruit and seeds. The gynoecium is the innermost whorl of a flower; it consists of (one or more) pistils and is typically surrounded by the pollen-producing reproductive organs, the stamens, collectively called the androecium. The gynoecium is often referred to as the "female" portion of the flower, although rather than directly producing female gametes (i.e. egg cells), the gynoecium produces megaspores, each of which develops into a female gametophyte which then produces egg cells.
The term gynoecium is also used by botanists to refer to a cluster of archegonia and any associated modified leaves or stems present on a gametophyte shoot in mosses, liverworts, and hornworts. The corresponding terms for the male parts of those plants are clusters of antheridia within the androecium. Flowers that bear a gynoecium but no stamens are called pistillate or carpellate. Flowers lacking a gynoecium are called staminate.
The gynoecium is often referred to as female because it gives rise to female (egg-producing) gametophytes; however, strictly speaking sporophytes do not have a sex, only gametophytes do. Gynoecium development and arrangement is important in systematic research and identification of angiosperms, but can be the most challenging of the floral parts to interpret.
Introduction
Unlike most animals, plants grow new organs after embryogenesis, including new roots, leaves, and flowers. In the flowering plants, the gynoecium develops in the central region of the flower as a carpel or in groups of fused carpels. After fertilization, the gynoecium develops into a fruit that provides protection and nutrition for the developing seeds, and often aids in their dispersal. The gynoecium has several specialized tissues. The tissues of the gynoecium develop from genetic and hormonal interactions along three major axes. These tissue
Document 4:::
Edible plant stems are one part of plants that are eaten by humans. Most plants are made up of stems, roots, leaves, and flowers, and produce fruits containing seeds. Humans most commonly eat the seeds (e.g. maize, wheat), fruit (e.g. tomato, avocado, banana), flowers (e.g. broccoli), leaves (e.g. lettuce, spinach, and cabbage), roots (e.g. carrots, beets), and stems (e.g. asparagus) of many plants. There are also a few edible petioles (also known as leaf stems) such as celery or rhubarb.
Plant stems have a variety of functions. Stems support the entire plant and have buds, leaves, flowers, and fruits. Stems are also a vital connection between leaves and roots. They conduct water and mineral nutrients through xylem tissue from roots upward, and organic compounds and some mineral nutrients through phloem tissue in any direction within the plant. Apical meristems, located at the shoot tip and axillary buds on the stem, allow plants to increase in length, surface, and mass. In some plants, such as cactus, stems are specialized for photosynthesis and water storage.
Modified stems
Typical stems are located above ground, but there are modified stems that can be found either above or below ground. Modified stems located above ground are phylloids, stolons, runners, or spurs. Modified stems located below ground are corms, rhizomes, and tubers.
Detailed description of edible plant stems
Asparagus The edible portion is the rapidly emerging stems that arise from the crowns in the
Bamboo The edible portion is the young shoot (culm).
Birch Trunk sap is drunk as a tonic or rendered into birch syrup, vinegar, beer, soft drinks, and other foods.
Broccoli The edible portion is the peduncle stem tissue, flower buds, and some small leaves.
Cauliflower The edible portion is proliferated peduncle and flower tissue.
Cinnamon Many favor the unique sweet flavor of the inner bark of cinnamon, and it is commonly used as a spice.
Fig The edible portion is stem tissue. The
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is found at the top of the stamen?
A. pollen
B. pistil
C. fungi
D. petals
Answer:
|
|
sciq-8452
|
multiple_choice
|
Fuels according to the law of what, can never actually be “consumed”; it can only be changed from one form to another?
|
[
"difference of energy",
"expansion of energy",
"use of energy",
"conservation of energy"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
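For a quasi-static (reversible) adiabatic expansion, the expected answer "decreases" can be checked numerically with the ideal-gas relation T*V**(gamma - 1) = constant; the sketch below assumes a diatomic gas (gamma = 1.4), an illustrative choice:

# Reversible adiabatic process in an ideal gas: T1*V1**(g-1) = T2*V2**(g-1),
# so expanding the gas (v2 > v1) lowers its temperature.
def adiabatic_final_temperature(t1, v1, v2, gamma=1.4):
    return t1 * (v1 / v2) ** (gamma - 1)

print(adiabatic_final_temperature(300.0, 1.0, 2.0))   # ~227 K, below 300 K

(For a free adiabatic expansion of an ideal gas, where no work is done, the temperature would instead stay the same.)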
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The energy systems language, also referred to as energese, or energy circuit language, or generic systems symbols, is a modelling language used for composing energy flow diagrams in the field of systems ecology. It was developed by Howard T. Odum and colleagues in the 1950s during studies of the tropical forests funded by the United States Atomic Energy Commission.
Design intent
The design intent of the energy systems language was to facilitate the generic depiction of energy flows through any scale system while encompassing the laws of physics, and in particular, the laws of thermodynamics (see energy transformation for an example).
In particular, H.T. Odum aimed to produce a language which could facilitate the intellectual analysis, engineering synthesis and management of global systems such as the geobiosphere and its many subsystems. Within this aim, H.T. Odum had a strong concern that many abstract mathematical models of such systems were not thermodynamically valid. Hence he used analog computers to make system models due to their intrinsic value: the electronic circuits are of value for modelling natural systems, which are assumed to obey the laws of energy flow, because the circuits themselves, like natural systems, also obey the known laws of energy flow, where the energy form is electrical. However, Odum was interested not only in the electronic circuits themselves, but also in how they might be used as formal analogies for modeling other systems which also had energy flowing through them. As a result, Odum did not restrict his inquiry to the analysis and synthesis of any one system in isolation. The discipline that is most often associated with this kind of approach, together with the use of the energy systems language, is known as systems ecology.
General characteristics
When applying the electronic circuits (and schematics) to modeling ecological and economic systems, Odum believed that generic categories, or characteristic modules, could
Document 2:::
Energy transformation, also known as energy conversion, is the process of changing energy from one form to another. In physics, energy is a quantity that provides the capacity to perform work or moving (e.g. lifting an object) or provides heat. In addition to being converted, according to the law of conservation of energy, energy is transferable to a different location or object, but it cannot be created or destroyed.
The energy in many of its forms may be used in natural processes, or to provide some service to society such as heating, refrigeration, lighting or performing mechanical work to operate machines. For example, to heat a home, the furnace burns fuel, whose chemical potential energy is converted into thermal energy, which is then transferred to the home's air to raise its temperature.
Limitations in the conversion of thermal energy
Conversions to thermal energy from other forms of energy may occur with 100% efficiency. Conversion among non-thermal forms of energy may occur with fairly high efficiency, though there is always some energy dissipated thermally due to friction and similar processes. Sometimes the efficiency is close to 100%, such as when potential energy is converted to kinetic energy as an object falls in a vacuum. This also applies to the opposite case; for example, an object in an elliptical orbit around another body converts its kinetic energy (speed) into gravitational potential energy (distance from the other object) as it moves away from its parent body. When it reaches the furthest point, it will reverse the process, accelerating and converting potential energy into kinetic. Since space is a near-vacuum, this process has close to 100% efficiency.
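The near-lossless potential-to-kinetic conversion described above reduces to a one-line calculation; the drop height below is purely illustrative:

import math

# An object falling a height h in vacuum converts m*g*h of potential energy
# into (1/2)*m*v**2 of kinetic energy, so the final speed is mass-independent.
def impact_speed(height_m, g=9.81):
    return math.sqrt(2.0 * g * height_m)

print(impact_speed(10.0))   # ~14.0 m/s after a 10 m drop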
Thermal energy is unique because in most cases it cannot be converted to other forms of energy. Only a difference in the density of thermal/heat energy (temperature) can be used to perform work, and the efficiency of this conversion will be (much) less than 100%. This is because t
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 4:::
Energy statistics refers to collecting, compiling, analyzing and disseminating data on commodities such as coal, crude oil, natural gas, electricity, or renewable energy sources (biomass, geothermal, wind or solar energy), when they are used for the energy they contain. Energy is the capability of some substances, resulting from their physico-chemical properties, to do work or produce heat. Some energy commodities, called fuels, release their energy content as heat when they burn. This heat could be used to run an internal or external combustion engine.
The need to have statistics on energy commodities became obvious during the 1973 oil crisis, which brought a tenfold increase in petroleum prices. Before the crisis, having accurate data on global energy supply and demand was not deemed critical. Another concern of energy statistics today is the huge gap in energy use between developed and developing countries. As the gap narrows, the pressure on energy supply increases tremendously.
The data on energy and electricity come from three principal sources:
Energy industry
Other industries ("self-producers")
Consumers
The flows of and trade in energy commodities are measured both in physical units (e.g., metric tons) and, when energy balances are calculated, in energy units (e.g., terajoules or tons of oil equivalent). What makes energy statistics specific and different from other fields of economic statistics is the fact that energy commodities undergo a greater number of transformations (flows) than other commodities. In these transformations energy is conserved, as defined by and within the limitations of the first and second laws of thermodynamics.
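A minimal sketch of the unit conversions used when compiling such energy balances; the factor of 41.868 GJ per tonne of oil equivalent is the widely used convention, and the coal energy content below is an illustrative assumption:

# Convert a physical quantity of fuel into a common energy unit (toe).
GJ_PER_TOE = 41.868   # 1 tonne of oil equivalent, in gigajoules

def tonnes_coal_to_toe(tonnes, gj_per_tonne=29.3):  # 29.3 GJ/t: illustrative
    return tonnes * gj_per_tonne / GJ_PER_TOE

print(tonnes_coal_to_toe(1000.0))   # ~700 toe from 1000 t of hard coal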
See also
Energy system
World energy resources and consumption
External links
Statistical Energy Database Review: Enerdata Yearbook 2012
International Energy Agency: Statistics
United Nations: Energy Statistics
The Oslo Group on Energy Statistics
DOE Energy Information Administration
Year of Ener
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Fuels according to the law of what, can never actually be “consumed”; it can only be changed from one form to another?
A. difference of energy
B. expansion of energy
C. use of energy
D. conservation of energy
Answer:
|
|
sciq-8457
|
multiple_choice
|
Which metalloids are found in the nitrogen group?
|
[
"arsenic and antimony",
"sulfur and polonium",
"polonium and antimony",
"selenium and antimony"
] |
A
|
Relevant Documents:
Document 0:::
The purpose of this annotated list is to provide a chronological, consolidated list of nonmetal monographs, which could enable the interested reader to further trace classification approaches in this area. Those marked with a ▲ classify the following 14 elements as nonmetals: H, N; O, S; the stable halogens; and the noble gases.
Document 1:::
In chemistry and physics, the iron group refers to elements that are in some way related to iron; mostly in period (row) 4 of the periodic table. The term has different meanings in different contexts.
In chemistry, the term is largely obsolete, but it often means iron, cobalt, and nickel, also called the iron triad; or, sometimes, other elements that resemble iron in some chemical aspects.
In astrophysics and nuclear physics, the term is still quite common, and it typically means those three plus chromium and manganese—five elements that are exceptionally abundant, both on Earth and elsewhere in the universe, compared to their neighbors in the periodic table. Titanium and vanadium are also produced in Type Ia supernovae.
General chemistry
In chemistry, "iron group" used to refer to iron and the next two elements in the periodic table, namely cobalt and nickel. These three comprised the "iron triad". They are the top elements of groups 8, 9, and 10 of the periodic table; or the top row of "group VIII" in the old (pre-1990) IUPAC system, or of "group VIIIB" in the CAS system. These three metals (and the three of the platinum group, immediately below them) were set aside from the other elements because they have obvious similarities in their chemistry, but are not obviously related to any of the other groups. The iron group and its alloys exhibit ferromagnetism.
The similarities in chemistry were noted as one of Döbereiner's triads and by Adolph Strecker in 1859. Indeed, Newlands' "octaves" (1865) were harshly criticized for separating iron from cobalt and nickel. Mendeleev stressed that groups of "chemically analogous elements" could have similar atomic weights as well as atomic weights which increase by equal increments, both in his original 1869 paper and his 1889 Faraday Lecture.
Analytical chemistry
In the traditional methods of qualitative inorganic analysis, the iron group consists of those cations which
have soluble chlorides; and
are not precipitated
Document 2:::
The periodic table is an arrangement of the chemical elements, structured by their atomic number, electron configuration and recurring chemical properties. In the basic form, elements are presented in order of increasing atomic number, in the reading sequence. Then, rows and columns are created by starting new rows and inserting blank cells, so that rows (periods) and columns (groups) show elements with recurring properties (called periodicity). For example, all elements in group (column) 18 are noble gases that are largely—though not completely—unreactive.
The history of the periodic table reflects over two centuries of growth in the understanding of the chemical and physical properties of the elements, with major contributions made by Antoine-Laurent de Lavoisier, Johann Wolfgang Döbereiner, John Newlands, Julius Lothar Meyer, Dmitri Mendeleev, Glenn T. Seaborg, and others.
Early history
Nine chemical elements – carbon, sulfur, iron, copper, silver, tin, gold, mercury, and lead – have been known since before antiquity, as they are found in their native form and are relatively simple to mine with primitive tools. Around 330 BCE, the Greek philosopher Aristotle proposed that everything is made up of a mixture of one or more roots, an idea originally suggested by the Sicilian philosopher Empedocles. The four roots, which the Athenian philosopher Plato called elements, were earth, water, air and fire. Similar ideas about these four elements existed in other ancient traditions, such as Indian philosophy.
A few extra elements were known in the age of alchemy: zinc, arsenic, antimony, and bismuth. Platinum was also known to pre-Columbian South Americans, but knowledge of it did not reach Europe until the 16th century.
First categorizations
The history of the periodic table is also a history of the discovery of the chemical elements. The first person in recorded history to discover a new element was Hennig Brand, a bankrupt German merchant. Brand tried to discover
Document 3:::
In materials science, MXenes are a class of two-dimensional inorganic compounds that consist of atomically thin layers of transition metal carbides, nitrides, or carbonitrides. MXenes accept a variety of hydrophilic terminations. MXenes were first reported in 2012.
Structure
As-synthesized MXenes prepared via HF etching have an accordion-like morphology, which can be referred to as multi-layer MXene (ML-MXene), or few-layer MXene (FL-MXene) given fewer than five layers. Because the surfaces of MXenes can be terminated by functional groups, the naming convention Mn+1XnTx can be used, where T is a functional group (e.g. O, F, OH, Cl).
Mono transition
MXenes adopt three structures with one metal on the M site, as inherited from the parent MAX phases: M2C, M3C2, and M4C3. They are produced by selectively etching out the A element from a MAX phase or other layered precursor (e.g., Mo2Ga2C), which has the general formula Mn+1AXn, where M is an early transition metal, A is an element from group 13 or 14 of the periodic table, X is C and/or N, and n = 1–4. MAX phases have a layered hexagonal structure with P63/mmc symmetry, where M layers are nearly closed packed and X atoms fill octahedral sites. Therefore, Mn+1Xn layers are interleaved with the A element, which is metallically bonded to the M element.
Double transition
Double transition metal MXenes can take two forms, ordered double transition metal MXenes or solid solution MXenes. For ordered double transition metal MXenes, they have the general formulas: M'2M"C2 or M'2M"2C3 where M' and M" are different transition metals. Double transition metal carbides that have been synthesized include Mo2TiC2, Mo2Ti2C3, Cr2TiC2, and Mo4VC4. In some of these MXenes (such as Mo2TiC2, Mo2Ti2C3, and Cr2TiC2), the Mo or Cr atoms are on outer edges of the MXene and these atoms control electrochemical properties of the MXenes.
Document 4:::
A trace element is a chemical element of a minute quantity, a trace amount, especially used in referring to a micronutrient, but is also used to refer to minor elements in the composition of a rock, or other chemical substance.
In nutrition, trace elements are classified into two groups: essential trace elements and non-essential trace elements. Essential trace elements are needed for many physiological and biochemical processes in both plants and animals. Not only do trace elements play a role in biological processes, but they also serve as catalysts in redox (oxidation and reduction) mechanisms. Trace elements of some heavy metals have a biological role as essential micronutrients.
Types
The two types of trace element in biochemistry are classed as essential or non-essential.
Essential trace elements
An essential trace element is a dietary element, a mineral that is only needed in minute quantities for the proper growth, development, and physiology of the organism. The essential trace elements are those that are required to perform vital metabolic activities in organisms. Essential trace elements in the nutrition of humans and other animals include iron (Fe) (hemoglobin), copper (Cu) (respiratory pigments), cobalt (Co) (vitamin B12), iodine, manganese (Mn), and zinc (Zn) (enzymes). Although they are essential, they become toxic at high concentrations.
Non-essential trace elements
Non-essential trace elements include silver (Ag), arsenic (As), cadmium (Cd), chromium (Cr), mercury (Hg), lead (Pb), and tin (Sn), and have no known biological function, with toxic effects even at low concentration.
The structural components of cells and tissues that are required in the diet in gram quantities daily are known as bulk elements.
See also
Antinutrient
Bowen's Kale
Geotraces
List of micronutrients
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which metalloids are found in the nitrogen group?
A. arsenic and antimony
B. sulfur and polonium
C. polonium and antimony
D. selenium and antimony
Answer:
|
|
sciq-11135
|
multiple_choice
|
What is the hypothesis that states that the biosphere is its own living organism?
|
[
"Pascal's hypothesis",
"Big Bang theory",
"gaia hypothesis",
"Geiger theory"
] |
C
|
Relavent Documents:
Document 0:::
The Gaia hypothesis, also known as the Gaia theory, Gaia paradigm, or the Gaia principle, proposes that living organisms interact with their inorganic surroundings on Earth to form a synergistic and self-regulating complex system that helps to maintain and perpetuate the conditions for life on the planet.
The Gaia hypothesis was formulated by the chemist James Lovelock and co-developed by the microbiologist Lynn Margulis in the 1970s. Following the suggestion by his neighbour, novelist William Golding, Lovelock named the hypothesis after Gaia, the primordial deity who personified the Earth in Greek mythology. In 2006, the Geological Society of London awarded Lovelock the Wollaston Medal in part for his work on the Gaia hypothesis.
Topics related to the hypothesis include how the biosphere and the evolution of organisms affect the stability of global temperature, salinity of seawater, atmospheric oxygen levels, the maintenance of a hydrosphere of liquid water and other environmental variables that affect the habitability of Earth.
The Gaia hypothesis was initially criticized for being teleological and against the principles of natural selection, but later refinements aligned the Gaia hypothesis with ideas from fields such as Earth system science, biogeochemistry and systems ecology. Even so, the Gaia hypothesis continues to attract criticism, and today many scientists consider it to be only weakly supported by, or at odds with, the available evidence.
Overview
Gaian hypotheses suggest that organisms co-evolve with their environment: that is, they "influence their abiotic environment, and that environment in turn influences the biota by Darwinian process". Lovelock (1995) gave evidence of this in his second book, Ages of Gaia, showing the evolution from the world of the early thermo-acido-philic and methanogenic bacteria towards the oxygen-enriched atmosphere today that supports more complex life.
A reduced version of the hypothesis has been called "influenti
Document 1:::
The history of life on Earth seems to show a clear trend; for example, it seems intuitive that there is a trend towards increasing complexity in living organisms. More recently evolved organisms, such as mammals, appear to be much more complex than organisms, such as bacteria, which have existed for a much longer period of time. However, there are theoretical and empirical problems with this claim. From a theoretical perspective, it appears that there is no reason to expect evolution to result in any largest-scale trends, although small-scale trends, limited in time and space, are expected (Gould, 1997). From an empirical perspective, it is difficult to measure complexity and, when it has been measured, the evidence does not support a largest-scale trend (McShea, 1996).
History
Many of the founding figures of evolutionary theory supported the idea of evolutionary progress, which has since fallen from favour, but the work of Francisco J. Ayala and Michael Ruse suggests it is still influential.
Hypothetical largest-scale trends
McShea (1998) discusses eight features of organisms that might indicate largest-scale trends in evolution: entropy, energy intensiveness, evolutionary versatility, developmental depth, structural depth, adaptedness, size, complexity. He calls these "live hypotheses", meaning that trends in these features are currently being considered by evolutionary biologists. McShea observes that the most popular hypothesis, among scientists, is that there is a largest-scale trend towards increasing complexity.
Evolutionary theorists agree that there are local trends in evolution, such as increasing brain size in hominids, but these directional changes do not persist indefinitely, and trends in opposite directions also occur (Gould, 1997). Evolution causes organisms to adapt to their local environment; when the environment changes, the direction of the trend may change. The question of whether there is evolutionary progress is better formulated as the question of whether
Document 2:::
A biosignature (sometimes called chemical fossil or molecular fossil) is any substance – such as an element, isotope, molecule, or phenomenon – that provides scientific evidence of past or present life. Measurable attributes of life include its complex physical or chemical structures and its use of free energy and the production of biomass and wastes. A biosignature can provide evidence for living organisms outside the Earth and can be directly or indirectly detected by searching for their unique byproducts.
Types
In general, biosignatures can be grouped into ten broad categories:
Isotope patterns: Isotopic evidence or patterns that require biological processes.
Chemistry: Chemical features that require biological activity.
Organic matter: Organics formed by biological processes.
Minerals: Minerals or biomineral-phases whose composition and/or morphology indicate biological activity (e.g., biomagnetite).
Microscopic structures and textures: Biologically formed cements, microtextures, microfossils, and films.
Macroscopic physical structures and textures: Structures that indicate microbial ecosystems, biofilms (e.g., stromatolites), or fossils of larger organisms.
Temporal variability: Variations in time of atmospheric gases, reflectivity, or macroscopic appearance that indicates life's presence.
Surface reflectance features: Large-scale reflectance features due to biological pigments could be detected remotely.
Atmospheric gases: Gases formed by metabolic and/or aqueous processes, which may be present on a planet-wide scale.
Technosignatures: Signatures that indicate a technologically advanced civilization.
Viability
Determining whether a potential biosignature is worth investigating is a fundamentally complicated process. Scientists must consider any and every possible alternate explanation before concluding that something is a true biosignature. This includes investigating the minute details that make other planets unique and understanding when there is a deviat
Document 3:::
Biology is the scientific study of life. It is a natural science with a broad scope but has several unifying themes that tie it together as a single, coherent field. For instance, all organisms are made up of cells that process hereditary information encoded in genes, which can be transmitted to future generations. Another major theme is evolution, which explains the unity and diversity of life. Energy processing is also important to life as it allows organisms to move, grow, and reproduce. Finally, all organisms are able to regulate their own internal environments.
Biologists are able to study life at multiple levels of organization, from the molecular biology of a cell to the anatomy and physiology of plants and animals, and evolution of populations. Hence, there are multiple subdisciplines within biology, each defined by the nature of their research questions and the tools that they use. Like other scientists, biologists use the scientific method to make observations, pose questions, generate hypotheses, perform experiments, and form conclusions about the world around them.
Life on Earth, which emerged more than 3.7 billion years ago, is immensely diverse. Biologists have sought to study and classify the various forms of life, from prokaryotic organisms such as archaea and bacteria to eukaryotic organisms such as protists, fungi, plants, and animals. These various organisms contribute to the biodiversity of an ecosystem, where they play specialized roles in the cycling of nutrients and energy through their biophysical environment.
History
The earliest roots of science, which included medicine, can be traced to ancient Egypt and Mesopotamia in around 3000 to 1200 BCE. Their contributions shaped ancient Greek natural philosophy. Ancient Greek philosophers such as Aristotle (384–322 BCE) contributed extensively to the development of biological knowledge. He explored biological causation and the diversity of life. His successor, Theophrastus, began the scienti
Document 4:::
The biosphere (from Greek βίος bíos "life" and σφαῖρα sphaira "sphere"), also known as the ecosphere (from Greek οἶκος oîkos "environment" and σφαῖρα), is the worldwide sum of all ecosystems. It can also be termed the zone of life on Earth. The biosphere (which is technically a spherical shell) is virtually a closed system with regard to matter, with minimal inputs and outputs. Regarding energy, it is an open system, with photosynthesis capturing solar energy at a rate of around 100 terawatts. By the most general biophysiological definition, the biosphere is the global ecological system integrating all living beings and their relationships, including their interaction with the elements of the lithosphere, cryosphere, hydrosphere, and atmosphere. The biosphere is postulated to have evolved, beginning with a process of biopoiesis (life created naturally from matter, such as simple organic compounds) or biogenesis (life created from living matter), at least some 3.5 billion years ago.
In a general sense, biospheres are any closed, self-regulating systems containing ecosystems. This includes artificial biospheres such as Biosphere 2 and BIOS-3, and potentially ones on other planets or moons.
Origin and use of the term
The term "biosphere" was coined in 1875 by geologist Eduard Suess, who defined it as the place on Earth's surface where life dwells.
While the concept has a geological origin, it is an indication of the effect of both Charles Darwin and Matthew F. Maury on the Earth sciences. The biosphere's ecological context comes from the 1920s (see Vladimir I. Vernadsky), preceding the 1935 introduction of the term "ecosystem" by Sir Arthur Tansley (see ecology history). Vernadsky defined ecology as the science of the biosphere. It is an interdisciplinary concept for integrating astronomy, geophysics, meteorology, biogeography, evolution, geology, geochemistry, hydrology and, generally speaking, all life and Earth sciences.
Narrow definition
Geochemists define the biosphere as
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the hypothesis that states that the biosphere is its own living organism?
A. Pascal's hypothesis
B. Big Bang theory
C. gaia hypothesis
D. Geiger theory
Answer:
|
|
sciq-9157
|
multiple_choice
|
Screws move objects to a higher elevation by increasing what?
|
[
"velocity",
"kinetic energy",
"force applied",
"torque"
] |
C
|
Relavent Documents:
Document 0:::
A jackscrew, or screw jack, is a type of jack that is operated by turning a leadscrew. It is commonly used to lift moderately heavy weights, such as vehicles; to raise and lower the horizontal stabilizers of aircraft; and as adjustable supports for heavy loads, such as the foundations of houses.
Description
A screw jack consists of a heavy-duty vertical screw with a load table mounted on its top, which screws into a threaded hole in a stationary support frame with a wide base resting on the ground. A rotating collar on the head of the screw has holes into which the handle, a metal bar, fits. When the handle is turned clockwise, the screw moves further out of the base, lifting the load resting on the load table. In order to support large load forces, the screw is usually formed with Acme threads.
Advantages
An advantage of jackscrews over some other types of jack is that they are self-locking, which means when the rotational force on the screw is removed, it will remain motionless where it was left and will not rotate backwards, regardless of how much load it is supporting. This makes them inherently safer than hydraulic jacks, for example, which will move backwards under load if the force on the hydraulic actuator is accidentally released.
Mechanical advantage
The ideal mechanical advantage of a screw jack, the ratio of the force the jack exerts on the load to the input force on the lever, ignoring friction, is

MA = F_load / F_in = 2πr / l

where
F_load is the force the jack exerts on the load,
F_in is the rotational force exerted on the handle of the jack,
r is the length of the jack handle, from the screw axis to where the force is applied, and
l is the lead of the screw.
The screw jack consists of two simple machines in series; the long operating handle serves as a lever whose output force turns the screw. So the mechanical advantage is increased by a longer handle as well as a finer screw thread. However, most screw jacks have large amounts of friction which increase the input force necessary, so th
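As a quick worked example of the ideal-advantage formula above, here is a short Python sketch; the handle length and screw lead are invented numbers, not values from the excerpt:

```python
import math

def screw_jack_mechanical_advantage(handle_length_m: float, screw_lead_m: float) -> float:
    """Ideal mechanical advantage of a screw jack, ignoring friction:
    MA = 2 * pi * r / l, with handle length r and screw lead l."""
    return 2 * math.pi * handle_length_m / screw_lead_m

# Hypothetical jack: 0.4 m handle, 5 mm (0.005 m) screw lead.
print(round(screw_jack_mechanical_advantage(0.4, 0.005)))  # ~503
```

As the excerpt notes, friction in a real jack substantially increases the required input force, so the practical advantage is well below this ideal figure.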
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
A screwdriver is a tool, manual or powered, used for turning screws.
Description
A typical simple screwdriver has a handle and a shaft, ending in a tip the user puts into the screw head before turning the handle. This form of the screwdriver has been replaced in many workplaces and homes with a more modern and versatile tool, a power drill, as they are quicker, easier, and can also drill holes. The shaft is usually made of tough steel to resist bending or twisting. The tip may be hardened to resist wear, treated with a dark tip coating for improved visual contrast between tip and screw—or ridged or treated for additional "grip".
Handles are typically wood, metal, or plastic and usually hexagonal, square, or oval in cross-section to improve grip and prevent the tool from rolling when set down. Some manual screwdrivers have interchangeable tips that fit into a socket on the end of the shaft and are held in mechanically or magnetically. These often have a hollow handle that contains various types and sizes of tips, and a reversible ratchet action that allows multiple full turns without repositioning the tip or the user's hand.
A screwdriver is classified by its tip, which is shaped to fit the driving surfaces (slots, grooves, recesses, etc.) on the corresponding screw head. Proper use requires that the screwdriver's tip engage the head of a screw of the same size and type designation as the screwdriver tip. Screwdriver tips are available in a wide variety of types and sizes (List of screw drives). The two most common are the simple 'blade'-type for slotted screws, and Phillips, generically called "cross-recess", "cross-head", or "cross-point".
A wide variety of power screwdrivers ranges from a simple "stick"-type with batteries, a motor, and a tip holder all inline, to powerful "pistol" type VSR (variable-speed reversible) cordless drills that also function as screwdrivers. This is particularly useful as drilling a pilot hole before driving a screw is a common o
Document 3:::
Mechanical engineering is a discipline centered around the concept of using force multipliers, moving components, and machines. It utilizes knowledge of mathematics, physics, materials sciences, and engineering technologies. It is one of the oldest and broadest of the engineering disciplines.
Dawn of civilization to early middle ages
Engineering arose in early civilization as a general discipline for the creation of large scale structures such as irrigation, architecture, and military projects. Advances in food production through irrigation allowed a portion of the population to become specialists in Ancient Babylon.
All six of the classic simple machines were known in the ancient Near East. The wedge and the inclined plane (ramp) were known since prehistoric times. The wheel, along with the wheel and axle mechanism, was invented in Mesopotamia (modern Iraq) during the 5th millennium BC. The lever mechanism first appeared around 5,000 years ago in the Near East, where it was used in a simple balance scale, and to move large objects in ancient Egyptian technology. The lever was also used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia circa 3000 BC, and then in ancient Egyptian technology circa 2000 BC. The earliest evidence of pulleys dates back to Mesopotamia in the early 2nd millennium BC, and ancient Egypt during the Twelfth Dynasty (1991–1802 BC). The screw, the last of the simple machines to be invented, first appeared in Mesopotamia during the Neo-Assyrian period (911–609 BC). The Egyptian pyramids were built using three of the six simple machines, the inclined plane, the wedge, and the lever, to create structures like the Great Pyramid of Giza.
The Assyrians were notable for their use of metallurgy and incorporation of iron weapons. Many of their advancements were in military equipment. They were not the first to develop them, but did make advancements on the wheel and the chariot. They made use of pivot-able axl
Document 4:::
Machine element or hardware refers to an elementary component of a machine. These elements consist of three basic types:
structural components such as frame members, bearings, axles, splines, fasteners, seals, and lubricants,
mechanisms that control movement in various ways such as gear trains, belt or chain drives, linkages, cam and follower systems, including brakes and clutches, and
control components such as buttons, switches, indicators, sensors, actuators and computer controllers.
While generally not considered to be a machine element, the shape, texture and color of covers are an important part of a machine that provide a styling and operational interface between the mechanical components of a machine and its users.
Machine elements are basic mechanical parts and features used as the building blocks of most machines. Most are standardized to common sizes, but custom parts are also common for specialized applications.
Machine elements may be features of a part (such as screw threads or integral plain bearings) or they may be discrete parts in and of themselves such as wheels, axles, pulleys, rolling-element bearings, or gears. All of the simple machines may be described as machine elements, and many machine elements incorporate concepts of one or more simple machines. For example, a leadscrew incorporates a screw thread, which is an inclined plane wrapped around a cylinder.
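To make the "inclined plane wrapped around a cylinder" idea concrete: unwrapping one turn of thread gives a ramp whose rise is the lead and whose run is the circumference. A minimal Python sketch (the lead and mean radius values are invented):

```python
import math

def thread_lead_angle_deg(lead_m: float, mean_radius_m: float) -> float:
    """Lead (helix) angle of a screw thread: unwrapping one turn yields an
    inclined plane with slope tan(angle) = lead / (2 * pi * mean_radius)."""
    return math.degrees(math.atan(lead_m / (2 * math.pi * mean_radius_m)))

# Hypothetical thread: 2 mm lead on a 5 mm mean radius.
print(f"{thread_lead_angle_deg(0.002, 0.005):.1f} degrees")  # ~3.6
```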
Many mechanical design, invention, and engineering tasks involve a knowledge of various machine elements and an intelligent and creative combining of these elements into a component or assembly that fills a need (serves an application).
Structural elements
Beams
Struts
Bearings
Fasteners
Keys
Splines
Cotter pin
Seals
Machine guarding
Mechanical elements
Engine
Electric motor
Actuator
Shafts
Couplings
Belt
Chain
Cable drives
Gear train
Clutch
Brake
Flywheel
Cam and follower systems
Linkage
Simple machine
Types
Shafts
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Screws move objects to a higher elevation by increasing what?
A. velocity
B. kinetic energy
C. force applied
D. torque
Answer:
|
|
sciq-2792
|
multiple_choice
|
Atoms cannot be subdivided, created, or what?
|
[
"contaminated",
"observed",
"destroyed",
"contained"
] |
C
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
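As a concrete illustration of this model, here is a minimal Python sketch; the domain Q and the family of feasible states are invented toy data, not anything from the excerpt. It checks the closure-under-union property that characterizes knowledge spaces:

```python
from itertools import combinations

# Hypothetical domain Q of skills and a hand-picked family of feasible states.
Q = frozenset({"counting", "addition", "subtraction", "multiplication"})
states = {
    frozenset(),
    frozenset({"counting"}),
    frozenset({"counting", "addition"}),
    frozenset({"counting", "addition", "subtraction"}),
    frozenset({"counting", "addition", "multiplication"}),
    Q,
}

def is_knowledge_space(states, Q):
    """A knowledge space contains the empty state and Q and is closed under
    union: if two states are feasible, so is their combination."""
    if frozenset() not in states or Q not in states:
        return False
    return all(a | b in states for a, b in combinations(states, 2))

print(is_knowledge_space(states, Q))  # True for this toy family
```

In this toy family, "addition" never appears without "counting", which is how a prerequisite shows up in the structure.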
Document 2:::
Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. Atomic physics typically refers to the study of atomic structure and the interaction between atoms. It is primarily concerned with the way in which electrons are arranged around the nucleus and
the processes by which these arrangements change. This comprises ions, neutral atoms and, unless otherwise stated, it can be assumed that the term atom includes ions.
The term atomic physics can be associated with nuclear power and nuclear weapons, due to the synonymous use of atomic and nuclear in standard English. Physicists distinguish between atomic physics—which deals with the atom as a system consisting of a nucleus and electrons—and nuclear physics, which studies nuclear reactions and special properties of atomic nuclei.
As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified.
Isolated atoms
Atomic physics primarily considers atoms in isolation. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules (although much of the physics is identical), nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles.
While modelling atoms in isolation may not seem realistic, if one considers atoms in a gas or plasma then the time-scales for atom-atom interactions are huge in comparison to the atomic processes that are generally considered. This means that the individual atoms can be treated as if each were in isolation, as the vast majority of the time they are. By this consideration, atomic physics provides the underlying theory in plasma physics and atmospheric physics, even though
Document 3:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. The intent is for primary school children to develop an interest in these subjects, for secondary school pupils to then choose science A levels, and for those choices to lead on to science careers. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 4:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student was planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular was 630 while Ecological was 591.
On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
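The scoring rule above is simple to express in code; the response counts below are an invented example:

```python
def sat_subject_raw_score(correct: int, incorrect: int, blank: int) -> float:
    """Raw score: +1 per correct answer, -1/4 per incorrect answer, 0 per blank."""
    return correct - 0.25 * incorrect + 0 * blank

# Hypothetical response pattern on the 80-question test: 55 correct, 16 incorrect, 9 blank.
print(sat_subject_raw_score(55, 16, 9))  # 51.0
```

The raw score was then converted to the reported 200–800 scale; the conversion table is not given in the excerpt.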
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Atoms cannot be subdivided, created, or what?
A. contaminated
B. observed
C. destroyed
D. contained
Answer:
|
|
scienceQA-9135
|
multiple_choice
|
How long is a caterpillar?
|
[
"24 millimeters",
"24 kilometers",
"24 meters",
"24 centimeters"
] |
A
|
The best estimate for the length of a caterpillar is 24 millimeters.
24 centimeters, 24 meters, and 24 kilometers are all too long.
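The explanation is plain unit conversion; a tiny Python sketch comparing the four candidates in meters:

```python
# Convert each candidate length to meters and compare magnitudes.
candidates_m = {
    "24 millimeters": 24e-3,
    "24 centimeters": 24e-2,
    "24 meters": 24.0,
    "24 kilometers": 24e3,
}
for label, meters in sorted(candidates_m.items(), key=lambda kv: kv[1]):
    print(f"{label:>15} = {meters} m")
# Only 0.024 m is a plausible caterpillar length; the rest are far too long.
```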
|
Relavent Documents:
Document 0:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 1:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as reflected in its exam grade distributions.
Topic outline
The exam covers eight units; the percentage of the multiple-choice section of the exam devoted to each content area varies by unit.
The course is based on and tests six skills, called scientific practices.
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover)
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis)
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson)
See also
Glossary of biology
A.P Bio (TV Show)
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 3:::
Computerized adaptive testing (CAT) is a form of computer-based test that adapts to the examinee's ability level. For this reason, it has also been called tailored testing. In other words, it is a form of computer-administered test in which the next item or set of items selected to be administered depends on the correctness of the test taker's responses to the most recent items administered.
How it works
CAT successively selects questions for the purpose of maximizing the precision of the exam based on what is known about the examinee from previous questions. From the examinee's perspective, the difficulty of the exam seems to tailor itself to their level of ability. For example, if an examinee performs well on an item of intermediate difficulty, they will then be presented with a more difficult question. Or, if they performed poorly, they would be presented with a simpler question. Compared to static tests that nearly everyone has experienced, with a fixed set of items administered to all examinees, computer-adaptive tests require fewer test items to arrive at equally accurate scores.
The basic computer-adaptive testing method is an iterative algorithm with the following steps:
The pool of available items is searched for the optimal item, based on the current estimate of the examinee's ability
The chosen item is presented to the examinee, who then answers it correctly or incorrectly
The ability estimate is updated, based on all prior answers
Steps 1–3 are repeated until a termination criterion is met
Nothing is known about the examinee prior to the administration of the first item, so the algorithm is generally started by selecting an item of medium, or medium-easy, difficulty as the first item.
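Here is a minimal sketch of that loop under a Rasch-style (1PL) response model. The item bank, the simulated examinee, the crude ability-update step, and the fixed-length termination rule are all simplifying assumptions for illustration, not a production CAT implementation:

```python
import math

def p_correct(theta: float, difficulty: float) -> float:
    """Rasch (1PL) probability of answering an item correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def run_cat(item_bank, answer_fn, max_items=5):
    """Iterative CAT loop: select the most informative item, observe the
    response, update the ability estimate, repeat until the stop rule."""
    theta = 0.0  # no prior information: start at medium ability
    remaining = list(item_bank)
    for _ in range(max_items):  # termination criterion: fixed test length
        # Under 1PL, item information peaks when difficulty matches ability,
        # so pick the remaining item closest to the current estimate.
        item = min(remaining, key=lambda d: abs(d - theta))
        remaining.remove(item)
        correct = answer_fn(item)  # administer the item
        # Crude stochastic update toward the observed response; a real CAT
        # would re-estimate theta by maximum likelihood over all responses.
        theta += 0.5 * ((1.0 if correct else 0.0) - p_correct(theta, item))
    return theta

# Hypothetical seven-item bank (difficulties) and a deterministic simulated
# examinee who answers correctly whenever item difficulty is below 0.6.
bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]
print(f"Estimated ability: {run_cat(bank, lambda d: d <= 0.6):.2f}")
```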
As a result of adaptive administration, different examinees receive quite different tests. Although examinees are typically administered different tests, their ability scores are comparable to one another (i.e., as if they had received the same test, as is common
Document 4:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student was planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular was 630 while Ecological was 591.
On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How long is a caterpillar?
A. 24 millimeters
B. 24 kilometers
C. 24 meters
D. 24 centimeters
Answer:
|
sciq-9292
|
multiple_choice
|
Muscarinic receptors can cause both depolarization or hyperpolarization depending on the what?
|
[
"subtype",
"gravity",
"phenotype",
"strain"
] |
A
|
Relavent Documents:
Document 0:::
The adequate stimulus is a property of a sensory receptor that determines the type of energy to which a sensory receptor responds with the initiation of sensory transduction. Sensory receptors are specialized to respond to certain types of stimuli. The adequate stimulus is the amount and type of energy required to stimulate a specific sensory organ.
Many sensory stimuli are categorized by the mechanics by which they function and by their purpose. Sensory receptors within the body are typically specialized to respond to a single type of stimulus. They are present throughout the body, and a certain amount of a stimulus is required to trigger them. These receptors allow the brain to interpret signals from the body, so that a person can respond to a stimulus once it reaches the minimum threshold needed to signal the brain. The sensory receptors activate the sensory transduction system, which in turn sends an electrical or chemical stimulus to a cell; the cell then responds with electrical signals to the brain produced from action potentials. The minuscule signals that result from the stimuli and enter the cells must be amplified and turned into a sufficient signal to be sent to the brain.
A sensory receptor's adequate stimulus is determined by the signal transduction mechanisms and ion channels incorporated in the sensory receptor's plasma membrane. The adequate stimulus is often used in relation to sensory thresholds and absolute thresholds to describe the smallest amount of a stimulus needed to activate a feeling within the sensory organ.
Categorizations of receptors
Receptors are categorized by the stimuli to which they respond, and often also by their purpose and location within the body. The following are the categorizations of receptors within the body:
Visual – These are found in the visual organs of species and are respon
Document 1:::
Cholinergic agents are compounds which mimic the action of acetylcholine and/or butyrylcholine. In general, the word "choline" describes the various quaternary ammonium salts containing the N,N,N-trimethylethanolammonium cation. Found in most animal tissues, choline is a primary component of the neurotransmitter acetylcholine and functions with inositol as a basic constituent of lecithin. Choline also prevents fat deposits in the liver and facilitates the movement of fats into cells.
The parasympathetic nervous system, which uses acetylcholine almost exclusively to send its messages, is said to be almost entirely cholinergic. Neuromuscular junctions, preganglionic neurons of the sympathetic nervous system, the basal forebrain, and brain stem complexes are also cholinergic, as are the receptor for the merocrine sweat glands.
In neuroscience and related fields, the term cholinergic is used in these related contexts:
A substance (or ligand) is cholinergic if it is capable of producing, altering, or releasing acetylcholine, or butyrylcholine ("indirect-acting"), or mimicking their behaviours at one or more of the body's acetylcholine receptor ("direct-acting") or butyrylcholine receptor types ("direct-acting"). Such mimics are called parasympathomimetic drugs or cholinomimetic drugs.
A receptor is cholinergic if it uses acetylcholine as its neurotransmitter.
A synapse is cholinergic if it uses acetylcholine as its neurotransmitter.
Cholinergic drug
Structure activity relationship for cholinergic drugs
A molecule must possess a nitrogen atom capable of bearing a positive charge, preferably a quaternary ammonium salt.
For maximum potency, the size of the alkyl groups substituted on the nitrogen should not exceed the size of a methyl group.
The molecule should have an oxygen atom, preferably an ester-like oxygen capable of participating in a hydrogen bond.
A two-carbon unit should occur between the oxygen atom and the nitrogen atom.
There must be two methyl
Document 2:::
Neuromodulation is the physiological process by which a given neuron uses one or more chemicals to regulate diverse populations of neurons. Neuromodulators typically bind to metabotropic, G-protein coupled receptors (GPCRs) to initiate a second messenger signaling cascade that induces a broad, long-lasting signal. This modulation can last for hundreds of milliseconds to several minutes. Some of the effects of neuromodulators include: altering intrinsic firing activity, increasing or decreasing voltage-dependent currents, altering synaptic efficacy, increasing bursting activity and reconfigurating synaptic connectivity.
Major neuromodulators in the central nervous system include: dopamine, serotonin, acetylcholine, histamine, norepinephrine, nitric oxide, and several neuropeptides. Cannabinoids can also be powerful CNS neuromodulators. Neuromodulators can be packaged into vesicles and released by neurons, secreted as hormones and delivered through the circulatory system. A neuromodulator can be conceptualized as a neurotransmitter that is not reabsorbed by the pre-synaptic neuron or broken down into a metabolite. Some neuromodulators end up spending a significant amount of time in the cerebrospinal fluid (CSF), influencing (or "modulating") the activity of several other neurons in the brain.
Neuromodulatory systems
The major neurotransmitter systems are the noradrenaline (norepinephrine) system, the dopamine system, the serotonin system, and the cholinergic system. Drugs targeting the neurotransmitter of such systems affect the whole system, which explains the mode of action of many drugs.
Most other neurotransmitters, on the other hand, e.g. glutamate, GABA and glycine, are used very generally throughout the central nervous system.
Noradrenaline system
The noradrenaline system consists of around 15,000 neurons, primarily in the locus coeruleus. This is diminutive compared to the more than 100 billion neurons in the brain. As with dopaminergic neurons in the
Document 3:::
An ionotropic effect is the effect of a transmitter substance or hormone that activates or deactivates ionotropic receptors (ligand-gated ion channels). The effect can be either positive or negative, specifically a depolarization or a hyperpolarization respectively. This term is commonly confused with an inotropic effect, which refers to a change in the force of contraction (e.g. in heart muscle) produced by transmitter substances or hormones.
Examples
This term could be used to describe the action of acetylcholine on nicotinic receptors, glutamate on NMDA receptors or GABA on GABAa receptors.
Document 4:::
A receptor modulator, or receptor ligand, is a general term for a substance, endogenous or exogenous, that binds to and regulates the activity of chemical receptors. They are ligands that can act on different parts of receptors and regulate activity in a positive, negative, or neutral direction with varying degrees of efficacy. Categories of these modulators include receptor agonists and receptor antagonists, as well as receptor partial agonists, inverse agonists, orthosteric modulators, and allosteric modulators, Examples of receptor modulators in modern medicine include CFTR modulators, selective androgen receptor modulators (SARMs), and muscarinic ACh receptor modulators.
Categorization and function
Currently, receptor modulators are categorized in the Agonist, Partial Agonist, Selective Tissue Modulators, Antagonist, and Inverse Agonist categories in terms of the effect they cause. They are further divided into Orthosteric or Allosteric Modulators according to how they effect said result. Typically, a chemical acts in an agonist fashion whenever it instigates or else facilitates a particular reaction by binding to a particular receptor. In contrast, a chemical acts as an antagonist whenever binding to a particular receptor blocks or inhibits a particular response. Between these endpoints exists a gradient defined by a number of variables. One example is Selective Tissue Modulators, which means a given ligand can behave differently according to the tissue type it is in. As for orthosteric and allosteric modulation, this describes the manner in which the ligand binds to the receptor in question: if it binds directly to the prescribed binding site of a receptor, the ligand is orthosteric in this instance; if the ligand alters the receptor by interacting with it at any place other than a binding site, allosteric interaction occurred. Note that a drug's categorization does not dictate how another drug of the same family could be categorized or whether the same drug
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Muscarinic receptors can cause both depolarization or hyperpolarization depending on the what?
A. subtype
B. gravity
C. phenotype
D. strain
Answer:
|
|
sciq-6567
|
multiple_choice
|
What is a primary source of the hormone melatonin?
|
[
"thyroid gland",
"pituitary gland",
"thymus",
"pineal gland"
] |
D
|
Relavent Documents:
Document 0:::
The pineal gland (also known as the pineal body, conarium, or epiphysis cerebri) is a small endocrine gland in the brain of most vertebrates. The pineal gland produces melatonin, a serotonin-derived hormone which modulates sleep patterns in both circadian and seasonal cycles. The shape of the gland resembles a pine cone, which gives it its name. The pineal gland is located in the epithalamus, near the center of the brain, between the two hemispheres, tucked in a groove where the two halves of the thalamus join. It is one of the neuroendocrine secretory circumventricular organs in which capillaries are mostly permeable to solutes in the blood.
The pineal gland is present in almost all vertebrates, but is absent in protochordates, which have only a simple pineal homologue. The hagfish, considered a primitive vertebrate, has a rudimentary structure regarded as the "pineal equivalent" in the dorsal diencephalon. In some species of amphibians and reptiles, the gland is linked to a light-sensing organ, variously called the parietal eye, the pineal eye or the third eye. Reconstruction of the biological evolution pattern suggests that the pineal gland was originally a kind of atrophied photoreceptor that developed into a neuroendocrine organ.
Ancient Greeks were the first to notice the pineal gland and believed it to be a valve, a guardian for the flow of pneuma. Galen in the 2nd century C.E. could not find any functional role and regarded the gland as a structural support for the brain tissue. He gave the name konario, meaning cone or pinecone, which during Renaissance was translated to Latin as pinealis. In the 17th century, René Descartes revived the mystical purpose and described the gland as the "principal seat of the soul". In the mid-20th century, the real biological role as a neuroendocrine organ was established.
Etymology
The word pineal, from Latin pinea (pine-cone), was first used in the late 17th century to refer to the cone shape of the brain gland.
Str
Document 1:::
Uterine glands or endometrial glands are tubular glands, lined by a simple columnar epithelium, found in the functional layer of the endometrium that lines the uterus. Their appearance varies during the menstrual cycle. During the proliferative phase, uterine glands appear long due to estrogen secretion by the ovaries. During the secretory phase, the uterine glands become very coiled with wide lumens and produce a glycogen-rich secretion known as histotroph or uterine milk. This change corresponds with an increase in blood flow to spiral arteries due to increased progesterone secretion from the corpus luteum. During the pre-menstrual phase, progesterone secretion decreases as the corpus luteum degenerates, which results in decreased blood flow to the spiral arteries. The functional layer of the uterus containing the glands becomes necrotic, and eventually sloughs off during the menstrual phase of the cycle.
They are of small size in the unimpregnated uterus, but shortly after impregnation become enlarged and elongated, presenting a contorted or waved appearance.
Function
Hormones produced in early pregnancy stimulate the uterine glands to secrete a number of substances to give nutrition and protection to the embryo and fetus, and the fetal membranes. These secretions are known as histiotroph, alternatively histotroph, and also as uterine milk. Important uterine milk proteins are glycodelin-A, and osteopontin.
Some secretory components from the uterine glands are taken up by the secondary yolk sac lining the exocoelomic cavity during pregnancy, and may thereby assist in providing fetal nutrition.
Additional images
Document 2:::
The following is a list of hormones found in Homo sapiens. Spelling is not uniform for many hormones. For example, current North American and international usage uses estrogen and gonadotropin, while British usage retains the Greek digraph in oestrogen and favours the earlier spelling gonadotrophin.
Hormone listing
Steroid
Document 3:::
A melanotroph (or melanotrope) is a cell in the pituitary gland that generates melanocyte-stimulating hormone (α‐MSH) from its precursor pro-opiomelanocortin. Chronic stress can induce the secretion of α‐MSH in melanotrophs and lead to their subsequent degeneration.
See also
Chromophobe cell
Chromophil
Acidophil cell
Basophil cell
Oxyphil cell
Oxyphil cell (parathyroid)
Pituitary gland
Neuroendocrine cell
List of distinct cell types in the adult human body
Document 4:::
An anterior pituitary basophil is a type of cell in the anterior pituitary which manufactures hormones.
It is called a basophil because it is basophilic (readily takes up bases), and typically stains a relatively deep blue or purple.
These basophils are further classified by the hormones they produce. (It is usually not possible to distinguish between these cell types using standard staining techniques.)
See also
Chromophobe cell
Melanotroph
Chromophil
Acidophil cell
Oxyphil cell
Oxyphil cell (parathyroid)
Pituitary gland
Neuroendocrine cell
Basophilic
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a primary source of the hormone melatonin?
A. thyroid gland
B. pituitary gland
C. thymus
D. pineal gland
Answer:
|
|
ai2_arc-807
|
multiple_choice
|
In which way are rainforests and coral reefs different from ecosystems with few species?
|
[
"There are more prey than predators.",
"The food web is more stable and lasting.",
"Organisms often must compete for food.",
"Plant populations are the primary producers."
] |
B
|
Relevant Documents:
Document 0:::
Ecology: From Individuals to Ecosystems is a 2006 higher education textbook on general ecology written by Michael Begon, Colin R. Townsend and John L. Harper. Published by Blackwell Publishing, it is now in its fourth edition. The first three editions were published by Blackwell Science under the title Ecology: Individuals, Populations and Communities. Since it first became available it has had a positive reception, and has long been one of the leading textbooks on ecology.
Background and history
The book is written by Michael Begon of the University of Liverpool's School of Biosciences, Colin Townsend, from the Department of Zoology of New Zealand's University of Otago, and the University of Exeter's John L. Harper. The first edition was published in 1986. This was followed in 1990 with a second edition. The third edition became available in 1996. The most recent edition appeared in 2006 under the new subtitle From Individuals to Ecosystems.
One of the book's authors, John L. Harper, is now deceased. The fourth edition cover is an image of a mural on a Wellington street created by Christopher Meech and a group of urban artists to generate thought about the topic of environmental degradation. It reads "we did not inherit the earth from our ancestors, we borrowed it from our children."
Contents
Part 1. ORGANISMS
1. Organisms in their environments: the evolutionary backdrop
2. Conditions
3. Resources
4. Life, death and life histories
5. Intraspecific competition
6. Dispersal, dormancy and metapopulations
7. Ecological applications at the level of organisms and single-species populations
Part 2. SPECIES INTERACTIONS
8. Interspecific competition
9. The nature of predation
10. The population dynamics of predation
11. Decomposers and detritivores
12. Parasitism and disease
13. Symbiosis and mutualism
14. Abundance
15. Ecological applications at the level of population interactions
Part 3. COMMUNITIES AND ECOSYSTEMS
16. The nature of the community
17.
Document 1:::
John Harry Vandermeer (born 1940) is an American ecologist, a mathematical ecologist, tropical ecologist and agroecologist. He is the Asa Gray Distinguished University Professor of Ecology and Evolutionary Biology and the Arthur F. Thurnau Professor at the University of Michigan, where he has taught since 1971. His research focuses on the ecology of agricultural systems, and he has operated a plot of coffee plants in Mexico for his research for more than fifteen years. In 2016, the symposium "Science with Passion and a Moral Compass" was held to honor his career as a scientist and activist. The symposium, also known as VandyFest, was held in Ann Arbor, Michigan from May 6 to May 8.
Early life and education
Vandermeer was born in 1940 in Chicago, Illinois. He was educated at the University of Illinois, the University of Kansas, and the University of Michigan.
Vandermeer has conducted field research mainly in Mexico, Puerto Rico, Costa Rica, Nicaragua and Guatemala. His research has focused on the dynamics of spatially explicit biological interactions in coffee farms in Mexico.
His long-term collaboration with a multi-national team of scientists focused on tropical rainforest dynamics after major hurricane disturbance in Nicaragua. Their research provides strong evidence that the maintenance of hundreds of tree species is governed chiefly by the chance to reach a recruitment space in the forest canopy, and to a lesser extent by competition among tree species for nutrients and light. This diverges from the notion of niche identity for tropical tree species, proposing instead that tree species assemblages are to some extent the result of random dispersal and recruitment events.
Vandermeer and his colleagues Dr. Ivette Perfecto, Dr. Douglas Boucher and Dr. Inigo Granzow de la Cerda contributed to the groundwork that evolved into the university system in the Autonomous Regions of the Atlantic Coast of Nicaragua.
Document 2:::
Molecular ecology is a field of evolutionary biology that is concerned with applying molecular population genetics, molecular phylogenetics, and more recently genomics to traditional ecological questions (e.g., species diagnosis, conservation and assessment of biodiversity, species-area relationships, and many questions in behavioral ecology). It is virtually synonymous with the field of "Ecological Genetics" as pioneered by Theodosius Dobzhansky, E. B. Ford, Godfrey M. Hewitt, and others. These fields are united in their attempt to study genetic-based questions "out in the field" as opposed to the laboratory. Molecular ecology is related to the field of conservation genetics.
Methods frequently include using microsatellites to determine gene flow and hybridization between populations. The development of molecular ecology is also closely related to the use of DNA microarrays, which allows for the simultaneous analysis of the expression of thousands of different genes. Quantitative PCR may also be used to analyze gene expression as a result of changes in environmental conditions or different responses by differently adapted individuals.
Molecular ecology uses molecular genetic data to answer ecological questions related to biogeography, genomics, conservation genetics, and behavioral ecology. Studies mostly use data based on deoxyribonucleic acid (DNA) sequences. This approach has been enhanced over a number of years to allow researchers to sequence thousands of genes from a small amount of starting DNA. Allele sizes are another way researchers are able to compare individuals and populations, which allows them to quantify the genetic diversity within a population and the genetic similarities among populations.
Bacterial diversity
Molecular ecological techniques are used to study in situ questions of bacterial diversity. Many microorganisms are not easily obtainable as cultured strains in the laboratory, which would allow for identification and characterization. I
Document 3:::
In ecology, habitat refers to the array of resources, physical and biotic factors that are present in an area, such as to support the survival and reproduction of a particular species. A species habitat can be seen as the physical manifestation of its ecological niche. Thus "habitat" is a species-specific term, fundamentally different from concepts such as environment or vegetation assemblages, for which the term "habitat-type" is more appropriate.
The physical factors may include (for example): soil, moisture, range of temperature, and light intensity. Biotic factors include the availability of food and the presence or absence of predators. Every species has particular habitat requirements, with habitat generalist species able to thrive in a wide array of environmental conditions while habitat specialist species requiring a very limited set of factors to survive. The habitat of a species is not necessarily found in a geographical area, it can be the interior of a stem, a rotten log, a rock or a clump of moss; a parasitic organism has as its habitat the body of its host, part of the host's body (such as the digestive tract), or a single cell within the host's body.
Habitat types are environmental categorizations of different environments based on the characteristics of a given geographical area, particularly vegetation and climate. Thus habitat types do not refer to a single species but to multiple species living in the same area. For example, terrestrial habitat types include forest, steppe, grassland, semi-arid or desert. Fresh-water habitat types include marshes, streams, rivers, lakes, and ponds; marine habitat types include salt marshes, the coast, the intertidal zone, estuaries, reefs, bays, the open sea, the sea bed, deep water and submarine vents.
Habitat types may change over time. Causes of change may include a violent event (such as the eruption of a volcano, an earthquake, a tsunami, a wildfire or a change in oceanic currents); or change may occur mo
Document 4:::
Ecosystem diversity deals with the variations in ecosystems within a geographical location and its overall impact on human existence and the environment.
Ecosystem diversity addresses the combined characteristics of biotic properties (biodiversity) and abiotic properties (geodiversity). It is a variation in the ecosystems found in a region or the variation in ecosystems over the whole planet. Ecological diversity includes the variation in both terrestrial and aquatic ecosystems. Ecological diversity can also take into account the variation in the complexity of a biological community, including the number of different niches, the number of trophic levels, and other ecological processes. An example of ecological diversity on a global scale would be the variation in ecosystems, such as deserts, forests, grasslands, wetlands and oceans. Ecological diversity is the largest scale of biodiversity, and within each ecosystem, there is a great deal of both species and genetic diversity.
Impact
Diversity in the ecosystem is significant to human existence for a variety of reasons. Ecosystem diversity boosts the availability of oxygen via photosynthesis among the plant organisms in a habitat. Diversity in an aquatic environment helps in the purification of water by plant varieties for use by humans. Diversity increases the variety of plants, which serve as a good source of medicines and herbs for human use. A lack of diversity in the ecosystem produces the opposite result.
Examples
Some examples of ecosystems that are rich in diversity are:
Deserts
Forests
Large marine ecosystems
Marine ecosystems
Old-growth forests
Rainforests
Tundra
Coral reefs
Marine
Ecosystem diversity as a result of evolutionary pressure
Ecological diversity around the world can be directly linked to the evolutionary and selective pressures that constrain the diversity outcome of the ecosystems within different niches. Tundras, rainforests, coral reefs and deciduous forests all are form
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In which way are rainforests and coral reefs different from ecosystems with few species?
A. There are more prey than predators.
B. The food web is more stable and lasting.
C. Organisms often must compete for food.
D. Plant populations are the primary producers.
Answer:
|
|
sciq-1250
|
multiple_choice
|
During cellular respiration, carbons from the glucose molecule are changed back into what gas?
|
[
"carbon monoxide",
"carbon dioxide",
"deformation dioxide",
"liquid dioxide"
] |
B
|
Relevant Documents:
Document 0:::
Cellular respiration is the process by which biological fuels are oxidized in the presence of an inorganic electron acceptor, such as oxygen, to drive the bulk production of adenosine triphosphate (ATP), which contains energy. Cellular respiration may be described as a set of metabolic reactions and processes that take place in the cells of organisms to convert chemical energy from nutrients into ATP, and then release waste products.
Cellular respiration is a vital process that happens in the cells of living organisms, including humans, plants, and animals. It's how cells produce energy to power all the activities necessary for life.
The reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, producing large amounts of energy (ATP). Respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. The overall reaction occurs in a series of biochemical steps, some of which are redox reactions. Although cellular respiration is technically a combustion reaction, it is an unusual one because of the slow, controlled release of energy from the series of reactions.
Nutrients that are commonly used by animal and plant cells in respiration include sugar, amino acids and fatty acids, and the most common oxidizing agent is molecular oxygen (O2). The chemical energy stored in ATP (the bond of its third phosphate group to the rest of the molecule can be broken allowing more stable products to form, thereby releasing energy for use by the cell) can then be used to drive processes requiring energy, including biosynthesis, locomotion or transportation of molecules across cell membranes.
Aerobic respiration
Aerobic respiration requires oxygen (O2) in order to create ATP. Although carbohydrates, fats and proteins are consumed as reactants, aerobic respiration is the preferred method of pyruvate breakdown in glycolysis, and requires that pyruvate be transported to the mitochondria in order to be fully oxidized by the c
Document 1:::
Cellular waste products are formed as a by-product of cellular respiration, a series of processes and reactions that generate energy for the cell in the form of ATP. Two examples of cellular respiration pathways that create such waste products are aerobic respiration and anaerobic respiration.
Each pathway generates different waste products.
Aerobic respiration
When in the presence of oxygen, cells use aerobic respiration to obtain energy from glucose molecules.
Simplified Theoretical Reaction: C6H12O6 (aq) + 6O2 (g) → 6CO2 (g) + 6H2O (l) + ~ 30ATP
Cells undergoing aerobic respiration produce 6 molecules of carbon dioxide, 6 molecules of water, and up to 30 molecules of ATP (adenosine triphosphate), which is directly used to produce energy, from each molecule of glucose in the presence of surplus oxygen.
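As a quick illustration of the overall equation above, the following Python sketch (not part of the source; the coefficients are simply taken from the reaction as written) checks that the atoms balance on both sides of C6H12O6 + 6O2 → 6CO2 + 6H2O:

from collections import Counter

def scaled(formula, coefficient):
    # Multiply each element count in a formula by its stoichiometric coefficient.
    return Counter({element: n * coefficient for element, n in formula.items()})

GLUCOSE = Counter({"C": 6, "H": 12, "O": 6})
O2 = Counter({"O": 2})
CO2 = Counter({"C": 1, "O": 2})
H2O = Counter({"H": 2, "O": 1})

reactants = scaled(GLUCOSE, 1) + scaled(O2, 6)
products = scaled(CO2, 6) + scaled(H2O, 6)
assert reactants == products  # both sides carry 6 C, 12 H and 18 O
print(dict(reactants))  # {'C': 6, 'H': 12, 'O': 18}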
In aerobic respiration, oxygen serves as the recipient of electrons from the electron transport chain. Aerobic respiration is thus very efficient because oxygen is a strong oxidant.
Aerobic respiration proceeds in a series of steps, which also increases efficiency - since glucose is broken down gradually and ATP is produced as needed, less energy is wasted as heat. This strategy results in the waste products H2O and CO2 being formed in different amounts at different phases of respiration. CO2 is formed in Pyruvate decarboxylation, H2O is formed in oxidative phosphorylation, and both are formed in the citric acid cycle.
The simple nature of the final products also indicates the efficiency of this method of respiration. All of the energy stored in the carbon-carbon bonds of glucose is released, leaving CO2 and H2O. Although there is energy stored in the bonds of these molecules, this energy is not easily accessible by the cell. All usable energy is efficiently extracted.
Anaerobic respiration
Anaerobic respiration is done by aerobic organisms when there is not sufficient oxygen in a cell to undergo aerobic respiration as well as by cells called anaerobes that
Document 2:::
Digestion is the breakdown of carbohydrates to yield an energy-rich compound called ATP. The production of ATP is achieved through the oxidation of glucose molecules. In oxidation, the electrons are stripped from a glucose molecule to reduce NAD+ and FAD. NAD+ and FAD possess a high energy potential to drive the production of ATP in the electron transport chain. ATP production occurs in the mitochondria of the cell. There are two methods of producing ATP: aerobic and anaerobic.
In aerobic respiration, oxygen is required. Using oxygen increases ATP production from 4 ATP molecules to about 30 ATP molecules.
In anaerobic respiration, oxygen is not required. When oxygen is absent, the generation of ATP continues through fermentation. There are two types of fermentation: alcohol fermentation and lactic acid fermentation.
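To put the two yields quoted above side by side, here is a tiny arithmetic sketch (an illustration only; published ATP counts vary between roughly 30 and 38 per glucose depending on the source and its assumptions):

# ATP yield per glucose molecule, using the figures quoted in this passage.
ATP_WITHOUT_OXYGEN = 4   # anaerobic pathway, as stated above
ATP_WITH_OXYGEN = 30     # aerobic pathway, as stated above
ratio = ATP_WITH_OXYGEN / ATP_WITHOUT_OXYGEN
print(f"Aerobic respiration yields about {ratio:.1f} times the ATP")  # ~7.5x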
There are several different types of carbohydrates: polysaccharides (e.g., starch, amylopectin, glycogen, cellulose), monosaccharides (e.g., glucose, galactose, fructose, ribose) and the disaccharides (e.g., sucrose, maltose, lactose).
Glucose reacts with oxygen in the following reaction, C6H12O6 + 6O2 → 6CO2 + 6H2O. Carbon dioxide and water are waste products, and the overall reaction is exothermic.
The reaction of glucose with oxygen releasing energy in the form of molecules of ATP is therefore one of the most important biochemical pathways found in living organisms.
Glycolysis
Glycolysis, which means “sugar splitting,” is the initial process in the cellular respiration pathway. Glycolysis can be either an aerobic or anaerobic process. When oxygen is present, glycolysis continues along the aerobic respiration pathway. If oxygen is not present, then ATP production is restricted to anaerobic respiration. The location where glycolysis, aerobic or anaerobic, occurs is in the cytosol of the cell. In glycolysis, a six-carbon glucose molecule is split into two three-carbon molecules called pyruvate. These carbon molecules are oxidized into NADH and AT
Document 3:::
Ethanol fermentation, also called alcoholic fermentation, is a biological process which converts sugars such as glucose, fructose, and sucrose into cellular energy, producing ethanol and carbon dioxide as by-products. Because yeasts perform this conversion in the absence of oxygen, alcoholic fermentation is considered an anaerobic process. It also takes place in some species of fish (including goldfish and carp) where (along with lactic acid fermentation) it provides energy when oxygen is scarce.
Ethanol fermentation is the basis for alcoholic beverages, ethanol fuel and bread dough rising.
Biochemical process of fermentation of sucrose
The chemical equations below summarize the fermentation of sucrose (C12H22O11) into ethanol (C2H5OH). Alcoholic fermentation converts one mole of glucose into two moles of ethanol and two moles of carbon dioxide, producing two moles of ATP in the process.
C6H12O6 → 2 C2H5OH + 2 CO2
Sucrose is a sugar composed of a glucose linked to a fructose. In the first step of alcoholic fermentation, the enzyme invertase cleaves the glycosidic linkage between the glucose and fructose molecules.
C12H22O11 + H2O + invertase → 2 C6H12O6
Next, each glucose molecule is broken down into two pyruvate molecules in a process known as glycolysis. Glycolysis is summarized by the equation:
C6H12O6 + 2 ADP + 2 Pi + 2 NAD+ → 2 CH3COCOO− + 2 ATP + 2 NADH + 2 H2O + 2 H+
CH3COCOO− is pyruvate, and Pi is inorganic phosphate. Finally, pyruvate is converted to ethanol and CO2 in two steps, regenerating oxidized NAD+ needed for glycolysis:
1. CH3COCOO− + H+ → CH3CHO + CO2
catalyzed by pyruvate decarboxylase
2. CH3CHO + NADH + H+ → C2H5OH + NAD+
This reaction is catalyzed by alcohol dehydrogenase (ADH1 in baker's yeast).
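Chaining the mole ratios from the equations above gives a simple overall accounting for sucrose. The Python sketch below is an illustration under the idealized assumption of complete conversion with no side reactions:

def ferment_sucrose(mol_sucrose):
    # Invertase cleaves each sucrose into one glucose and one fructose,
    # both hexoses that yeast can ferment via glycolysis.
    mol_hexose = 2 * mol_sucrose
    mol_ethanol = 2 * mol_hexose  # 2 ethanol per hexose
    mol_co2 = 2 * mol_hexose      # 2 CO2 per hexose
    mol_atp = 2 * mol_hexose      # net 2 ATP per hexose from glycolysis
    return mol_ethanol, mol_co2, mol_atp

print(ferment_sucrose(1.0))  # (4.0, 4.0, 4.0)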
Document 4:::
Catabolism () is the set of metabolic pathways that breaks down molecules into smaller units that are either oxidized to release energy or used in other anabolic reactions. Catabolism breaks down large molecules (such as polysaccharides, lipids, nucleic acids, and proteins) into smaller units (such as monosaccharides, fatty acids, nucleotides, and amino acids, respectively). Catabolism is the breaking-down aspect of metabolism, whereas anabolism is the building-up aspect.
Cells use the monomers released from breaking down polymers to either construct new polymer molecules or degrade the monomers further to simple waste products, releasing energy. Cellular wastes include lactic acid, acetic acid, carbon dioxide, ammonia, and urea. The formation of these wastes is usually an oxidation process involving a release of chemical free energy, some of which is lost as heat, but the rest of which is used to drive the synthesis of adenosine triphosphate (ATP). This molecule acts as a way for the cell to transfer the energy released by catabolism to the energy-requiring reactions that make up anabolism.
Catabolism is a destructive metabolism and anabolism is a constructive metabolism. Catabolism, therefore, provides the chemical energy necessary for the maintenance and growth of cells. Examples of catabolic processes include glycolysis, the citric acid cycle, the breakdown of muscle protein in order to use amino acids as substrates for gluconeogenesis, the breakdown of fat in adipose tissue to fatty acids, and oxidative deamination of neurotransmitters by monoamine oxidase.
Catabolic hormones
There are many signals that control catabolism. Most of the known signals are hormones and the molecules involved in metabolism itself. Endocrinologists have traditionally classified many of the hormones as anabolic or catabolic, depending on which part of metabolism they stimulate. The so-called classic catabolic hormones known since the early 20th century are cortisol, glucagon, and
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
During cellular respiration, carbons from the glucose molecule are changed back into what gas?
A. carbon monoxide
B. carbon dioxide
C. deformation dioxide
D. liquid dioxide
Answer:
|
|
sciq-9767
|
multiple_choice
|
X rays and what other type of electromagnetic wave has the shortest wavelengths and the highest frequencies?
|
[
"plasma rays",
"ultraviolet rays",
"gamma rays",
"X-rays"
] |
C
|
Relevant Documents:
Document 0:::
Terahertz radiation – also known as submillimeter radiation, terahertz waves, tremendously high frequency (THF), T-rays, T-waves, T-light, T-lux or THz – consists of electromagnetic waves within the ITU-designated band of frequencies from 0.3 to 3 terahertz (THz), although the upper boundary is somewhat arbitrary and is considered by some sources as 30 THz. One terahertz is 1012 Hz or 1000 GHz. Wavelengths of radiation in the terahertz band correspondingly range from 1 mm to 0.1 mm = 100 µm. Because terahertz radiation begins at a wavelength of around 1 millimeter and proceeds into shorter wavelengths, it is sometimes known as the submillimeter band, and its radiation as submillimeter waves, especially in astronomy. This band of electromagnetic radiation lies within the transition region between microwave and far infrared, and can be regarded as either.
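Because the band edges above are defined by frequency, the corresponding wavelengths follow directly from lambda = c / f. A minimal Python sketch (the frequencies used are just the ITU band edges quoted above):

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def wavelength_mm(frequency_thz):
    # Convert a frequency in THz to a free-space wavelength in millimetres.
    return SPEED_OF_LIGHT / (frequency_thz * 1e12) * 1e3

for f_thz in (0.3, 1.0, 3.0):
    print(f"{f_thz} THz -> {wavelength_mm(f_thz):.3f} mm")
# 0.3 THz -> 0.999 mm, 1.0 THz -> 0.300 mm, 3.0 THz -> 0.100 mm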
At some frequencies, terahertz radiation is strongly absorbed by the gases of the atmosphere, and in air is attenuated to zero within a few meters, so it is not practical for terrestrial radio communication at such frequencies. However, there are frequency windows in Earth's atmosphere, where the terahertz radiation could propagate up to 1 km or even longer depending on atmospheric conditions. The most important is the 0.3 THz band that will be used for 6G communications. It can penetrate thin layers of materials but is blocked by thicker objects. THz beams transmitted through materials can be used for material characterization, layer inspection, relief measurement, and as a lower-energy alternative to X-rays for producing high resolution images of the interior of solid objects.
Terahertz radiation occupies a middle ground where the ranges of microwaves and infrared light waves overlap, known as the “terahertz gap”; it is called a “gap” because the technology for its generation and manipulation is still in its infancy. The generation and modulation of electromagnetic waves in this frequency range ceases to be pos
Document 1:::
The transmission curve or transmission characteristic is the mathematical function or graph that describes the transmission fraction of an optical or electronic filter as a function of frequency or wavelength. It is an instance of a transfer function but, unlike the case of, for example, an amplifier, output never exceeds input (maximum transmission is 100%). The term is often used in commerce, science, and technology to characterise filters.
The term has also long been used in fields such as geophysics and astronomy to characterise the properties of regions through which radiation passes, such as the ionosphere.
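As a toy model of such a curve, the Python sketch below is purely illustrative: the Gaussian passband shape and its parameters are assumptions, not properties of any real filter. It respects the defining constraint that transmission never exceeds 100%:

import math

def transmission(freq_hz, center_hz=5e14, width_hz=5e13, peak=0.9):
    # Gaussian bandpass; the maximum value is 'peak', kept <= 1.0
    # so that output never exceeds input.
    return peak * math.exp(-((freq_hz - center_hz) / width_hz) ** 2)

for f in (4e14, 5e14, 6e14):
    print(f"{f:.1e} Hz -> {transmission(f):.3f}")
# 4.0e+14 Hz -> 0.016, 5.0e+14 Hz -> 0.900, 6.0e+14 Hz -> 0.016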
See also
Electronic filter — examples of transmission characteristics of electronic filters
Document 2:::
In the physical sciences, the term spectrum was introduced first into optics by Isaac Newton in the 17th century, referring to the range of colors observed when white light was dispersed through a prism.
Soon the term referred to a plot of light intensity or power as a function of frequency or wavelength, also known as a spectral density plot.
Later it expanded to apply to other waves, such as sound waves and sea waves that could also be measured as a function of frequency (e.g., noise spectrum, sea wave spectrum). It has also been expanded to more abstract "signals", whose power spectrum can be analyzed and processed. The term now applies to any signal that can be measured or decomposed along a continuous variable, such as energy in electron spectroscopy or mass-to-charge ratio in mass spectrometry. Spectrum is also used to refer to a graphical representation of the signal as a function of the dependent variable.
Etymology
Electromagnetic spectrum
Electromagnetic spectrum refers to the full range of all frequencies of electromagnetic radiation, and also to the characteristic distribution of electromagnetic radiation emitted or absorbed by a particular object. Devices used to measure an electromagnetic spectrum are called spectrographs or spectrometers. The visible spectrum is the part of the electromagnetic spectrum that can be seen by the human eye. The wavelength of visible light ranges from 390 to 700 nm. The absorption spectrum of a chemical element or chemical compound is the spectrum of frequencies or wavelengths of incident radiation that are absorbed by the compound due to electron transitions from a lower to a higher energy state. The emission spectrum refers to the spectrum of radiation emitted by the compound due to electron transitions from a higher to a lower energy state.
Light from many different sources contains various colors, each with its own brightness or intensity. A rainbow, or prism, sends these component colors in different direction
Document 3:::
In infrared astronomy, the L band is an atmospheric transmission window centred on 3.5 micrometres (in the mid-infrared).
Document 4:::
The IEEE Heinrich Hertz Medal was a science award presented by the IEEE for outstanding achievements in the field of electromagnetic waves. The medal was named in honour of German physicist Heinrich Hertz, and was first proposed in 1986 by IEEE Region 8 (Germany) as a centennial recognition of Hertz's work on electromagnetic radiation theory from 1886 to 1891. The medal was first awarded in 1988, and was presented annually until 2001. It was officially discontinued in November 2009.
Recipients
1988: Hans-Georg Unger (Technical University at Brunswick, Germany) for outstanding merits in radio-frequency science, particularly the theory of dielectric wave guides and their application in modern wide-band communication.
1989: Nathan Marcuvitz (Polytechnic University of New York, United States) for fundamental theoretical and experimental contributions to the engineering formulation of electromagnetic field theory.
1990: John D. Kraus (Ohio State University, United States) for pioneering work in radio astronomy and the development of the helical antenna and the corner reflector antenna.
1991: Leopold B. Felsen (Polytechnic University of New York, United States) for highly original and significant developments in the theories of propagation, diffraction and dispersion of electromagnetic waves.
1992: James R. Wait (University of Arizona, United States) for fundamental contributions to electromagnetic theory, to the study of propagation of Hertzian waves through the atmosphere, ionosphere and the Earth, and to their applications in communications, navigation and geophysical exploration.
1993: Kenneth Budden (Cavendish Laboratory, University of Cambridge, United Kingdom) for major original contributions to the theory of electromagnetic waves in ionized media with applications to terrestrial and space communications.
1994: Ronald N. Bracewell (Stanford University, United States) for pioneering work in antenna aperture synthesis and image reconstruction as applied to radioast
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
X rays and what other type of electromagnetic wave has the shortest wavelengths and the highest frequencies?
A. plasma rays
B. ultraviolet rays
C. gamma rays
D. X-rays
Answer:
|
|
ai2_arc-978
|
multiple_choice
|
A scientist working on a new package design wants to use a material that is highly recyclable, biodegradable, and inexpensive. The best material for the package design is
|
[
"aluminum.",
"cardboard.",
"plastic.",
"glass."
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
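The correct choice is "decreases", which a one-line calculation confirms. The sketch below is a hedged numeric check assuming a reversible adiabatic process in a monatomic ideal gas, for which T * V**(gamma - 1) is constant:

GAMMA = 5 / 3  # heat capacity ratio for a monatomic ideal gas (assumption)
T1, V1, V2 = 300.0, 1.0, 2.0  # initial temperature (K); the volume doubles
T2 = T1 * (V1 / V2) ** (GAMMA - 1)
print(f"T2 = {T2:.1f} K")  # T2 = 189.0 K, below 300 K: the gas cools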
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 3:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course is nevertheless considered very challenging and one of the most difficult AP classes, as shown by its exam grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices, which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover )
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, )
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson )
See also
Glossary of biology
A.P Bio (TV Show)
Document 4:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about the subject is then a subset of Q; the set of
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A scientist working on a new package design wants to use a material that is highly recyclable, biodegradable, and inexpensive. The best material for the package design is
A. aluminum.
B. cardboard.
C. plastic.
D. glass.
Answer:
|
|
scienceQA-1840
|
multiple_choice
|
Select the fish below.
|
[
"American bullfrog",
"whale shark",
"sea turtle",
"yak"
] |
B
|
An American bullfrog is an amphibian. It has moist skin and begins its life in water.
Frogs live near water or in damp places. Most frogs lay their eggs in water.
A yak is a mammal. It has hair and feeds its young milk.
Yaks live in cold places. Their long hair helps keep them warm.
A sea turtle is a reptile. It has scaly, waterproof skin.
Sea turtles live in the water, but they lay their eggs on land.
A whale shark is a fish. It lives underwater. It has fins, not limbs.
Whale sharks are the largest fish in the world! Adult whale sharks can weigh over 21 tons—as much as seven elephants!
|
Relevant Documents:
Document 0:::
Fish intelligence is "the resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills" as it applies to fish.
According to Culum Brown from Macquarie University, "Fish are more intelligent than they appear. In many areas, such as memory, their cognitive powers match or exceed those of ‘higher’ vertebrates including non-human primates."
Fish hold records for the relative brain weights of vertebrates. Most vertebrate species have similar brain-to-body mass ratios. The deep sea bathypelagic bony-eared assfish has the smallest ratio of all known vertebrates. At the other extreme, the electrogenic elephantnose fish, an African freshwater fish, has one of the largest brain-to-body weight ratios of all known vertebrates (slightly higher than humans) and the highest brain-to-body oxygen consumption ratio of all known vertebrates (three times that for humans).
Brain
Fish typically have quite small brains relative to body size compared with other vertebrates, typically one-fifteenth the brain mass of a similarly sized bird or mammal. However, some fish have relatively large brains, most notably mormyrids and sharks, which have brains about as massive relative to body weight as birds and marsupials.
The cerebellum of cartilaginous and bony fishes is large and complex. In at least one important respect, it differs in internal structure from the mammalian cerebellum: The fish cerebellum does not contain discrete deep cerebellar nuclei. Instead, the primary targets of Purkinje cells are a distinct type of cell distributed across the cerebellar cortex, a type not seen in mammals. The circuits in the cerebellum are similar across all classes of vertebrates, including fish, reptiles, birds, and mammals. There is also an analogous brain structure in cephalopods with well-developed brains, such as octopuses. This has been taken as evidence that the cerebellum performs functions important to
Document 1:::
The Digital Fish Library (DFL) is a University of California San Diego project funded by the Biological Infrastructure Initiative (DBI) of the National Science Foundation (NSF). The DFL creates 2D and 3D visualizations of the internal and external anatomy of fish obtained with magnetic resonance imaging (MRI) methods and makes these publicly available on the web.
The information core for the Digital Fish Library is generated using high-resolution MRI scanners housed at the Center for functional magnetic resonance imaging (CfMRI) multi-user facility at UC San Diego. These instruments use magnetic fields to take 3D images of animal tissues, allowing researchers to non-invasively see inside them and quantitatively describe their 3D anatomy. Fish specimens are obtained from the Marine Vertebrate Collection at Scripps Institution of Oceanography (SIO) and imaged by staff from UC San Diego's Center for Scientific Computation in Imaging (CSCI).
As of February 2010, the Digital Fish Library contains almost 300 species covering all five classes of fish, 56 of 60 orders, and close to 200 of the 521 fish families as described by Nelson, 2006. DFL imaging has also contributed to a number of published peer-reviewed scientific studies.
Digital Fish Library work has been featured in the media, including two National Geographic documentaries: Magnetic Navigator and Ultimate Shark.
Document 2:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
Document 3:::
Fisheries science is the academic discipline of managing and understanding fisheries. It is a multidisciplinary science, which draws on the disciplines of limnology, oceanography, freshwater biology, marine biology, meteorology, conservation, ecology, population dynamics, economics, statistics, decision analysis, management, and many others in an attempt to provide an integrated picture of fisheries. In some cases new disciplines have emerged, as in the case of bioeconomics and fisheries law. Because fisheries science is such an all-encompassing field, fisheries scientists often use methods from a broad array of academic disciplines. Over the most recent several decades, there have been declines in fish stocks (populations) in many regions along with increasing concern about the impact of intensive fishing on marine and freshwater biodiversity.
Fisheries science is typically taught in a university setting, and can be the focus of an undergraduate, master's or Ph.D. program. Some universities offer fully integrated programs in fisheries science. Graduates of university fisheries programs typically find employment as scientists, fisheries managers of both recreational and commercial fisheries, researchers, aquaculturists, educators, environmental consultants and planners, conservation officers, and many others.
Fisheries research
Because fisheries take place in a diverse set of aquatic environments (i.e., high seas, coastal areas, large and small rivers, and lakes of all sizes), research requires different sampling equipment, tools, and techniques. For example, studying trout populations inhabiting mountain lakes requires a very different set of sampling tools than, say, studying salmon in the high seas. Ocean fisheries research vessels (FRVs) often require platforms which are capable of towing different types of fishing nets, collecting plankton or water samples from a range of depths, and carrying acoustic fish-finding equipment. Fisheries research vessels a
Document 4:::
Edward Brinton (January 12, 1924 – January 13, 2010) was a professor of oceanography and research biologist. His particular area of expertise was Euphausiids or krill, small shrimp-like creatures found in all the oceans of the world.
Early life
Brinton was born on January 12, 1924, in Richmond, Indiana to a Quaker couple, Howard Brinton and Anna Shipley Cox Brinton. Much of his childhood was spent on the grounds of Mills College where his mother was Dean of Faculty and his father was a professor. The family later moved to the Pendle Hill Quaker Center for Study and Contemplation, in Pennsylvania where his father and mother became directors.
Academic career
Brinton attended High School at Westtown School in Chester County, Pennsylvania. He studied at Haverford College and graduated in 1949 with a bachelor's degree in biology. He enrolled at Scripps Institution of Oceanography as a graduate student in 1950 and was awarded a Ph.D. in 1957. He continued on as a research biologist in the Marine Life Research Group, part of the CalCOFI program. He soon turned his dissertation into a major publication, The Distribution of Pacific Euphausiids. In this large monograph, he laid out the major biogeographic provinces of the Pacific (and part of the Atlantic), large-scale patterns of pelagic diversity and one of the most rational hypotheses for the mechanism of sympatric, oceanic speciation. In all of these studies the role of physical oceanography and circulation played a prominent part. His work has since been validated by others and continues, to this day, to form the basis for our attempts to understand large-scale pelagic ecology and the role of physics of the movement of water in the regulation of pelagic ecosystems. In addition to these studies he has led in the studies of how climatic variations have led to the large variations in the California Current, and its populations and communities. He has described several new species and, in collaboration with Margaret K
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Select the fish below.
A. American bullfrog
B. whale shark
C. sea turtle
D. yak
Answer:
|
ai2_arc-548
|
multiple_choice
|
What is the primary reason for providing detailed, accurate records from scientific investigations?
|
[
"to make reports longer",
"so results can be published",
"to demonstrate professionalism",
"so experiments can be replicated"
] |
D
|
Relevant Documents:
Document 0:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 3:::
A scholar is a person who is a researcher or has expertise in an academic discipline. A scholar can also be an academic, who works as a professor, teacher, or researcher at a university. An academic usually holds an advanced degree or a terminal degree, such as a master's degree or a doctorate (PhD). Independent scholars and public intellectuals work outside of the academy yet may publish in academic journals and participate in scholarly public discussion.
Definitions
In contemporary English usage, the term scholar sometimes is equivalent to the term academic, and describes a university-educated individual who has achieved intellectual mastery of an academic discipline, as instructor and as researcher. Moreover, before the establishment of universities, the term scholar identified and described an intellectual person whose primary occupation was professional research. In 1847, minister Emanuel Vogel Gerhart spoke of the role of the scholar in society:
Gerhart argued that a scholar can not be focused on a single discipline, contending that knowledge of multiple disciplines is necessary to put each into context and to inform the development of each:
A 2011 examination outlined the following attributes commonly accorded to scholars as "described by many writers, with some slight variations in the definition":
Scholars may rely on the scholarly method or scholarship, a body of principles and practices used by scholars to make their claims about the world as valid and trustworthy as possible, and to make them known to the scholarly public. It is the methods that systemically advance the teaching, research, and practice of a given scholarly or academic field of study through rigorous inquiry. Scholarship is creative, can be documented, can be replicated or elaborated, and can be and is peer-reviewed through various methods.
Role in society
Scholars have generally been upheld as creditable figures of high social standing, who are engaged in work important to society.
Document 4:::
LabTV is an online hub where people, labs, and organizations engaged in medical research come together to tell their stories. LabTV has filmed hundreds of medical researchers at dozens of institutions across the United States, including dozens at the National Institutes of Health.
Brief History
LabTV is a private company that was founded in 2013 by entrepreneur Jay Walker as a way to help get more students to consider a career in medical research. In 2014, Mr. Walker and LabTV’s executive producer David Hoffman received Disruptor Innovation Awards at the 2014 Tribeca Film Festival for LabTV’s work in getting university students around the country to create short personal interviews of National Institutes of Health-funded medical researchers.
Winners of the LabTV contest included student filmmakers from Columbia University, the University of Delaware, Cornell University, University of Hawaii, University of Pennsylvania, Tufts University, George Washington University, the University of Virginia, The University of Chicago, and the University of Georgia among others. LabTV continues to film medical researchers at dozens of universities and organizations, including the National Institutes of Health and Georgetown University.
See also
National Institutes of Health
Medical research
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary reason for providing detailed, accurate records from scientific investigations?
A. to make reports longer
B. so results can be published
C. to demonstrate professionalism
D. so experiments can be replicated
Answer:
|
|
ai2_arc-758
|
multiple_choice
|
Due to the increasing average temperature of the atmosphere, polar ice sheets melt at a greater rate than they form. Which of these will be an effect of the continued melting of polar ice?
|
[
"A major reservoir of fresh water will decrease.",
"Plant life will increase due to higher sea levels.",
"Water runoff will cause an increase in ocean salinity.",
"Ocean temperature will decrease with the addition of cold water."
] |
A
|
Relevant Documents:
Document 0:::
Between 1901 and 2018, the average global sea level rose by , or an average of 1–2 mm per year. This rate accelerated to 4.62 mm/yr for the decade 2013–2022. Climate change due to human activities is the main cause. Between 1993 and 2018, thermal expansion of water accounted for 42% of sea level rise. Melting temperate glaciers accounted for 21%, with Greenland accounting for 15% and Antarctica 8%. Sea level rise lags changes in the Earth's temperature. So sea level rise will continue to accelerate between now and 2050 in response to warming that is already happening. What happens after that will depend on what happens with human greenhouse gas emissions. Sea level rise may slow down between 2050 and 2100 if there are deep cuts in emissions. It could then reach a little over from now by 2100. With high emissions it may accelerate. It could rise by or even by then. In the long run, sea level rise would amount to over the next 2000 years if warming amounts to . It would be if warming peaks at .
Rising seas ultimately impact every coastal and island population on Earth. This can be through flooding, higher storm surges, king tides, and tsunamis. These have many knock-on effects. They lead to loss of coastal ecosystems like mangroves. Crop production falls because of salinization of irrigation water, and damage to ports disrupts sea trade. The sea level rise projected by 2050 will expose places currently inhabited by tens of millions of people to annual flooding. Without a sharp reduction in greenhouse gas emissions, this may increase to hundreds of millions in the latter decades of the century. Areas not directly exposed to rising sea levels could be affected by large scale migrations and economic disruption.
At the same time, local factors like tidal range or land subsidence, as well as the varying resilience and adaptive capacity of individual ecosystems, sectors, and countries will greatly affect the severity of impacts. For instance, sea level rise along the
Document 1:::
Measurement of sea ice is important for safety of navigation and for monitoring the environment, particularly the climate. Sea ice extent interacts with large climate patterns such as the North Atlantic oscillation and Atlantic Multidecadal Oscillation, to name just two, and influences climate in the rest of the globe.
The amount of sea ice coverage in the arctic has been of interest for centuries, as the Northwest Passage was of high interest for trade and seafaring. There is a longstanding history of records and measurements of some effects of the sea ice extent, but comprehensive measurements were sparse till the 1950s and started with the satellite era in the late 1970s. Modern direct records include data about ice extent, ice area, concentration, thickness, and the age of the ice. The current trends in the records show a significant decline in Northern hemisphere sea ice and a small but statistically significant increase in the winter Southern hemisphere sea ice.
Furthermore, current research comprises and establishes extensive sets of multi-century historical records of arctic and subarctic sea ice and uses, among others, high-resolution paleo-proxy sea-ice records. The arctic sea ice is a dynamic climate-system component and is linked to the Atlantic multidecadal variability and the historical climate over various decades. There are cyclical changes of sea ice patterns, but so far no clear patterns based on modeling predictions.
Methods of measuring sea ice
Early observations
Records assembled by Vikings showing the number of weeks per year that ice occurred along the north coast of Iceland date back to A.D. 870, but a more complete record exists since 1600. More extensive written records of Arctic sea ice date back to the mid-18th century. The earliest of those records relate to Northern Hemisphere shipping lanes, but records from that period are sparse. Air temperature records dating back to the 1880s can serve as a stand-in (proxy) for Arctic sea ice,
Document 2:::
Melt ponds are pools of open water that form on sea ice in the warmer months of spring and summer. The ponds are also found on glacial ice and ice shelves. Ponds of melted water can also develop under the ice, which may lead to the formation of thin underwater ice layers called false bottoms.
Melt ponds are usually darker than the surrounding ice, and their distribution and size is highly variable. They absorb solar radiation rather than reflecting it as ice does and, thereby, have a significant influence on Earth's radiation balance. This differential, which had not been scientifically investigated until recently, has a large effect on the rate of ice melting and the extent of ice cover.
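To make the albedo contrast concrete, here is a toy calculation, an added sketch in which the albedo and insolation values are round illustrative numbers rather than measurements from the text:

```python
# Toy comparison of absorbed shortwave flux for bare sea ice vs. a melt pond.
# Assumed values: albedo ~0.6 for bare ice, ~0.2 for a pond, and 300 W/m^2
# of incident sunlight; all three are illustrative, not measured.

def absorbed_flux(incident_w_m2: float, albedo: float) -> float:
    """Shortwave flux absorbed by a surface: (1 - albedo) * incident."""
    return (1.0 - albedo) * incident_w_m2

INCIDENT = 300.0    # W/m^2, assumed summer Arctic insolation
ICE_ALBEDO = 0.6    # assumed bare-ice albedo
POND_ALBEDO = 0.2   # assumed melt-pond albedo

ice = absorbed_flux(INCIDENT, ICE_ALBEDO)    # 120 W/m^2
pond = absorbed_flux(INCIDENT, POND_ALBEDO)  # 240 W/m^2
print(f"ice absorbs {ice:.0f} W/m^2, pond absorbs {pond:.0f} W/m^2 "
      f"({pond / ice:.1f}x more)")
```

With these round numbers a pond absorbs twice the energy of the surrounding ice, which is why ponding accelerates further melt.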
Melt ponds can melt through to the ocean's surface. Seawater entering the pond increases the melt rate because the salty water of the ocean is warmer than the fresh water of the pond. The increase in salinity also depresses the water's freezing point.
Water from melt ponds over land surface can run into crevasses or moulins – tubes leading under ice sheets or glaciers – turning into meltwater. The water may reach the underlying rock. The effect is an increase in the rate of ice flow to the oceans, as the fluid behaves like a lubricant in the basal sliding of glaciers.
Effects of melt ponds
The effects of melt ponds are diverse (this subsection refers to melt ponds on ice sheets and ice shelves). Research by Ted Scambos, of the National Snow and Ice Data Center, has supported the melt water fracturing theory that suggests the melting process associated with melt ponds has a substantial effect on ice shelf disintegration.
Seasonal melt ponding and penetration under glaciers produces seasonal acceleration and deceleration of ice flows, affecting whole ice sheets. Accumulated changes from ponding on ice sheets appear in the earthquake record of Greenland and other glaciers:
"Quakes ranged from six to 15 per year from 1993 to 2002, then jumped to 20 in 2003, 23 in 2004, and 32 in th
Document 3:::
The International Arctic Research Center, or IARC, established in 1999, is a research institution focused on integrating and coordinating study of Climate change in the Arctic. The primary partners in IARC are Japan and the United States. Participants include organizations from Canada, China, Denmark, Germany, Japan, Norway, Russia, the United Kingdom, and the United States.
Overview
The Center is located at the University of Alaska Fairbanks, in the Syun-Ichi Akasofu Building. The Keith B. Mather Library is the science library housed in the Akasofu Building, serving IARC and the Geophysical Institute of UAF. The building also houses the UAF atmospheric sciences department, the Center for Global Change and the Fairbanks forecast office of the National Weather Service.
Study projects are focused within four major themes:
Arctic ocean models and observation
Arctic atmosphere: feedbacks, radiation, and weather analysis
Permafrost/Frozen soil models and observations
Arctic biota/vegetation (ecosystem models)
IARC is devoting specific effort to answering the following three questions:
To what extent is climate change due to natural vs man-made causes?
What parameters, processes and interactions are needed to understand and predict future climate change?
What are the likely impacts of climate change?
Document 4:::
In earth science, global surface temperature (GST; sometimes referred to as global mean surface temperature, GMST, or global average surface temperature) is calculated by averaging the temperatures over sea and land. Periods of global cooling and global warming have alternated throughout Earth's history.
Series of reliable global temperature measurements began in the 1850–1880 time frame. Through 1940, the average annual temperature increased, but was relatively stable between 1940 and 1975. Since 1975, it has increased by roughly 0.15 °C to 0.20 °C per decade, to at least 1.1 °C (1.9 °F) above 1880 levels. The current annual GMST is about , though monthly temperatures can vary almost above or below this figure.
Sea levels have risen and fallen sharply during Earth's 4.6 billion year history. However, recent global sea level rise, driven by increasing global surface temperatures, has increased over the average rate of the past two to three thousand years. The continuation or acceleration of this trend will cause significant changes in the world's coastlines.
Background
In the 1860s, physicist John Tyndall recognized the Earth's natural greenhouse effect and suggested that slight changes in the atmospheric composition could bring about climatic variations. In 1896, a seminal paper by Swedish scientist Svante Arrhenius first predicted that changes in the levels of carbon dioxide in the atmosphere could substantially alter the surface temperature through the greenhouse effect.
Changes in global temperatures over the past century provide evidence for the effects of increasing greenhouse gases. When the climate system reacts to such changes, climate change follows. Measurement of the GST (global surface temperature) is one of the many lines of evidence supporting the scientific consensus on climate change, which is that humans are causing warming of Earth's climate system.
Warming oceans
With the Earth's temperature increasing, the ocean has absorbed much of th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Due to the increasing average temperature of the atmosphere, polar ice sheets melt at a greater rate than they form. Which of these will be an effect of the continued melting of polar ice?
A. A major reservoir of fresh water will decrease.
B. Plant life will increase due to higher sea levels.
C. Water runoff will cause an increase in ocean salinity.
D. Ocean temperature will decrease with the addition of cold water.
Answer:
|
|
sciq-10972
|
multiple_choice
|
Like earth, the moon has a distinct crust, mantle, and what else?
|
[
"temperature",
"polarity",
"core",
"atmosphere"
] |
C
|
Relevant Documents:
Document 0:::
Earth's crustal evolution involves the formation, destruction and renewal of the rocky outer shell at that planet's surface.
The variation in composition within the Earth's crust is much greater than that of other terrestrial planets. Mars, Venus, Mercury and other planetary bodies have relatively quasi-uniform crusts unlike that of the Earth which contains both oceanic and continental plates. This unique property reflects the complex series of crustal processes that have taken place throughout the planet's history, including the ongoing process of plate tectonics.
The proposed mechanisms regarding Earth's crustal evolution take a theory-orientated approach. Fragmentary geologic evidence and observations provide the basis for hypothetical solutions to problems relating to the early Earth system. Therefore, a combination of these theories creates both a framework of current understanding and also a platform for future study.
Early crust
Mechanisms of early crust formation
The early Earth was entirely molten. This was due to high temperatures created and maintained by the following processes:
Compression of the early atmosphere
Rapid axial rotation
Regular impacts with neighbouring planetesimals.
The mantle remained hotter than modern day temperatures throughout the Archean. Over time the Earth began to cool as planetary accretion slowed and heat stored within the magma ocean was lost to space through radiation.
A theory for the initiation of magma solidification states that once cool enough, the cooler base of the magma ocean would begin to crystallise first. This is because pressures of 25 GPa at the base cause the solidus to lower. The formation of a thin 'chill-crust' at the extreme surface would provide thermal insulation to the shallow subsurface, keeping it warm enough to maintain the mechanism of crystallisation from the deep magma ocean.
The composition of the crystals produced during the crystallisation of the magma ocean varied with depth. Ex
Document 1:::
Lunar swirls are enigmatic features found across the Moon's surface, which are characterized by having a high albedo, appearing optically immature (i.e. having the optical characteristics of a relatively young regolith), and (often) having a sinuous shape. Their curvilinear shape is often accentuated by low albedo regions that wind between the bright swirls. They appear to overlay the lunar surface, superposed on craters and ejecta deposits, but impart no observable topography. Swirls have been identified on the lunar maria and on highlands - they are not associated with a specific lithologic composition. Swirls on the maria are characterized by strong albedo contrasts and complex, sinuous morphology, whereas those on highland terrain appear less prominent and exhibit simpler shapes, such as single loops or diffuse bright spots.
Association with magnetic anomalies
The lunar swirls are coincident with regions of the magnetic field of the Moon with relatively high strength on a planetary body that lacks, and may never have had, an active core dynamo with which to generate its own magnetic field. Every swirl has an associated magnetic anomaly, but not every magnetic anomaly has an identifiable swirl. Orbital magnetic field mapping by the Apollo 15 and 16 sub-satellites, Lunar Prospector, and Kaguya show regions with a local magnetic field. Because the Moon has no currently active global magnetic field, these regional anomalies are regions of remnant magnetism; their origin remains controversial.
Formation models
There are three leading models for swirl formation. Each model must address two characteristics of lunar swirls formation, namely that a swirl is optically immature, and that it is associated with magnetic anomaly.
Models for creation of the magnetic anomalies associated with lunar swirls point to the observation that several of the magnetic anomalies are antipodal to the younger, large impact basins on the Moon.
Cometary impact model
This theory argues tha
Document 2:::
The internal structure of Earth is the layers of the Earth, excluding its atmosphere and hydrosphere. The structure consists of an outer silicate solid crust, a highly viscous asthenosphere and solid mantle, a liquid outer core whose flow generates the Earth's magnetic field, and a solid inner core.
Scientific understanding of the internal structure of Earth is based on observations of topography and bathymetry, observations of rock in outcrop, samples brought to the surface from greater depths by volcanoes or volcanic activity, analysis of the seismic waves that pass through Earth, measurements of the gravitational and magnetic fields of Earth, and experiments with crystalline solids at pressures and temperatures characteristic of Earth's deep interior.
Global properties
"Note: In chondrite model (1), the light element in the core is assumed to be Si. Chondrite model (2) is a model of chemical composition of the mantle corresponding to the model of core shown in chondrite model (1)."Measurements of the force exerted by Earth's gravity can be used to calculate its mass. Astronomers can also calculate Earth's mass by observing the motion of orbiting satellites. Earth's average density can be determined through gravimetric experiments, which have historically involved pendulums. The mass of Earth is about . The average density of Earth is .
Layers
The structure of Earth can be defined in two ways: by mechanical properties such as rheology, or chemically. Mechanically, it can be divided into lithosphere, asthenosphere, mesospheric mantle, outer core, and the inner core. Chemically, Earth can be divided into the crust, upper mantle, lower mantle, outer core, and inner core. The geologic component layers of Earth are at increasing depths below the surface:
Crust and lithosphere
Earth's crust ranges from 5–70 km in depth and is the outermost layer. The thin parts are the oceanic crust, which underlies the ocean basins (5–10 km) and is mafic-rich (dense iron-magnesium silic
Document 3:::
In astronomy, astrophysics and geophysics, a mass concentration (or mascon) is a region of a planet's or moon's crust that contains a large positive gravity anomaly. In general, the word "mascon" can be used as a noun to refer to an excess distribution of mass on or beneath the surface of an astronomical body (compared to some suitable average), such as is found around Hawaii on Earth. However, this term is most often used to describe a geologic structure that has a positive gravitational anomaly associated with a feature (e.g. depressed basin) that might otherwise have been expected to have a negative anomaly, such as the "mascon basins" on the Moon.
Lunar and Martian mascons
The Moon is the most gravitationally "lumpy" major body known in the Solar System. Its largest mascons can cause a plumb bob to hang about a third of a degree off vertical, pointing toward the mascon, and increase the force of gravity by one-half percent.
Typical examples of mascon basins on the Moon are the Imbrium, Serenitatis, Crisium and Orientale impact basins, all of which exhibit significant topographic depressions and positive gravitational anomalies. Examples of mascon basins on Mars are the Argyre, Isidis, and Utopia basins. Theoretical considerations imply that a topographic low in isostatic equilibrium would exhibit a slight negative gravitational anomaly. Thus, the positive gravitational anomalies associated with these impact basins indicate that some form of positive density anomaly must exist within the crust or upper mantle that is currently supported by the lithosphere. One possibility is that these anomalies are due to dense mare basaltic lavas, which might reach up to 6 kilometers in thickness for the Moon. While these lavas certainly contribute to the observed gravitational anomalies, uplift of the crust-mantle interface is also required to account for their magnitude. Indeed, some mascon basins on the Moon do not appear to be associated with any signs of volcanic activit
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
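As a worked justification of this example, added here as a sketch rather than part of the original excerpt, the answer follows from the first law of thermodynamics:

```latex
% Adiabatic process: no heat exchange, so \delta Q = 0.
\begin{align*}
  dU &= \delta Q - p\,dV = -p\,dV,\\
  dU &= n C_V\,dT \quad \text{(ideal gas)},\\
  dT &= -\frac{p\,dV}{n C_V} < 0 \quad \text{for expansion } (dV > 0).
\end{align*}
```

The gas does work at the expense of its internal energy, so the temperature decreases.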
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Like earth, the moon has a distinct crust, mantle, and what else?
A. temperature
B. polarity
C. core
D. atmosphere
Answer:
|
|
sciq-6280
|
multiple_choice
|
What type of learning is done from past experiences and reasoning?
|
[
"rational learning",
"consequence learning",
"insight learning",
"transformation learning"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
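A minimal computational sketch of the closure property that makes such a family a knowledge space, added here with a hypothetical domain and hypothetical states: the union of any two feasible states must itself be feasible.

```python
# Check that a family of knowledge states is closed under union, one of the
# defining properties of a knowledge space. The domain Q and the states
# below are hypothetical examples.
from itertools import combinations

Q = frozenset({"counting", "addition", "multiplication"})

states = {
    frozenset(),                          # the empty state must be feasible
    frozenset({"counting"}),
    frozenset({"counting", "addition"}),
    Q,                                    # the full domain must be feasible
}

def is_union_closed(family) -> bool:
    """True if the union of any two states is itself a state."""
    return all(a | b in family for a, b in combinations(family, 2))

print(is_union_closed(states))  # True for this family
```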
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year schools, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 3:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 4:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of learning is done from past experiences and reasoning?
A. rational learning
B. consequence learning
C. insight learning
D. transformation learning
Answer:
|
|
sciq-8922
|
multiple_choice
|
Cody's velocity is zero, so he doesn't have what?
|
[
"momentum",
"temperature",
"weight",
"mass"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to serve as a proxy for a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams.
Course content
Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are:
Kinematics
Newton's laws of motion
Work, energy and power
Systems of particles and linear momentum
Circular motion and rotation
Oscillations and gravitation.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class.
This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals.
This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday aftern
Document 2:::
Conceptual physics is an approach to teaching physics that focuses on the ideas of physics rather than the mathematics. It is believed that with a strong conceptual foundation in physics, students are better equipped to understand the equations and formulas of physics, and to make connections between the concepts of physics and their everyday life. Early versions used almost no equations or math-based problems.
Paul G. Hewitt popularized this approach with his textbook Conceptual Physics: A New Introduction to your Environment in 1971. In his review at the time, Kenneth W. Ford noted the emphasis on logical reasoning and said "Hewitt's excellent book can be called physics without equations, or physics without computation, but not physics without mathematics." Hewitt's wasn't the first book to take this approach. Conceptual Physics: Matter in Motion by Jae R. Ballif and William E. Dibble was published in 1969. But Hewitt's book became very successful. As of 2022, it is in its 13th edition. In 1987 Hewitt wrote a version for high school students.
The spread of the conceptual approach to teaching physics broadened the range of students taking physics in high school. Enrollment in conceptual physics courses in high school grew from 25,000 students in 1987 to over 400,000 in 2009. In 2009, 37% of students took high school physics, and 31% of them were in Physics First, conceptual physics courses, or regular physics courses using a conceptual textbook.
This approach to teaching physics has also inspired books for science literacy courses, such as From Atoms to Galaxies: A Conceptual Physics Approach to Scientific Awareness by Sadri Hassani.
Document 3:::
A velocity potential is a scalar potential used in potential flow theory. It was introduced by Joseph-Louis Lagrange in 1788.
It is used in continuum mechanics, when a continuum occupies a simply-connected region and is irrotational. In such a case,
$$\nabla \times \mathbf{u} = \mathbf{0},$$
where $\mathbf{u}$ denotes the flow velocity. As a result, $\mathbf{u}$ can be represented as the gradient of a scalar function $\varphi$:
$$\mathbf{u} = \nabla \varphi;$$
$\varphi$ is known as a velocity potential for $\mathbf{u}$.
A velocity potential is not unique. If $\varphi$ is a velocity potential, then $\varphi + f(t)$ is also a velocity potential for $\mathbf{u}$, where $f(t)$ is a scalar function of time and can be constant. In other words, velocity potentials are unique up to a constant, or a function solely of the temporal variable.
The Laplacian of a velocity potential is equal to the divergence of the corresponding flow, $\nabla^2 \varphi = \nabla \cdot \mathbf{u}$. Hence if a velocity potential satisfies Laplace's equation, the flow is incompressible.
Unlike a stream function, a velocity potential can exist in three-dimensional flow.
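As a sanity check of these identities, here is an added sketch using SymPy's vector module with an arbitrarily chosen smooth potential: its gradient field is irrotational, and its divergence equals the Laplacian of the potential.

```python
# Symbolic verification: for u = grad(phi), curl(u) = 0 (irrotational flow)
# and div(u) equals the Laplacian of phi. The potential is an arbitrary
# smooth example, not taken from the text.
from sympy import sin, exp
from sympy.vector import CoordSys3D, gradient, curl, divergence

N = CoordSys3D("N")
phi = sin(N.x) * exp(N.y) + N.y * N.z**2   # arbitrary example potential
u = gradient(phi)                          # velocity field u = grad(phi)

print(curl(u))        # zero vector: the flow is irrotational
print(divergence(u))  # equals the Laplacian of phi
```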
Usage in acoustics
In theoretical acoustics, it is often desirable to work with the acoustic wave equation of the velocity potential $\varphi$ instead of pressure $p$ and/or particle velocity $\mathbf{u}$.
Solving the wave equation for either the $p$ field or the $\mathbf{u}$ field does not necessarily provide a simple answer for the other field. On the other hand, when $\varphi$ is solved for, not only is $\mathbf{u}$ found as given above, but $p$ is also easily found, from the (linearised) Bernoulli equation for irrotational and unsteady flow, as
$$p = -\rho \frac{\partial \varphi}{\partial t}.$$
See also
Vorticity
Hamiltonian fluid mechanics
Potential flow
Potential flow around a circular cylinder
Notes
External links
Joukowski Transform Interactive WebApp
Continuum mechanics
Physical quantities
Document 4:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Cody's velocity is zero, so he doesn't have what?
A. momentum
B. temperature
C. weight
D. mass
Answer:
|
|
sciq-3743
|
multiple_choice
|
What do we call something that pushes or pulls on an object?
|
[
"force",
"reaction",
"annoyance",
"friction"
] |
A
|
Relevant Documents:
Document 0:::
Mechanical load is the physical stress on a mechanical system or component. Loads can be static or dynamic. Some loads are specified as part of the design criteria of a mechanical system. Depending on the usage, some mechanical loads can be measured by an appropriate test method in a laboratory or in the field.
Vehicle
It can be the external mechanical resistance against which a machine (such as a motor or engine), acts. The load can often be expressed as a curve of force versus speed.
For instance, a given car traveling on a road of a given slope presents a load which the engine must act against. Because air resistance increases with speed, the motor must put out more torque at a higher speed in order to maintain the speed. By shifting to a higher gear, one may be able to meet the requirement with a higher torque and a lower engine speed, whereas shifting to a lower gear has the opposite effect. Accelerating increases the load, whereas decelerating decreases the load.
Pump
Similarly, the load on a pump depends on the head against which the pump is pumping, and on the size of the pump.
Fan
Similar considerations apply to a fan. See Affinity laws.
See also
Structural load
Physical test
Document 1:::
Surface force denoted fs is the force that acts across an internal or external surface element in a material body.
Normal forces and shear forces between objects are types of surface force. All cohesive forces and contact forces between objects are considered as surface forces.
Surface force can be decomposed into two perpendicular components: normal forces and shear forces. A normal force acts normally over an area and a shear force acts tangentially over an area.
Equations for surface force
Surface force due to pressure
f = p × A, where f = force, p = pressure, and A = area on which a uniform pressure acts
Examples
Pressure related surface force
Since pressure is force per unit area (a pascal is one newton per square metre) and area is measured in square metres,
a pressure of 1 Pa acting over an area of 1 m² will produce a surface force of 1 N.
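The same relation in code, an added one-liner sketch with illustrative values:

```python
# f = p * A for a uniform pressure acting over a flat area.
def surface_force(pressure_pa: float, area_m2: float) -> float:
    """Force in newtons from a uniform pressure (Pa) over an area (m^2)."""
    return pressure_pa * area_m2

print(surface_force(101_325.0, 2.0))  # ~202650 N: 1 atm acting over 2 m^2
```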
See also
Body force
Contact force
Document 2:::
In mechanics, friction torque is the torque caused by the frictional force that occurs when two objects in contact move. Like all torques, it is a rotational force that may be measured in newton meters or pounds-feet.
Engineering
Friction torque can be disruptive in engineering. There are a variety of measures engineers may choose to take to eliminate these disruptions. Ball bearings are an example of an attempt to minimize the friction torque.
Friction torque can also be an asset in engineering. Bolts and nuts, or screws are often designed to be fastened with a given amount of torque, where the friction is adequate during use or operation for the bolt, nut, or screw to remain safely fastened. This is true with such applications as lug nuts retaining wheels to vehicles, or equipment subjected to vibration with sufficiently well-attached bolts, nuts, or screws to prevent the vibration from shaking them loose.
Examples
When a cyclist applies the brake to the forward wheel, the bicycle tips forward due to the frictional torque between the wheel and the ground.
When a golf ball hits the ground it begins to spin in part because of the friction torque applied to the golf ball from the friction between the golf ball and the ground.
See also
Torque
Force
Engineering
Mechanics
Moment (physics)
Document 3:::
Most of the terms listed in Wikipedia glossaries are already defined and explained within Wikipedia itself. However, glossaries like this one are useful for looking up, comparing and reviewing large numbers of terms together. You can help enhance this page by adding new terms or writing definitions for existing ones.
This glossary of mechanical engineering terms pertains specifically to mechanical engineering and its sub-disciplines. For a broad overview of engineering, see glossary of engineering.
A
Abrasion – is the process of scuffing, scratching, wearing down, marring, or rubbing away. It can be intentionally imposed in a controlled process using an abrasive. Abrasion can be an undesirable effect of exposure to normal use or exposure to the elements.
Absolute zero – is the lowest possible temperature of a system, defined as zero kelvin or −273.15 °C. No experiment has yet measured a temperature of absolute zero.
Accelerated life testing – is the process of testing a product by subjecting it to conditions (stress, strain, temperatures, voltage, vibration rate, pressure etc.) in excess of its normal service parameters in an effort to uncover faults and potential modes of failure in a short amount of time. By analyzing the product's response to such tests, engineers can make predictions about the service life and maintenance intervals of a product.
Acceleration – In physics, acceleration is the rate of change of velocity of an object with respect to time. An object's acceleration is the net result of any and all forces acting on the object, as described by Newton's Second Law. The SI unit for acceleration is metre per second squared Accelerations are vector quantities (they have magnitude and direction) and add according to the parallelogram law. As a vector, the calculated net force is equal to the product of the object's mass (a scalar quantity) and its acceleration.
Accelerometer – is a device that measures proper acceleration. Proper acceleration, being
Document 4:::
In physics, work is the energy transferred to or from an object via the application of force along a displacement. In its simplest form, for a constant force aligned with the direction of motion, the work equals the product of the force strength and the distance traveled. A force is said to do positive work if when applied it has a component in the direction of the displacement of the point of application. A force does negative work if it has a component opposite to the direction of the displacement at the point of application of the force.
For example, when a ball is held above the ground and then dropped, the work done by the gravitational force on the ball as it falls is positive, and is equal to the weight of the ball (a force) multiplied by the distance to the ground (a displacement). If the ball is thrown upwards, the work done by the gravitational force is negative, and is equal to the weight multiplied by the displacement in the upwards direction.
Both force and displacement are vectors. The work done is given by the dot product of the two vectors. When the force is constant and the angle between the force and the displacement is also constant, then the work done is given by:
Work is a scalar quantity, so it has only magnitude and no direction. Work transfers energy from one place to another, or one form to another. The SI unit of work is the joule (J), the same unit as for energy.
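A small numerical illustration of the dot-product definition, added as a sketch with assumed mass and displacement values:

```python
# Work as the dot product of a constant force and a displacement, W = F . d.
import numpy as np

g = 9.81                          # m/s^2
m = 0.5                           # kg, assumed ball mass
weight = np.array([0.0, -m * g])  # gravitational force, N (y points up)

drop = np.array([0.0, -2.0])      # ball falls 2 m
lift = np.array([0.0, 2.0])       # ball is raised 2 m

print(np.dot(weight, drop))  # +9.81 J: gravity does positive work on the fall
print(np.dot(weight, lift))  # -9.81 J: gravity does negative work on the rise
```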
History
The ancient Greek understanding of physics was limited to the statics of simple machines (the balance of forces), and did not include dynamics or the concept of work. During the Renaissance the dynamics of the Mechanical Powers, as the simple machines were called, began to be studied from the standpoint of how far they could lift a load, in addition to the force they could apply, leading eventually to the new concept of mechanical work. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche (On Me
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do we call something that pushes or pulls on an object?
A. force
B. reaction
C. annoyance
D. friction
Answer:
|
|
sciq-11029
|
multiple_choice
|
Catabolic reactions involve breaking what?
|
[
"molecules",
"metals",
"levels",
"bonds"
] |
D
|
Relevant Documents:
Document 0:::
In molecular biology, biosynthesis is a multi-step, enzyme-catalyzed process where substrates are converted into more complex products in living organisms. In biosynthesis, simple compounds are modified, converted into other compounds, or joined to form macromolecules. This process often consists of metabolic pathways. Some of these biosynthetic pathways are located within a single cellular organelle, while others involve enzymes that are located within multiple cellular organelles. Examples of these biosynthetic pathways include the production of lipid membrane components and nucleotides. Biosynthesis is usually synonymous with anabolism.
The prerequisite elements for biosynthesis include: precursor compounds, chemical energy (e.g. ATP), and catalytic enzymes which may need coenzymes (e.g. NADH, NADPH). These elements create monomers, the building blocks for macromolecules. Some important biological macromolecules include: proteins, which are composed of amino acid monomers joined via peptide bonds, and DNA molecules, which are composed of nucleotides joined via phosphodiester bonds.
Properties of chemical reactions
Biosynthesis occurs due to a series of chemical reactions. For these reactions to take place, the following elements are necessary:
Precursor compounds: these compounds are the starting molecules or substrates in a reaction. These may also be viewed as the reactants in a given chemical process.
Chemical energy: chemical energy can be found in the form of high energy molecules. These molecules are required for energetically unfavorable reactions. Furthermore, the hydrolysis of these compounds drives a reaction forward. High energy molecules, such as ATP, have three phosphates. Often, the terminal phosphate is split off during hydrolysis and transferred to another molecule.
Catalysts: these may be for example metal ions or coenzymes and they catalyze a reaction by increasing the rate of the reaction and lowering the activation energy.
In the sim
Document 1:::
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles a
Document 2:::
Catabolism () is the set of metabolic pathways that breaks down molecules into smaller units that are either oxidized to release energy or used in other anabolic reactions. Catabolism breaks down large molecules (such as polysaccharides, lipids, nucleic acids, and proteins) into smaller units (such as monosaccharides, fatty acids, nucleotides, and amino acids, respectively). Catabolism is the breaking-down aspect of metabolism, whereas anabolism is the building-up aspect.
Cells use the monomers released from breaking down polymers to either construct new polymer molecules or degrade the monomers further to simple waste products, releasing energy. Cellular wastes include lactic acid, acetic acid, carbon dioxide, ammonia, and urea. The formation of these wastes is usually an oxidation process involving a release of chemical free energy, some of which is lost as heat, but the rest of which is used to drive the synthesis of adenosine triphosphate (ATP). This molecule acts as a way for the cell to transfer the energy released by catabolism to the energy-requiring reactions that make up anabolism.
Catabolism is a destructive metabolism and anabolism is a constructive metabolism. Catabolism, therefore, provides the chemical energy necessary for the maintenance and growth of cells. Examples of catabolic processes include glycolysis, the citric acid cycle, the breakdown of muscle protein in order to use amino acids as substrates for gluconeogenesis, the breakdown of fat in adipose tissue to fatty acids, and oxidative deamination of neurotransmitters by monoamine oxidase.
Catabolic hormones
There are many signals that control catabolism. Most of the known signals are hormones and the molecules involved in metabolism itself. Endocrinologists have traditionally classified many of the hormones as anabolic or catabolic, depending on which part of metabolism they stimulate. The so-called classic catabolic hormones known since the early 20th century are cortisol, glucagon, and
Document 3:::
The term amphibolic is used to describe a biochemical pathway that involves both catabolism and anabolism. Catabolism is a degradative phase of metabolism in which large molecules are converted into smaller and simpler molecules; it involves two types of reactions. First, hydrolysis reactions, in which molecules are broken apart into smaller molecules, releasing energy. Examples of catabolic reactions are digestion and cellular respiration, where sugars and fats are broken down for energy. Breaking down a protein into amino acids, or a triglyceride into fatty acids, or a disaccharide into monosaccharides are all hydrolysis or catabolic reactions. Second, oxidation reactions, which involve the removal of hydrogens and electrons from an organic molecule. Anabolism is the biosynthesis phase of metabolism in which smaller simple precursors are converted to large and complex molecules of the cell. Anabolism has two classes of reactions. The first are dehydration synthesis reactions; these involve the joining of smaller molecules together to form larger, more complex molecules. These include the formation of carbohydrates, proteins, lipids and nucleic acids. The second are reduction reactions, in which hydrogens and electrons are added to a molecule. Whenever that is done, molecules gain energy.
The term amphibolic was proposed by B. Davis in 1961 to emphasise the dual metabolic role of such pathways. These pathways are considered to be central metabolic pathways which provide, from catabolic sequences, the intermediates which form the substrate of the metabolic processes.
Reactions exist as amphibolic pathway
All the reactions associated with the synthesis of biomolecules converge into the following pathways: glycolysis, the Krebs cycle and the electron transport chain. These exist as amphibolic pathways, meaning that they can function anabolically as well as catabolically.
Other important amphibolic pathways are the Embden-Meyerhof pathway and the pentose phosphate pathway.
Document 4:::
This is a list of articles that describe particular biomolecules or types of biomolecules.
A
For substances with an A- or α- prefix such as α-amylase, please see the parent page (in this case Amylase).
A23187 (Calcimycin, Calcium Ionophore)
Abamectine
Abietic acid
Acetic acid
Acetylcholine
Actin
Actinomycin D
Adenine
Adenosine
Adenosine diphosphate (ADP)
Adenosine monophosphate (AMP)
Adenosine triphosphate (ATP)
Adenylate cyclase
Adiponectin
Adonitol
Adrenaline, epinephrine
Adrenocorticotropic hormone (ACTH)
Aequorin
Aflatoxin
Agar
Alamethicin
Alanine
Albumins
Aldosterone
Aleurone
Alpha-amanitin
Alpha-MSH (Melanocyte-stimulating hormone)
Allantoin
Allethrin
α-Amanitin, see Alpha-amanitin
Amino acid
Amylase (also see α-amylase)
Anabolic steroid
Anandamide (ANA)
Androgen
Anethole
Angiotensinogen
Anisomycin
Antidiuretic hormone (ADH)
Anti-Müllerian hormone (AMH)
Arabinose
Arginine
Argonaute
Ascomycin
Ascorbic acid (vitamin C)
Asparagine
Aspartic acid
Asymmetric dimethylarginine
ATP synthase
Atrial-natriuretic peptide (ANP)
Auxin
Avidin
Azadirachtin A – C35H44O16
B
Bacteriocin
Beauvericin
beta-Hydroxy beta-methylbutyric acid
beta-Hydroxybutyric acid
Bicuculline
Bilirubin
Biopolymer
Biotin (Vitamin H)
Brefeldin A
Brassinolide
Brucine
Butyric acid
C
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Catabolic reactions involve breaking what?
A. molecules
B. metals
C. levels
D. bonds
Answer:
|
|
sciq-2041
|
multiple_choice
|
What is the term for the maintenance of a steady state despite internal and external changes?
|
[
"equilibrium",
"homeostasis",
"consciousness",
"hypothesis"
] |
B
|
Relevant Documents:
Document 0:::
In chemistry, a steady state is a situation in which all state variables are constant in spite of ongoing processes that strive to change them. For an entire system to be at steady state, i.e. for all state variables of a system to be constant, there must be a flow through the system (compare mass balance). A simple example of such a system is the case of a bathtub with the tap running but with the drain unplugged: after a certain time, the water flows in and out at the same rate, so the water level (the state variable Volume) stabilizes and the system is in a steady state.
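This flow balance is easy to simulate. The following is a minimal sketch, not from the source; the inflow rate, the drain coefficient k, and the time step are assumed values chosen only for illustration:

# Minimal sketch of the bathtub steady state: constant inflow, level-dependent outflow.
# All parameters (inflow, k, dt, steps) are illustrative assumptions, not values from the text.
def simulate_bathtub(inflow=2.0, k=0.5, dt=0.01, steps=2000):
    volume = 0.0                       # state variable: volume of water in the tub
    for _ in range(steps):
        outflow = k * volume           # the drain empties faster as the level rises
        volume += (inflow - outflow) * dt
    return volume

print(simulate_bathtub())              # ~4.0, where inflow == outflow (2.0 == 0.5 * 4.0)

Because the outflow grows with the level while the inflow is fixed, the level settles exactly where the two flows cancel, which is the steady state described above.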
The steady state concept is different from chemical equilibrium. Although both may create a situation where a concentration does not change, in a system at chemical equilibrium, the net reaction rate is zero (products transform into reactants at the same rate as reactants transform into products), while no such limitation exists in the steady state concept. Indeed, there does not have to be a reaction at all for a steady state to develop.
The term steady state is also used to describe a situation where some, but not all, of the state variables of a system are constant. For such a steady state to develop, the system does not have to be a flow system. Therefore, such a steady state can develop in a closed system where a series of chemical reactions take place. Literature in chemical kinetics usually refers to this case, calling it steady state approximation.
In simple systems the steady state is approached by state variables gradually decreasing or increasing until they reach their steady state value. In more complex systems state variables might fluctuate around the theoretical steady state either forever (a limit cycle) or gradually coming closer and closer. It theoretically takes an infinite time to reach steady state, just as it takes an infinite time to reach chemical equilibrium.
Both concepts are, however, frequently used approximations because of the substantial mathematical simplifications they offer.
Document 1:::
In systems theory, a system or a process is in a steady state if the variables (called state variables) which define the behavior of the system or the process are unchanging in time. In continuous time, this means that for those properties p of the system, the partial derivative with respect to time is zero and remains so:

∂p/∂t = 0 for all t
In discrete time, it means that the first difference of each property is zero and remains so:

p(t) − p(t − 1) = 0 for all t
The concept of a steady state has relevance in many fields, in particular thermodynamics, economics, and engineering. If a system is in a steady state, then the recently observed behavior of the system will continue into the future. In stochastic systems, the probabilities that various states will be repeated will remain constant. See, for example, the conversion of a linear difference equation to homogeneous form for the derivation of the steady state.
In many systems, a steady state is not achieved until some time after the system is started or initiated. This initial situation is often identified as a transient state, start-up or warm-up period. For example, while the flow of fluid through a tube or electricity through a network could be in a steady state because there is a constant flow of fluid or electricity, a tank or capacitor being drained or filled with fluid is a system in transient state, because its volume of fluid changes with time.
Often, a steady state is approached asymptotically. An unstable system is one that diverges from the steady state. See, for example, the stability of linear difference equations.
In chemistry, a steady state is a more general situation than dynamic equilibrium. While a dynamic equilibrium occurs when two or more reversible processes occur at the same rate, and such a system can be said to be in a steady state, a system that is in a steady state may not necessarily be in a state of dynamic equilibrium, because some of the processes involved are not reversible.
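To make the contrast concrete, here is a hedged sketch of a closed reversible reaction A <-> B relaxing to dynamic equilibrium, where the forward and reverse rates become equal; the rate constants kf and kr and the starting concentrations are assumptions, not values from the text:

# Closed reversible reaction A <-> B approaching dynamic equilibrium.
def closed_system(kf=1.0, kr=0.5, a=1.0, b=0.0, dt=0.001, steps=20000):
    for _ in range(steps):
        net = kf * a - kr * b          # net forward rate; zero at equilibrium
        a -= net * dt
        b += net * dt
    return a, b

print(closed_system())                 # ~(0.333, 0.667): kf*a == kr*b, so b/a == kf/kr

In an open flow-through system, by contrast, concentrations can hold constant even when the individual processes are not reversible.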
Applications
Economics
A steady state economy is an economy (especially a national economy, but possibly that of a city, a region, or the world) of stable size.
Document 2:::
In biochemistry, steady state refers to the maintenance of constant internal concentrations of molecules and ions in the cells and organs of living systems. Living organisms remain at a dynamic steady state where their internal composition at both cellular and gross levels are relatively constant, but different from equilibrium concentrations. A continuous flux of mass and energy results in the constant synthesis and breakdown of molecules via chemical reactions of biochemical pathways. Essentially, steady state can be thought of as homeostasis at a cellular level.
Maintenance of steady state
Metabolic regulation achieves a balance between the rate of input of a substrate and the rate that it is degraded or converted, and thus maintains steady state. The rate of metabolic flow, or flux, is variable and subject to metabolic demands. However, in a metabolic pathway, steady state is maintained by balancing the rate of substrate provided by a previous step and the rate that the substrate is converted into product, keeping substrate concentration relatively constant.
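As a worked illustration of this flux balancing (a sketch only; the supply rate and rate constants below are assumed, not drawn from the text), a linear two-step pathway settles to constant intermediate concentrations once the rate through each step matches the supply:

# Illustrative sketch (assumed rates) of a linear pathway: v_in -> S1 -> S2 -> product.
def pathway(v_in=3.0, k1=1.5, k2=0.75, s1=0.0, s2=0.0, dt=0.001, steps=20000):
    for _ in range(steps):
        v1 = k1 * s1                   # rate of the step converting S1 into S2
        v2 = k2 * s2                   # rate of the step consuming S2
        s1 += (v_in - v1) * dt         # supply minus consumption holds S1 steady
        s2 += (v1 - v2) * dt
    return s1, s2

print(pathway())                       # ~(2.0, 4.0): each pool settles at v_in / k

At steady state the flux through every step equals v_in, while the pool sizes stay fixed at v_in/k1 and v_in/k2.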
Thermodynamically speaking, living organisms are open systems, meaning that they constantly exchange matter and energy with their surroundings. A constant supply of energy is required for maintaining steady state, as maintaining a constant concentration of a molecule preserves internal order and thus is entropically unfavorable. When a cell dies and no longer utilizes energy, its internal composition will proceed toward equilibrium with its surroundings.
In some occurrences, it is necessary for cells to adjust their internal composition in order to reach a new steady state. Cell differentiation, for example, requires specific protein regulation that allows the differentiating cell to meet new metabolic requirements.
ATP
The concentration of ATP must be kept above equilibrium level so that the rates of ATP-dependent biochemical reactions meet metabolic demands. A decrease in ATP will result in a decre
Document 3:::
A glossary of terms relating to systems theory.
A
Adaptive capacity: An important part of the resilience of systems in the face of a perturbation, helping to minimise loss of function in individual human, and collective social and biological systems.
Allopoiesis: The process whereby a system produces something other than the system itself.
Allostasis: The process of achieving stability, or homeostasis, through physiological or behavioral change.
Autopoiesis: The process by which a system regenerates itself through the self-reproduction of its own elements and of the network of interactions that characterize them. An autopoietic system renews, repairs, and replicates or reproduces itself in a flow of matter and energy. Note: from a strictly Maturanian point of view, autopoiesis is an essential property of biological/living systems.
B
Black box: A technical term for a device or system or object when it is viewed primarily in terms of its input and output characteristics, without observing or describing its internal structure or behaviour.
Boundaries: The parametric conditions, often vague, always subjectively stipulated, that delimit and define a system and set it apart from its environment.
C
Cascading failure: Failure in a system of interconnected parts, where the service provided depends on the operation of a preceding part, and the failure of a preceding part can trigger the failure of successive parts.
Closed system: A system which can exchange energy (as heat or work), but not matter, with its surroundings.
Complexity: A complex system is characterised by components that interact in multiple ways and follow local rules. A complicated system is characterised by its layers.
Culture: The result of individual learning processes that distinguish one social group of higher animals from another. In humans culture is the set of interrelated concepts, products and activities through which humans group themselves, interact with each other, and become aware o
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
1. increases
2. decreases
3. stays the same
4. Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for the maintenance of a steady state despite internal and external changes?
A. equilibrium
B. homeostasis
C. consciousness
D. hypothesis
Answer:
|
|
sciq-8530
|
multiple_choice
|
What results when birth rates fall even lower than death rates?
|
[
"disease",
"negative growth rate",
"mutation",
"increased growth rate"
] |
B
|
Relevant Documents:
Document 0:::
The Empty Cradle: How Falling Birthrates Threaten World Prosperity (And What To Do About It) is a 2004 book by Phillip Longman of the New America Foundation about declining birthrates around the world, the challenges that Longman believes will accompany it, and strategies to overcome those challenges.
Reception
Media appearances and interviews
Longman appeared in a National Public Radio program Talk of the Nation debating Peter Kostmayer about the thesis of his book.
Reviews
Richard N. Cooper reviewed the book briefly for Foreign Affairs, writing that Longman's concern about falling birthrates is at odds with many people's concerns about overpopulation, and also noting that Longman believed that specific government policies were responsible for the lower birthrates.
Spengler reviewed the book for Asia Times, concluding "The reader must fall back on his argument that faith, not pecuniary calculation, will motivate today's prospective parents. The reproductive power of an increasingly Christian United States will enhance the strategic position of the US over the next two generations, leaving infertile Western Europe to sink slowly into insignificance."
Albert Mohler, president of the Southern Baptist Theological Seminary, reviewed the book on his personal website. He concluded: "His research is certain to spark fierce debate and spirited discussion. In the final analysis, doesn’t it make sense that those who see children as gifts from God would have more children than those who see children as economic cost units? How could anyone be surprised?"
Bill Muehlenberg reviewed the book on his own blog.
See also
What to Expect When No One's Expecting by Jonathan V. Last
Document 1:::
The Fertility Transition in Iran: Revolution and Reproduction is a 2009 book by Meimanat Hosseini-Chavoshi, Peter McDonald and Mohammad Jalal Abbasi-Shavazi in which the authors examine the fertility rate changes in the Islamic Republic of Iran.
The book was awarded Iran's Book of the Year Award.
Document 2:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 3:::
An underrepresented group describes a subset of a population that holds a smaller percentage within a significant subgroup than the subset holds in the general population. Specific characteristics of an underrepresented group vary depending on the subgroup being considered.
Underrepresented groups in STEM
United States
Underrepresented groups in science, technology, engineering, and mathematics in the United States include women and some minorities. In the United States, women made up 50% of the college-educated workers in 2010, but only 28% of the science and engineering workers. Other underrepresented groups in science and engineering included African Americans, Native Americans, Alaskan Natives, and Hispanics, who collectively formed 26% of the population, but accounted for only 10% of the science and engineering workers. A 2015 study found that women make up just 26% of the computing workforce and 12% of the engineering workforce; African American, Hispanic, and Native American women are especially underrepresented in these industries (McBride & McBride, 2018).
Underrepresented groups in computing, a subset of the STEM fields, include Hispanics, and African-Americans. In the United States in 2015, Hispanics were 15% of the population and African-Americans were 13%, but their representation in the workforces of major tech companies in technical positions typically runs less than 5% and 3%, respectively. Similarly, women, providing approximately 50% of the general population, typically comprise less than 20% of the technology and leadership positions in the major technology companies. When it comes to the engineering and computing workforce, which accounts for more than 80% of STEM jobs, women remain dramatically underrepresented, as documented in the American Association of University Women's (AAUW) recent research report Solving the Equation: The Variables for Women's Success in Engineering and Computing (McBride & McBride, 2018). Women were underrepres
Document 4:::
Dropping out refers to leaving high school, college, university or another group for practical reasons, necessities, inability, apathy, or disillusionment with the system from which the individual in question leaves.
Canada
In Canada, most individuals graduate from grade 12 by the age of 18, according to Jason Gilmore, who collects data on employment and education using the Labour Force Survey (LFS), the official survey used to collect unemployment data in Canada (2010). Using this tool, a dropout rate can be calculated from educational attainment and school attendance (Gilmore, 2010). The LFS found that by 2009, one in twelve 20- to 24-year-old adults did not have a high school diploma (Gilmore, 2010). The study also found that men still have higher dropout rates than women, and that students outside of major cities and in the northern territories have a higher risk of dropping out. Although dropout rates fell from 20% in 1990 to a low of 9% in 2010, the rate does not appear to have declined further since then (2010).
The average Canadian dropout earns $70 less per week than their peers with a high school diploma. Graduates (without post-secondary) earned an average of $621 per week, whereas dropout students earned an average of $551 (Gilmore, 2010).
Even though dropout rates have gone down in the last 20 to 25 years, the concerns of the impact dropping out has on the labour market are very real (Gilmore, 2010). One in four students without a high school diploma who was in the labour market in 2009-2010 had less likelihood of finding a job due to economic downturn (Gilmore, 2010).
In 2018, graduation rates at universities within Canada were as low as 44% (Macleans, 2018). This is almost half of the student population (Macleans, 2018). There tends to be an increase in students dropping out as a result of feeling disconnected from their school community (Binfet et al., 2016). This is most common with students within their first two year
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What results when birth rates fall even lower than death rates?
A. disease
B. negative growth rate
C. mutation
D. increased growth rate
Answer:
|
|
scienceQA-11139
|
multiple_choice
|
Which of the following organisms is the secondary consumer in this food web?
|
[
"orca",
"sea otter",
"sea urchin",
"phytoplankton"
] |
B
|
Secondary consumers eat primary consumers, and primary consumers eat producers. So, in a food web, secondary consumers have arrows pointing to them from primary consumers. Primary consumers have arrows pointing to them from producers.
The phytoplankton does not have any arrows pointing to it. So, the phytoplankton is not a secondary consumer.
The sea urchin has an arrow pointing to it from the kelp. The kelp is not a primary consumer. So, the sea urchin is not a secondary consumer.
The sea otter has an arrow pointing to it from the sea urchin. The sea urchin is a primary consumer, so the sea otter is a secondary consumer.
The orca has an arrow pointing to it from the sea otter. The sea otter is not a primary consumer. So, the orca is not a secondary consumer.
The kelp bass has arrows pointing to it from the zooplankton and the plainfin midshipman. The zooplankton and the plainfin midshipman are primary consumers, so the kelp bass is a secondary consumer.
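The arrow-chasing procedure in this explanation is mechanical enough to express as code. The sketch below is hypothetical: the food web diagram itself is not reproduced in this record, so the edge list simply encodes the eats-relationships the explanation asserts, and the plainfin midshipman's link to a producer is an assumption:

# Hypothetical edge list (eater -> prey) matching the roles stated in the explanation.
eats = {
    "sea urchin": ["kelp"],
    "zooplankton": ["phytoplankton"],
    "plainfin midshipman": ["phytoplankton"],   # assumed producer link; diagram not shown
    "sea otter": ["sea urchin"],
    "kelp bass": ["zooplankton", "plainfin midshipman"],
    "orca": ["sea otter"],
}
producers = {"kelp", "phytoplankton"}

def is_primary(organism):
    # a primary consumer has an arrow pointing to it from at least one producer
    return any(prey in producers for prey in eats.get(organism, []))

def is_secondary(organism):
    # a secondary consumer has an arrow pointing to it from at least one primary consumer
    return any(is_primary(prey) for prey in eats.get(organism, []))

print([o for o in eats if is_secondary(o)])     # ['sea otter', 'kelp bass']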
|
Relevant Documents:
Document 0:::
Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food.
Classification of consumer types
The standard categorization
Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores; omnivores eat both meat and plants. An extensive classification of consumer categories based on a list of feeding behaviors exists.
The Getz categorization
Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken down plant and animal) matter. It also distinguishes between consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage.
In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology. Thus a bestivore, such as a cat, preys on live animal
Document 1:::
The trophic level of an organism is the position it occupies in a food web. A food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finish with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a food "web". Ecological communities with higher biodiversity form more complex trophic paths.
The word trophic derives from the Greek τροφή (trophē) referring to food or nourishment.
History
The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman).
Overview
The three basic ways in which organisms get food are as producers, consumers, and decomposers.
Producers (autotrophs) are typically plants or algae. Plants and algae do not usually eat other organisms, but pull nutrients from the soil or the ocean and manufacture their own food using photosynthesis. For this reason, they are called primary producers. In this way, it is energy from the sun that usually powers the base of the food chain. An exception occurs in deep-sea hydrothermal ecosystems, where there is no sunlight. Here primary producers manufacture food through a process called chemosynthesis.
Consumers (heterotrophs) are species that cannot manufacture their own food and need to consume other organisms. Animals that eat primary producers (like plants) are called herbivores. Animals that eat other animals are called carnivores, and animals that eat both plants and other animals are called omnivores.
Decomposers (detritivores) break down dead plant and animal material and wastes and release it again as energy and nutrients into
Document 2:::
The soil food web is the community of organisms living all or part of their lives in the soil. It describes a complex living system in the soil and how it interacts with the environment, plants, and animals.
Food webs describe the transfer of energy between species in an ecosystem. While a food chain examines one, linear, energy pathway through an ecosystem, a food web is more complex and illustrates all of the potential pathways. Much of this transferred energy comes from the sun. Plants use the sun’s energy to convert inorganic compounds into energy-rich, organic compounds, turning carbon dioxide and minerals into plant material by photosynthesis. Plant flowers exude energy-rich nectar above ground and plant roots exude acids, sugars, and ectoenzymes into the rhizosphere, adjusting the pH and feeding the food web underground.
Plants are called autotrophs because they make their own energy; they are also called producers because they produce energy available for other organisms to eat. Heterotrophs are consumers that cannot make their own food. In order to obtain energy they eat plants or other heterotrophs.
Above ground food webs
In above ground food webs, energy moves from producers (plants) to primary consumers (herbivores) and then to secondary consumers (predators). The phrase, trophic level, refers to the different levels or steps in the energy pathway. In other words, the producers, consumers, and decomposers are the main trophic levels. This chain of energy transferring from one species to another can continue several more times, but eventually ends. At the end of the food chain, decomposers such as bacteria and fungi break down dead plant and animal material into simple nutrients.
Methodology
The nature of soil makes direct observation of food webs difficult. Since soil organisms range in size from less than 0.1 mm (nematodes) to greater than 2 mm (earthworms) there are many different ways to extract them. Soil samples are often taken using a metal
Document 3:::
A nutrient is a substance used by an organism to survive, grow, and reproduce. The requirement for dietary nutrient intake applies to animals, plants, fungi, and protists. Nutrients can be incorporated into cells for metabolic purposes or excreted by cells to create non-cellular structures, such as hair, scales, feathers, or exoskeletons. Some nutrients can be metabolically converted to smaller molecules in the process of releasing energy, such as for carbohydrates, lipids, proteins, and fermentation products (ethanol or vinegar), leading to end-products of water and carbon dioxide. All organisms require water. Essential nutrients for animals are the energy sources, some of the amino acids that are combined to create proteins, a subset of fatty acids, vitamins and certain minerals. Plants require more diverse minerals absorbed through roots, plus carbon dioxide and oxygen absorbed through leaves. Fungi live on dead or living organic matter and meet nutrient needs from their host.
Different types of organisms have different essential nutrients. Ascorbic acid (vitamin C) is essential, meaning it must be consumed in sufficient amounts, to humans and some other animal species, but some animals and plants are able to synthesize it. Nutrients may be organic or inorganic: organic compounds include most compounds containing carbon, while all other chemicals are inorganic. Inorganic nutrients include nutrients such as iron, selenium, and zinc, while organic nutrients include, among many others, energy-providing compounds and vitamins.
A classification used primarily to describe nutrient needs of animals divides nutrients into macronutrients and micronutrients. Consumed in relatively large amounts (grams or ounces), macronutrients (carbohydrates, fats, proteins, water) are primarily used to generate energy or to incorporate into tissues for growth and repair. Micronutrients are needed in smaller amounts (milligrams or micrograms); they have subtle biochemical and physiologi
Document 4:::
A piscivore is a carnivorous animal that primarily eats fish. The name piscivore is derived from the Latin piscis, meaning 'fish'. Piscivore is equivalent to the Greek-derived word ichthyophage, both of which mean "fish eater". Fish were the diet of early tetrapod evolution (via water-bound amphibians during the Devonian period); insectivory came next; then in time, the more terrestrially adapted reptiles and synapsids evolved herbivory.
Almost all predatory fishes (most sharks, tuna, billfishes, pikes etc.) are obligate piscivores. Some non-piscine aquatic animals, such as whales, sea lions, and crocodilians, are not completely piscivorous, often also preying on invertebrates, marine mammals, waterbirds and even wading land animals in addition to fish, while others, such as the bulldog bat and gharial, are strictly dependent on fish for food.
The ecological effects of piscivores can extend to other food chains. In a study of cutthroat trout stocking, researchers found that the addition of this piscivore can have noticeable effects on non-aquatic organisms, in this case bats feeding on insects emerging from the water with the trout.
Another study done on lionfish removal to maintain low densities used piscivore densities as a biological indicator for coral reef success.
There exist classifications of primary and secondary piscivores. Primary piscivores, also known as "specialists", shift to this habit in the first few months of their lives. Secondary piscivores will move to eating primarily fish later in their lifetime. It is hypothesized that the secondary piscivores' diet change is due to an adaptation to maintain efficiency in their use of energy while growing.
Examples of extant pis
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which of the following organisms is the secondary consumer in this food web?
A. orca
B. sea otter
C. sea urchin
D. phytoplankton
Answer:
|
sciq-3173
|
multiple_choice
|
What kind of reproduction results in offspring that are genetically unique?
|
[
"budding",
"asexual reproduction",
"fragmentation",
"sexual reproduction"
] |
D
|
Relevant Documents:
Document 0:::
Sexual reproduction is a type of reproduction that involves a complex life cycle in which a gamete (haploid reproductive cells, such as a sperm or egg cell) with a single set of chromosomes combines with another gamete to produce a zygote that develops into an organism composed of cells with two sets of chromosomes (diploid). This is typical in animals, though the number of chromosome sets and how that number changes in sexual reproduction varies, especially among plants, fungi, and other eukaryotes.
Sexual reproduction is the most common life cycle in multicellular eukaryotes, such as animals, fungi and plants. Sexual reproduction also occurs in some unicellular eukaryotes. Sexual reproduction does not occur in prokaryotes, unicellular organisms without cell nuclei, such as bacteria and archaea. However, some processes in bacteria, including bacterial conjugation, transformation and transduction, may be considered analogous to sexual reproduction in that they incorporate new genetic information. Some proteins and other features that are key for sexual reproduction may have arisen in bacteria, but sexual reproduction is believed to have developed in an ancient eukaryotic ancestor.
In eukaryotes, diploid precursor cells divide to produce haploid cells in a process called meiosis. In meiosis, DNA is replicated to produce a total of four copies of each chromosome. This is followed by two cell divisions to generate haploid gametes. After the DNA is replicated in meiosis, the homologous chromosomes pair up so that their DNA sequences are aligned with each other. During this period before cell divisions, genetic information is exchanged between homologous chromosomes in genetic recombination. Homologous chromosomes contain highly similar but not identical information, and by exchanging similar but not identical regions, genetic recombination increases genetic diversity among future generations.
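A small bookkeeping sketch can make the copy-number arithmetic of this passage explicit, showing how one round of replication followed by two divisions yields four haploid gametes; the human diploid number 2n = 46 is an assumed input used only for illustration:

# Chromosome copy counts through meiosis; 2n = 46 (human) is an assumed example input.
def meiosis(diploid_number=46):
    chromatids = diploid_number * 2            # S phase: replication doubles every chromosome
    chromatids_after_I = chromatids // 2       # meiosis I: homologs separate into two cells
    chromosomes_per_gamete = chromatids // 4   # meiosis II: sister chromatids separate
    return chromatids_after_I, chromosomes_per_gamete

print(meiosis())   # (46, 23): two divisions turn one diploid cell into four haploid gametes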
During sexual reproduction, two haploid gametes combine into one diploid ce
Document 1:::
In biology, offspring are the young creation of living organisms, produced either by a single organism or, in the case of sexual reproduction, two organisms. Collective offspring may be known as a brood or progeny in a more general way. This can refer to a set of simultaneous offspring, such as the chicks hatched from one clutch of eggs, or to all the offspring, as with the honeybee.
Human offspring (descendants) are referred to as children (without reference to age, thus one can refer to a parent's "minor children" or "adult children" or "infant children" or "teenage children" depending on their age); male children are sons and female children are daughters (see kinship). Offspring can occur after mating or after artificial insemination.
Overview
An offspring, also known as a child or the F1 generation, carries genes from both the father and the mother, who together make up the parent generation. Each offspring contains numerous genes coding for specific tasks and properties. Males and females contribute equally to the genotypes of their offspring, which form when gametes fuse. An important structure in this inheritance is the chromosome, a strand of DNA that contains many genes.
Relevant to how the F1 generation is formed is an inheritance pattern called sex linkage, in which a gene is located on a sex chromosome, so its pattern of inheritance differs between males and females. Evidence that offspring carry genes from both parental generations comes from crossing over, in which segments are exchanged between the paternally and maternally derived homologous chromosomes during meiosis, after which the chromosomes are split evenly among the gametes. Depending on which
Document 2:::
Microgametogenesis is the process in plant reproduction where a microgametophyte develops in a pollen grain to the three-celled stage of its development. In flowering plants it occurs with a microspore mother cell inside the anther of the plant.
When the microgametophyte is first formed inside the pollen grain four sets of fertile cells called sporogenous cells are apparent. These cells are surrounded by a wall of sterile cells called the tapetum, which supplies food to the cell and eventually becomes the cell wall for the pollen grain. These sets of sporogenous cells eventually develop into diploid microspore mother cells. These microspore mother cells, also called microsporocytes, then undergo meiosis and become four microspore haploid cells. These new microspore cells then undergo mitosis and form a tube cell and a generative cell. The generative cell then undergoes mitosis one more time to form two male gametes, also called sperm.
See also
Gametogenesis
Document 3:::
Gametogenesis is a biological process by which diploid or haploid precursor cells undergo cell division and differentiation to form mature haploid gametes. Depending on the biological life cycle of the organism, gametogenesis occurs by meiotic division of diploid gametocytes into various gametes, or by mitosis. For example, plants produce gametes through mitosis in gametophytes. The gametophytes grow from haploid spores after sporic meiosis. The existence of a multicellular, haploid phase in the life cycle between meiosis and gametogenesis is also referred to as alternation of generations.
In other words, gametogenesis is the process by which haploid or diploid precursor cells divide and differentiate to create mature haploid gametes. Depending on an organism's biological life cycle, it can take place either through mitosis or through the meiotic division of diploid gametocytes. For instance, gametophytes in plants undergo mitosis to produce gametes. Males and females have different forms of gametogenesis.
In animals
Animals produce gametes directly through meiosis from diploid mother cells in organs called gonads (testis in males and ovaries in females). In mammalian germ cell development, primordial germ cells differentiate from pluripotent cells during initial mammalian development and later give rise to sexually dimorphic gametes. Males and females of a species that reproduce sexually have different forms of gametogenesis:
spermatogenesis (male): Immature germ cells are produced in a man's testes. To mature into sperm, males' immature germ cells, or spermatogonia, go through spermatogenesis during adolescence. Spermatogonia are diploid cells that become larger as they divide through mitosis, becoming primary spermatocytes. These diploid cells undergo meiotic division to create secondary spermatocytes. These secondary spermatocytes undergo a second meiotic division to produce immature sperm, or spermatids. These spermatids undergo spermiogenesis in order to develop into sperm. LH, FSH, GnRH
Document 4:::
In biology and genetics, the germline is the population of a multicellular organism's cells that pass on their genetic material to the progeny (offspring). In other words, they are the cells that form the egg, sperm and the fertilised egg. They are usually differentiated to perform this function and segregated in a specific place away from other bodily cells.
As a rule, this passing-on happens via a process of sexual reproduction; typically it is a process that includes systematic changes to the genetic material, changes that arise during recombination, meiosis and fertilization for example. However, there are many exceptions across multicellular organisms, including processes and concepts such as various forms of apomixis, autogamy, automixis, cloning or parthenogenesis. The cells of the germline are called germ cells. For example, gametes such as a sperm and an egg are germ cells. So are the cells that divide to produce gametes, called gametocytes, the cells that produce those, called gametogonia, and all the way back to the zygote, the cell from which an individual develops.
In sexually reproducing organisms, cells that are not in the germline are called somatic cells. According to this view, mutations, recombinations and other genetic changes in the germline may be passed to offspring, but a change in a somatic cell will not be. This need not apply to somatically reproducing organisms, such as some Porifera and many plants. For example, many varieties of citrus, plants in the Rosaceae and some in the Asteraceae, such as Taraxacum, produce seeds apomictically when somatic diploid cells displace the ovule or early embryo.
In an earlier stage of genetic thinking, there was a clear distinction between germline and somatic cells. For example, August Weismann proposed and pointed out, a germline cell is immortal in the sense that it is part of a lineage that has reproduced indefinitely since the beginning of life and, barring accident, could continue doing so indef
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What kind of reproduction results in offspring that are genetically unique?
A. budding
B. asexual reproduction
C. fragmentation
D. sexual reproduction
Answer:
|
|
sciq-478
|
multiple_choice
|
The development of a head region is called what?
|
[
"cephalization",
"spore",
"cocklebur",
"trichina"
] |
A
|
Relevant Documents:
Document 0:::
The protocerebrum is the first segment of the panarthropod brain.
Recent studies suggest that it comprises two regions.
Region associated with the expression of six3
six3 is a transcription factor that marks the anteriormost part of the developing body in a whole host of Metazoa.
In the panarthropod brain, the anteriormost (rostralmost) part of the germband expresses six3. This region is described as medial, and corresponds to the annelid prostomium.
In arthropods, it contains the pars intercerebralis and pars lateralis.
six3 is associated with the euarthropod labrum and the onychophoran frontal appendages (antennae).
Region associated with the expression of orthodenticle
The other region expresses homologues of orthodenticle, Otx or otd. This region is more caudal and lateral, and bears the eyes.
Orthodenticle is associated with the protocerebral bridge, part of the central complex, traditionally a marker of the prosocerebrum.
In the annelid brain, Otx expression characterises the peristomium, but also creeps forwards into the regions of the prostomium that bear the larval eyes.
Names of regions
Inconsistent use of the terms archicerebrum and the prosocerebrum makes them confusing.
The regions were defined by Siewing (1963): the archicerebrum as containing the ocular lobes and the mushroom bodies (= corpora pedunculata), and the prosocerebrum as comprising the central complex.
The archicerebrum has traditionally been equated with the anteriormost, 'non-segmental' part of the protocerebrum, equivalent to the acron in older terminology.
The prosocerebrum is then equivalent to the 'segmental' part of the protocerebrum, bordered by segment polarity genes such as engrailed, and (on one interpretation) bearing modified segmental appendages (= camera-type eyes).
But Urbach and Technau (2003) complicate the matter by seeing the prosocerebrum (central complex) + labrum as the anteriormost region
Strausfeld 2016 identifies the anteriormost part of the b
Document 1:::
Cephalization is an evolutionary trend in which, over many generations, the mouth, sense organs, and nerve ganglia become concentrated at the front end of an animal, producing a head region. This is associated with movement and bilateral symmetry, such that the animal has a definite head end. This led to the formation of a highly sophisticated brain in three groups of animals, namely the arthropods, cephalopod molluscs, and vertebrates.
Animals without bilateral symmetry
Cnidaria, such as the radially symmetrical Hydrozoa, show some degree of cephalization. The Anthomedusae have a head end with their mouth, photoreceptive cells, and a concentration of neural cells.
Bilateria
Cephalization is a characteristic feature of the Bilateria, a large group containing the majority of animal phyla. These have the ability to move, using muscles, and a body plan with a front end that encounters stimuli first as the animal moves forwards, and accordingly has evolved to contain many of the body's sense organs, able to detect light, chemicals, and gravity. There is often also a collection of nerve cells able to process the information from these sense organs, forming a brain in several phyla and one or more ganglia in others.
Acoela
The Acoela are basal bilaterians, part of the Xenacoelomorpha. They are small and simple animals, and have very slightly more nerve cells at the head end than elsewhere, not forming a distinct and compact brain. This represents an early stage in cephalization.
Flatworms
The Platyhelminthes (flatworms) have a more complex nervous system than the Acoela, and are lightly cephalized, for instance having an eyespot above the brain, near the front end.
Complex active bodies
The philosopher Michael Trestman noted that three bilaterian phyla, namely the arthropods, the molluscs in the shape of the cephalopods, and the chordates, were distinctive in having "complex active bodies", something that the acoels and flatworms did not have. Any such animal, whe
Document 2:::
This glossary of developmental biology is a list of definitions of terms and concepts commonly used in the study of developmental biology and related disciplines in biology, including embryology and reproductive biology, primarily as they pertain to vertebrate animals and particularly to humans and other mammals. The developmental biology of invertebrates, plants, fungi, and other organisms is treated in other articles; e.g. terms relating to the reproduction and development of insects are listed in Glossary of entomology, and those relating to plants are listed in Glossary of botany.
This glossary is intended as introductory material for novices; for more specific and technical detail, see the article corresponding to each term. Additional terms relevant to vertebrate reproduction and development may also be found in Glossary of biology, Glossary of cell biology, Glossary of genetics, and Glossary of evolutionary biology.
See also
Introduction to developmental biology
Outline of developmental biology
Outline of cell biology
Glossary of biology
Glossary of cell biology
Glossary of genetics
Glossary of evolutionary biology
Document 3:::
In medicine, heterotopia is the presence of a particular tissue type at a non-physiological site, but usually co-existing with original tissue in its correct anatomical location. In other words, it implies ectopic tissue, in addition to retention of the original tissue type.
Examples
In neuropathology, for example, gray matter heterotopia is the presence of gray matter within the cerebral white matter or ventricles. Heterotopia within the brain is often divided into three groups: subependymal heterotopia, focal cortical heterotopia and band heterotopia. Another example is a Meckel's diverticulum, which may contain heterotopic gastric or pancreatic tissue.
In biology specifically, heterotopy refers to an altered location of trait expression. The cover of Mary Jane West-Eberhard's book Developmental Plasticity and Evolution shows a sulphur-crested cockatoo, and the back cover asks: "Did long crest [head] feathers evolve by gradual modification of ancestral head feathers? Or are they descendants of wing feathers, developmentally transplanted onto the head?" This question sets the tone for the rest of her book, which goes into depth about developmental novelties and their relation to evolution. Heterotopy is a somewhat obscure but well demonstrated example of how developmental change can lead to novel forms. The central concept is that a feature seen in one area of an organism has had its location changed in evolutionary lineages.
Heterotopy in molecular biology
Heterotopy in molecular biology is the name given to the expression or placement of a gene product from what is typically found in one area to another area. It can also be further expanded to a subtle form of exaptation where a gene product used for one underlying purpose in a diverse group of organisms can re-emerge repeatedly to produce seemingly paraphyletic distributions of traits. But actual phylogenetic analysis supports a monophyletic model as does evolutionary theory. Heterotopy is used to ex
Document 4:::
In anatomy, a lobe is a clear anatomical division or extension of an organ (as seen for example in the brain, lung, liver, or kidney) that can be determined without the use of a microscope at the gross anatomy level. This is in contrast to the much smaller lobule, which is a clear division only visible under the microscope.
Interlobar ducts connect lobes and interlobular ducts connect lobules.
Examples of lobes
The four main lobes of the brain
the frontal lobe
the parietal lobe
the occipital lobe
the temporal lobe
The three lobes of the human cerebellum
the flocculonodular lobe
the anterior lobe
the posterior lobe
The two lobes of the thymus
The two and three lobes of the lungs
Left lung: superior and inferior
Right lung: superior, middle, and inferior
The four lobes of the liver
Left lobe of liver
Right lobe of liver
Quadrate lobe of liver
Caudate lobe of liver
The renal lobes of the kidney
Earlobes
Examples of lobules
the cortical lobules of the kidney
the testicular lobules of the testis
the lobules of the mammary gland
the pulmonary lobules of the lung
the lobules of the thymus
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The development of a head region is called what?
A. cephalization
B. spore
C. cocklebur
D. trichina
Answer:
|
|
sciq-1917
|
multiple_choice
|
How many centimeters are in a meter?
|
[
"50",
"200",
"100",
"1000"
] |
C
|
Relevant Documents:
Document 0:::
Further Mathematics is the title given to a number of advanced secondary mathematics courses. The term "Higher and Further Mathematics", and the term "Advanced Level Mathematics", may also refer to any of several advanced mathematics courses at many institutions.
In the United Kingdom, Further Mathematics describes a course studied in addition to the standard mathematics AS-Level and A-Level courses. In the state of Victoria in Australia, it describes a course delivered as part of the Victorian Certificate of Education (see § Australia (Victoria) for a more detailed explanation). Globally, it describes a course studied in addition to GCE AS-Level and A-Level Mathematics, or one which is delivered as part of the International Baccalaureate Diploma.
In other words, more mathematics can also be referred to as part of advanced mathematics, or advanced level math.
United Kingdom
Background
A qualification in Further Mathematics involves studying both pure and applied modules. Whilst the pure modules (formerly known as Pure 4–6 or Core 4–6, now known as Further Pure 1–3, where 4 exists for the AQA board) build on knowledge from the core mathematics modules, the applied modules may start from first principles.
The structure of the qualification varies between exam boards.
With regard to Mathematics degrees, most universities do not require Further Mathematics, and may incorporate foundation math modules or offer "catch-up" classes covering any additional content. Exceptions are the University of Warwick, the University of Cambridge which requires Further Mathematics to at least AS level; University College London requires or recommends an A2 in Further Maths for its maths courses; Imperial College requires an A in A level Further Maths, while other universities may recommend it or may promise lower offers in return. Some schools and colleges may not offer Further mathematics, but online resources are available
Although the subject has about 60% of its cohort obtainin
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
1. increases
2. decreases
3. stays the same
4. Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Mathematics education in the United States varies considerably from one state to the next, and even within a single state. However, with the adoption of the Common Core Standards in most states and the District of Columbia beginning in 2010, mathematics content across the country has moved into closer agreement for each grade level. The SAT, a standardized university entrance exam, has been reformed to better reflect the contents of the Common Core. However, many students take alternatives to the traditional pathways, including accelerated tracks. As of 2023, twenty-seven states require students to pass three math courses before graduation from high school, and seventeen states and the District of Columbia require four.
Compared to other developed countries in the Organisation for Economic Co-operation and Development (OECD), the average level of mathematical literacy of American students is mediocre. As in many other countries, math scores dropped even further during the COVID-19 pandemic. Secondary-school algebra proves to be the turning point of difficulty many students struggle to surmount, and as such, many students are ill-prepared for collegiate STEM programs, or future high-skilled careers. Meanwhile, the number of eighth-graders enrolled in Algebra I has fallen between the early 2010s and early 2020s. Across the United States, there is a shortage of qualified mathematics instructors. Despite their best intentions, parents may transmit their mathematical anxiety to their children, who may also have school teachers who fear mathematics. About one in five American adults are functionally innumerate. While an overwhelming majority agree that mathematics is important, many, especially the young, are not confident of their own mathematical ability.
Curricular content and standards
Each U.S. state sets its own curricular standards, and details are usually set by each local school district. Although there are no federal standards, since 2015 most states have bas
Document 3:::
The Mathematics Subject Classification (MSC) is an alphanumerical classification scheme that has collaboratively been produced by staff of, and based on the coverage of, the two major mathematical reviewing databases, Mathematical Reviews and Zentralblatt MATH. The MSC is used by many mathematics journals, which ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The current version is MSC2020.
Structure
The MSC is a hierarchical scheme, with three levels of structure. A classification can be two, three or five digits long, depending on how many levels of the classification scheme are used.
The first level is represented by a two-digit number, the second by a letter, and the third by another two-digit number. For example:
53 is the classification for differential geometry
53A is the classification for classical differential geometry
53A45 is the classification for vector and tensor analysis
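Because the scheme is strictly positional, an MSC code can be split into its levels mechanically. Below is a minimal sketch in Python; the parse_msc helper is hypothetical, and special hyphenated codes such as 53-XX are deliberately out of scope:

```python
import re

# Hypothetical helper: split an MSC code into its three levels,
# following the two-digit / letter / two-digit pattern described above.
# Hyphenated codes (e.g. 53-XX) are not handled in this sketch.
def parse_msc(code: str) -> dict:
    m = re.fullmatch(r"(\d{2})(?:([A-Z])(\d{2})?)?", code)
    if not m:
        raise ValueError(f"not a valid MSC code: {code!r}")
    top, second, third = m.groups()
    return {"top": top, "second": second, "third": third}

print(parse_msc("53"))     # {'top': '53', 'second': None, 'third': None}
print(parse_msc("53A"))    # {'top': '53', 'second': 'A', 'third': None}
print(parse_msc("53A45"))  # {'top': '53', 'second': 'A', 'third': '45'}
```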
First level
At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. In addition to the typical areas of mathematical research, there are top-level categories for "History and Biography", "Mathematics Education", and for the overlap with different sciences. Physics (i.e. mathematical physics) is particularly well represented in the classification scheme with a number of different categories including:
Fluid mechanics
Quantum mechanics
Geophysics
Optics and electromagnetic theory
All valid MSC classification codes must have at least the first-level identifier.
Second level
The second-level codes are a single letter from the Latin alphabet. These represent specific areas covered by the first-level discipline. The second-level codes vary from discipline to discipline.
For example, for differential geometry, the top-level code is 53, and the second-level codes are:
A for classical differential geometry
B for local differential geometry
C for glo
Document 4:::
Additional Mathematics is a qualification in mathematics, commonly taken by students in high school (or by GCSE exam takers in the United Kingdom). It features a range of problems set out in a different format and with wider content than the standard Mathematics qualification at the same level.
Additional Mathematics in Singapore
In Singapore, Additional Mathematics is an optional subject offered to pupils in secondary school—specifically those who have an aptitude in Mathematics and are in the Normal (Academic) stream or Express stream. The syllabus covered is more in-depth than that of Elementary Mathematics, with additional topics including binomial expansion in algebra, proofs in plane geometry, differential calculus and integral calculus. Additional Mathematics is also a prerequisite for students who are intending to offer H2 Mathematics and H2 Further Mathematics at A-level (if they choose to enter a Junior College after secondary school). Students without Additional Mathematics at the 'O' level will usually be offered H1 Mathematics instead.
Examination Format
The syllabus was updated starting with the 2021 batch of candidates. There are two written papers, each comprising half of the weightage towards the subject. Each paper is 2 hours 15 minutes long and worth 90 marks. Paper 1 has 12 to 14 questions, while Paper 2 has 9 to 11 questions. Generally, Paper 2 would have a graph plotting question based on linear law.
GCSE Additional Mathematics in Northern Ireland
In Northern Ireland, Additional Mathematics was offered as a GCSE subject by the local examination board, CCEA. There were two examination papers: one which tested topics in Pure Mathematics, and one which tested topics in Mechanics and Statistics. It was discontinued in 2014 and replaced with GCSE Further Mathematics—a new qualification whose level exceeds both those offered by GCSE Mathematics, and the analogous qualifications offered in England.
Further Maths IGCSE and Additional Maths FSMQ in England
Starting from
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How many centimeters are in a meter?
A. 50
B. 200
C. 100
D. 1000
Answer:
|
|
sciq-6207
|
multiple_choice
|
The smallest units of matter that retain the unique properties of an element are known as what?
|
[
"protons",
"molecules",
"atoms",
"neutrons"
] |
C
|
Relevant Documents:
Document 0:::
The subatomic scale is the domain of physical size that encompasses objects smaller than an atom. It is the scale at which the atomic constituents, such as the nucleus containing protons and neutrons, and the electrons in their orbitals, become apparent.
The subatomic scale includes the many thousands of times smaller subnuclear scale, which is the scale of physical size at which constituents of the protons and neutrons - particularly quarks - become apparent.
See also
Astronomical scale the opposite end of the spectrum
Subatomic particles
Document 1:::
The elementary charge, usually denoted by e, is a fundamental physical constant, defined as the electric charge carried by a single proton or, equivalently, the magnitude of the negative electric charge carried by a single electron, which has charge −1 e.
In the SI system of units, the value of the elementary charge is exactly defined as e = 1.602176634×10⁻¹⁹ coulombs, or 160.2176634 zeptocoulombs (zC). Since the 2019 redefinition of SI base units, the seven SI base units are defined by seven fundamental physical constants, of which the elementary charge is one.
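A quick numerical check of these figures (a sketch using only the exact SI value quoted above):

```python
# The elementary charge in coulombs is exact by the 2019 SI redefinition.
e = 1.602176634e-19  # C

# Expressed in zeptocoulombs (1 zC = 1e-21 C):
print(f"e = {e / 1e-21:.7f} zC")  # -> e = 160.2176634 zC

# Number of elementary charges that make up one coulomb:
print(f"1 C = {1 / e:.9e} elementary charges")
```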
In the centimetre–gram–second system of units (CGS), the corresponding quantity is approximately 4.80320471×10⁻¹⁰ statcoulombs.
Robert A. Millikan and Harvey Fletcher's oil drop experiment first directly measured the magnitude of the elementary charge in 1909, differing from the modern accepted value by just 0.6%. Under assumptions of the then-disputed atomic theory, the elementary charge had also been indirectly inferred to ~3% accuracy from blackbody spectra by Max Planck in 1901 and (through the Faraday constant) at order-of-magnitude accuracy by Johann Loschmidt's measurement of the Avogadro number in 1865.
As a unit
In some natural unit systems, such as the system of atomic units, e functions as the unit of electric charge. The use of elementary charge as a unit was promoted by George Johnstone Stoney in 1874 for the first system of natural units, called Stoney units. Later, he proposed the name electron for this unit. At the time, the particle we now call the electron was not yet discovered and the difference between the particle electron and the unit of charge electron was still blurred. Later, the name electron was assigned to the particle and the unit of charge e lost its name. However, the unit of energy electronvolt (eV) is a remnant of the fact that the elementary charge was once called electron.
In other natural unit systems, the unit of charge is defined as √(ε₀ħc), with the result that
e = √(4πα) √(ε₀ħc) ≈ 0.30282212088 √(ε₀ħc),
where α is the fine-structure constant, c is the speed of light, ε₀ is
Document 2:::
The mass recorded by a mass spectrometer can refer to different physical quantities depending on the characteristics of the instrument and the manner in which the mass spectrum is displayed.
Units
The dalton (symbol: Da) is the standard unit that is used for indicating mass on an atomic or molecular scale (atomic mass). The unified atomic mass unit (symbol: u) is equivalent to the dalton. One dalton is approximately the mass of a single proton or neutron. The unified atomic mass unit has a value of 1.66053906660×10⁻²⁷ kg. The amu without the "unified" prefix is an obsolete unit based on oxygen, which was replaced in 1961.
Molecular mass
The molecular mass (abbreviated Mr) of a substance, formerly also called molecular weight and abbreviated as MW, is the mass of one molecule of that substance, relative to the unified atomic mass unit u (equal to 1/12 the mass of one atom of 12C). Due to this relativity, the molecular mass of a substance is commonly referred to as the relative molecular mass, and abbreviated to Mr.
Average mass
The average mass of a molecule is obtained by summing the average atomic masses of the constituent elements. For example, the average mass of natural water with formula H2O is 1.00794 + 1.00794 + 15.9994 = 18.01528 Da.
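A minimal sketch of this bookkeeping in Python, using the average atomic masses quoted above:

```python
# Average molecular mass as the sum of average atomic masses (in Da).
average_atomic_mass = {"H": 1.00794, "O": 15.9994}

water = ["H", "H", "O"]  # H2O
m_water = sum(average_atomic_mass[atom] for atom in water)
print(f"H2O: {m_water:.5f} Da")  # -> H2O: 18.01528 Da
```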
Mass number
The mass number, also called the nucleon number, is the number of protons and neutrons in an atomic nucleus. The mass number is unique for each isotope of an element and is written either after the element name or as a superscript to the left of an element's symbol. For example, carbon-12 (12C) has 6 protons and 6 neutrons.
Nominal mass
The nominal mass for an element is the mass number of its most abundant naturally occurring stable isotope, and for an ion or molecule, the nominal mass is the sum of the nominal masses of the constituent atoms. Isotope abundances are tabulated by IUPAC: for example carbon has two stable isotopes 12C at 98.9% natural abundance and 13C at 1.1% natural abundance, thus the nominal mass of carbon i
Document 3:::
The atomic mass (ma or m) is the mass of an atom. Although the SI unit of mass is the kilogram (symbol: kg), atomic mass is often expressed in the non-SI unit dalton (symbol: Da) – equivalently, unified atomic mass unit (u). 1 Da is defined as 1/12 of the mass of a free carbon-12 atom at rest in its ground state. The protons and neutrons of the nucleus account for nearly all of the total mass of atoms, with the electrons and nuclear binding energy making minor contributions. Thus, the numeric value of the atomic mass when expressed in daltons has nearly the same value as the mass number. Conversion between mass in kilograms and mass in daltons can be done using the atomic mass constant mu.
The formula used for conversion is:
1 Da = mu = Mu/NA = M(12C)/(12 NA),
where Mu is the molar mass constant, NA is the Avogadro constant, and M(12C) is the experimentally determined molar mass of carbon-12.
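A numerical sketch of the conversion (NA is exact since 2019; Mu is taken here as exactly 1 g/mol, which is accurate to about one part in 10⁹):

```python
# Convert 1 Da to kilograms via m_u = M_u / N_A.
N_A = 6.02214076e23  # mol^-1, Avogadro constant (exact)
M_u = 1.0e-3         # kg/mol, molar mass constant (approximate)

m_u = M_u / N_A
print(f"1 Da ~ {m_u:.9e} kg")  # -> ~1.660539067e-27 kg
```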
The relative isotopic mass (see section below) can be obtained by dividing the atomic mass ma of an isotope by the atomic mass constant mu yielding a dimensionless value. Thus, the atomic mass of a carbon-12 atom is 12 Da by definition, but the relative isotopic mass of a carbon-12 atom is simply 12. The sum of relative isotopic masses of all atoms in a molecule is the relative molecular mass.
The atomic mass of an isotope and the relative isotopic mass refers to a certain specific isotope of an element. Because substances are usually not isotopically pure, it is convenient to use the elemental atomic mass which is the average (mean) atomic mass of an element, weighted by the abundance of the isotopes. The dimensionless (standard) atomic weight is the weighted mean relative isotopic mass of a (typical naturally occurring) mixture of isotopes.
The atomic mass of atoms, ions, or atomic nuclei is slightly less than the sum of the masses of their constituent protons, neutrons, and electrons, due to binding energy mass loss (per E = mc²).
Relative isotopic mass
Relative isotopic mass (a property of a single atom) is not to be confused w
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The smallest units of matter that retain the unique properties of an element are known as what?
A. protons
B. molecules
C. atoms
D. neutrons
Answer:
|
|
sciq-5630
|
multiple_choice
|
Bones are far from static, or unchanging. Instead, they are what?
|
[
"continuous",
"stable",
"fluid",
"dynamic"
] |
D
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The American Society of Biomechanics (ASB) is a scholarly society that focuses on biomechanics across a variety of academic fields. It was founded in 1977 by a group of scientists and clinicians. The ASB holds an annual conference as an arena to disseminate and learn about the most recent progress in the field, to distribute awards to recognize excellent work, and to engage in public outreach to expand the impact of its members.
Conferences
The society hosts an annual conference that takes place in North America (usually USA). These conferences are periodically joint conferences held in conjunction with the International Society of Biomechanics (ISB), the North American Congress on Biomechanics (NACOB), and the World Congress of Biomechanics (WCB). The annual conference, when not partnered with another conference, receives around 700 to 800 abstract submissions per year, with attendees in approximately the same numbers. The first conference was held in 1977.
Often, work presented at these conferences achieves media attention due to the ‘public interest’ nature of the findings or that new devices are introduced there. Examples include:
the effect of tablet reading on cervical spine posture;
the squeak of the basketball shoe;
‘underwear’ to address back-pain;
recovery after exercise;
exoskeleton boots for joint pain during exercise;
how flamingos stand on one leg.
National Biomechanics Day
The ASB is instrumental in promoting National Biomechanics Day (NBD), which has received international recognition.
In New Zealand, Massey University attracted NZ$48,000 of national funding through the Unlocking Curious Minds programme to promote National Biomechanics Day, with the aim to engage 1,100 students from lower-decile schools in an experiential learning day focused on the science of biomechanics.
It was first held in 2016 on April 7, and consisted of ‘open house’ visits from middle and high school students to biomechanics research and teaching laboratories a
Document 2:::
Kinanthropometry is defined as the study of human size, shape, proportion, composition, maturation, and gross function, in order to understand growth, exercise, performance, and nutrition.
It is a scientific discipline that is concerned with the measurement of individuals in a variety of morphological perspectives, its application to movement and those factors which influence movement, including: components of body build, body measurements, proportions, composition, shape and maturation; motor abilities and cardiorespiratory capacities; physical activity including recreational activity as well as highly specialized sports performance. The predominant focus is upon obtaining detailed measurements upon the body composition of a given person.
Kinanthropometry is the interface between human anatomy and movement. It is the application of a series of measurements made on the body and from these we can use the data that we gather directly or perform calculations using the data to produce various indices and body composition predictions and to measure and describe physique.
Kinanthropometry is an unfamiliar word to most people outside the field of sport science. Describing its etymology helps illustrate simply what the field is about. However, summarizing its general scope in just a few sentences raises problems immediately: Is it a science? Why are its central definitions so ambiguous and varied? What does kinanthropometric assessment really tell us? And so on.
Defining a particular aim for kinanthropometry is central to understanding it fully. Ross et al. (1972) said: “K is a scientific discipline that studies the body size, the proportionality, the performance of movement, the body composition and principal functions of the body.” This much-cited definition is not completely exact, as the last four words show. What are the kinanthropometric methods that truly tell us something about prin
Document 3:::
The Mechanostat is a term describing the way in which mechanical loading influences bone structure by changing the mass (amount of bone) and architecture (its arrangement) to provide a structure that resists habitual loads with an economical amount of material. As changes in the skeleton are accomplished by the processes of formation (bone growth) and resorption (bone loss), the mechanostat models the effect of influences on the skeleton by those processes, through their effector cells, osteocytes, osteoblasts, and osteoclasts. The term was invented by Harold Frost, an orthopaedic surgeon and researcher, and described extensively in articles referring to Frost and Webster Jee's Utah Paradigm of Skeletal Physiology in the 1960s. The Mechanostat is often defined as a practical description of Wolff's law described by Julius Wolff (1836–1902), but this is not completely accurate. Wolff wrote his treatises on bone after images of bone sections were described by Culmann and von Meyer, who suggested that the arrangement of the struts (trabeculae) at the ends of the bones was aligned with the stresses experienced by the bone. It has since been established that the static methods used for those calculations of lines of stress were inappropriate for work on what were, in effect, curved beams, a finding described by Lance Lanyon, a leading researcher in the area, as "a triumph of a good idea over mathematics." While Wolff pulled together the work of Culmann and von Meyer, it was the German scientist Roux who first used the term "functional adaptation" to describe the way that the skeleton optimized itself for its function, though Wolff is credited by many for that.
According to the Mechanostat, bone growth and bone loss are stimulated by the local, mechanical, elastic deformation of bone. The reason for the elastic deformation of bone is the peak forces caused by muscles (e.g. measurable using mechanography). The adaptation (feed-back control loop) of bone according to the maximu
Document 4:::
Kinaesthetics (or kinesthetics, in American English) is the study of body motion, and of the perception (both conscious and unconscious) of one's own body motions. Kinesthesis is the learning of movements that an individual commonly performs. The individual must repeat the motions that they are trying to learn and perfect many times for this to happen. While kinesthesis may be described as "muscle memory", muscles do not store memory; rather, it is the proprioceptors giving the information from muscles to the brain. To do this, the individual must have a sense of the position of their body and how that changes throughout the motor skill they are trying to perform. While performing the motion the body will use receptors in the muscles to transfer information to the brain to tell the brain about what the body is doing. Then after completing the same motor skill numerous times, the brain will begin to remember the motion based on the position of the body at a given time. Then after learning the motion the body will be able to perform the motor skill even when usual senses are inhibited, such as the person closing their eyes. The body will perform the motion based on the information that is stored in the brain from previous attempts at the same movement. This is possible because the brain has formed connections between the location of body parts in space (the body uses perception to learn where their body is in space) and the subsequent movements that commonly follow these positions. It becomes almost an instinct. The person does not need to even think about what they are doing to perfect the skill; they have done it so many times that it feels effortless and requires little to no thought. When the kinesthetic system has learned a motor skill proficiently, it will be able to work even when one's vision is limited. The perception of continuous movement (kinesthesia) is largely unconscious. A conscious proprioception is achieved through increased awareness. Kinaesthetics
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Bones are far from static, or unchanging. Instead, they are what?
A. continuous
B. stable
C. fluid
D. dynamic
Answer:
|
|
sciq-1913
|
multiple_choice
|
Common forms of what include light, chemical and heat, along with kinetic and potential?
|
[
"pressure",
"reactions",
"energy",
"fuel"
] |
C
|
Relavent Documents:
Document 0:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary educ
Document 3:::
Applied physics is the application of physics to solve scientific or engineering problems. It is usually considered a bridge or a connection between physics and engineering.
"Applied" is distinguished from "pure" by a subtle combination of factors, such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. Applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of scientific principles in practical devices and systems and with the application of physics in other areas of science and high technology.
Examples of research and development areas
Accelerator physics
Acoustics
Atmospheric physics
Biophysics
Brain–computer interfacing
Chemistry
Chemical physics
Differentiable programming
Artificial intelligence
Scientific computing
Engineering physics
Chemical engineering
Electrical engineering
Electronics
Sensors
Transistors
Materials science and engineering
Metamaterials
Nanotechnology
Semiconductors
Thin films
Mechanical engineering
Aerospace engineering
Astrodynamics
Electromagnetic propulsion
Fluid mechanics
Military engineering
Lidar
Radar
Sonar
Stealth technology
Nuclear engineering
Fission reactors
Fusion reactors
Optical engineering
Photonics
Cavity optomechanics
Lasers
Photonic crystals
Geophysics
Materials physics
Medical physics
Health physics
Radiation dosimetry
Medical imaging
Magnetic resonance imaging
Radiation therapy
Microscopy
Scanning probe microscopy
Atomic force microscopy
Scanning tunneling microscopy
Scanning electron microscopy
Transmission electron microscopy
Nuclear physics
Fission
Fusion
Optical physics
Nonlinear optics
Quantum optics
Plasma physics
Quantum technology
Quantum computing
Quantum cryptography
Renewable energy
Space physics
Spectroscopy
See also
Applied science
Applied mathematics
Engineering
Engineering Physics
High Technology
Document 4:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
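The defining closure property is easy to check in code. The sketch below is hypothetical: it verifies that a family of states contains the empty state and the full domain and is closed under union, which is the usual definition of a knowledge space:

```python
from itertools import combinations

def is_knowledge_space(domain, states):
    """Check the union-closure definition of a knowledge space."""
    fam = {frozenset(s) for s in states}
    if frozenset() not in fam or frozenset(domain) not in fam:
        return False
    # Closed under union: the union of any two states is again a state.
    return all(a | b in fam for a, b in combinations(fam, 2))

Q = {"counting", "addition", "multiplication"}
K = [set(), {"counting"}, {"counting", "addition"}, Q]
print(is_knowledge_space(Q, K))  # -> True
```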
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Common forms of what include light, chemical and heat, along with kinetic and potential?
A. pressure
B. reactions
C. energy
D. fuel
Answer:
|
|
sciq-9528
|
multiple_choice
|
Who has proposed that cigarette advertising in all media be banned entirely?
|
[
"parents",
"schools",
"teachers",
"antismoking groups"
] |
D
|
Relevant Documents:
Document 0:::
The scientific community in the United States and Europe are primarily concerned with the possible effect of electronic cigarette use on public health. There is concern among public health experts that e-cigarettes could renormalize smoking, weaken measures to control tobacco, and serve as a gateway for smoking among youth. The public health community is divided over whether to support e-cigarettes, because their safety and efficacy for quitting smoking is unclear. Many in the public health community acknowledge the potential for their quitting smoking and decreasing harm benefits, but there remains a concern over their long-term safety and potential for a new era of users to get addicted to nicotine and then tobacco. There is concern among tobacco control academics and advocates that prevalent universal vaping "will bring its own distinct but as yet unknown health risks in the same way tobacco smoking did, as a result of chronic exposure", among other things.
Medical organizations differ in their views about the health implications of vaping and avoid releasing statements about the relative toxicity of electronic cigarettes because of the many different device types, liquid formulations, and new devices that come onto the market. Some healthcare groups and policy makers have hesitated to recommend e-cigarettes with nicotine for quitting smoking, despite some evidence of effectiveness (when compared to Nicotine Replacement Therapy or e-cigarettes without nicotine) and safety. Reasons for hesitancy include challenges ensuring that quality control measures on the devices and liquids are met, unknown second hand vapour inhalation effects, uncertainty about EC use leading to the initiation of smoking or effects on people new to smoking who develop nicotine dependence, unknown long-term effects of electronic cigarette use on human health, uncertainty about the effects of ECs on smoking regulations and smoke free legislation measures, and uncertainty about involvement of
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. The hope is that an interest sparked in primary school leads secondary school pupils to choose science A levels, which in turn lead to science careers. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 3:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 4:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Who has proposed that cigarette advertising in all media be banned entirely?
A. parents
B. schools
C. teachers
D. antismoking groups
Answer:
|
|
ai2_arc-593
|
multiple_choice
|
In certain species of plants, purple flowers (P) are dominant to white flowers (p). If two heterozygous plants are crossed, what will be the phenotype of the offspring?
|
[
"100% purple flowers",
"100% white flowers",
"75% white flowers, 25% purple flowers",
"25% white flowers, 75% purple flowers"
] |
D
|
Relevant Documents:
Document 0:::
A monohybrid cross is a cross between two organisms with different variations at one genetic locus of interest. The character(s) being studied in a monohybrid cross are governed by two or multiple variations for a single location of a gene.
To carry out such a cross, each parent is chosen to be homozygous or true breeding for a given trait (locus). When a cross satisfies the conditions for a monohybrid cross, it is usually detected by a characteristic distribution of second-generation (F2) offspring that is sometimes called the monohybrid ratio.
Usage
Generally, the monohybrid cross is used to determine the dominance relationship between two alleles. The cross begins with the parental generation. One parent is homozygous for one allele, and the other parent is homozygous for the other allele. The offspring make up the first filial (F1) generation. Every member of the F1 generation is heterozygous and the phenotype of the F1 generation expresses the dominant trait. Crossing two members of the F1 generation produces the second filial (F2) generation. Probability theory predicts that three quarters of the F2 generation will have the dominant allele's phenotype, and the remaining quarter will have the recessive allele's phenotype. This predicted 3:1 phenotypic ratio assumes Mendelian inheritance.
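The 3:1 prediction can be reproduced by enumerating the Punnett square directly. A minimal sketch (the allele symbols R/r are illustrative):

```python
from collections import Counter
from itertools import product

# F1 x F1 cross: each heterozygous parent contributes one allele per gamete.
parent1 = parent2 = ("R", "r")

genotypes = Counter(
    "".join(sorted(pair)) for pair in product(parent1, parent2)
)
print(genotypes)  # -> Counter({'Rr': 2, 'RR': 1, 'rr': 1})

# The dominant allele R masks r, giving the 3:1 phenotypic ratio.
phenotypes = Counter(
    "dominant" if "R" in g else "recessive" for g in genotypes.elements()
)
print(phenotypes)  # -> Counter({'dominant': 3, 'recessive': 1})
```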
Mendel's experiment with peas (Pisum sativum)
Gregor Mendel (1822–1884) was an Austrian monk who theorized basic rules of inheritance. From 1858 to 1866, he bred garden peas (Pisum sativum) in his monastery garden and analyzed the offspring of these matings. The garden pea was chosen as an experimental organism because many varieties were available that bred true for qualitative traits and their pollination could be manipulated. The seven variable characteristics Mendel investigated in pea plants were:
seed texture (round vs wrinkled)
seed color (yellow vs green)
flower color (white vs purple)
growth habit (tall vs dwarf)
pod shape (pinched or inf
Document 1:::
Quantitative trait loci mapping or QTL mapping is the process of identifying genomic regions that potentially contain genes responsible for important economic, health or environmental characters. Mapping QTLs is an important activity that plant breeders and geneticists routinely use to associate potential causal genes with phenotypes of interest. Family-based QTL mapping is a variant of QTL mapping where multiple-families are used.
Pedigree in humans and wheat
Pedigree information includes information about ancestry. Keeping pedigree records is a centuries-old tradition. Pedigrees can also be verified using gene-marker data.
In plants
The method has been discussed in the context of plant breeding populations. Pedigree records are kept by plant breeders, and pedigree-based selection is popular in several plant species. Plant pedigrees are different from those of humans, particularly as plants are hermaphroditic – an individual can be male or female and mating can be performed in random combinations, with inbreeding loops. Also, plant pedigrees may contain "selfs", i.e. offspring resulting from self-pollination of a plant.
Pedigree denotation
SIMPLE CROSS SYMBOLS
Symbol   Meaning              Example
/        first order cross    SON 64/KLRE
//       second order cross   IR 64/KLRE // CIAN0
/3/      third order cross    TOBS /3/ SON 64/KLRE // CIAN0
/4/      fourth order cross   TOBS /3/ SON 64/KLRE // CIAN0 /4/ SEE
/n/      nth order cross
BACK CROSS SYMBOL
*n       n is the number of times the backcross parent is used. Written on the left side of the simple cross symbol, the backcross parent is the female; on the right side, the male.
Document 2:::
Under the law of dominance in genetics, an individual expressing a dominant phenotype could contain either two copies of the dominant allele (homozygous dominant) or one copy of each dominant and recessive allele (heterozygous dominant). By performing a test cross, one can determine whether the individual is heterozygous or homozygous dominant.
In a test cross, the individual in question is bred with another individual that is homozygous for the recessive trait and the offspring of the test cross are examined. Since the homozygous recessive individual can only pass on recessive alleles, the allele the individual in question passes on determines the phenotype of the offspring. Thus, this test yields 2 possible situations:
If any of the offspring produced express the recessive trait, the individual in question is heterozygous for the dominant allele.
If all of the offspring produced express the dominant trait, the individual in question is homozygous for the dominant allele.
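This decision rule is simple enough to state as code. A hypothetical sketch (the function name and phenotype labels are illustrative; the small-sample caveat is noted in the comments):

```python
def infer_zygosity(offspring_phenotypes):
    """Infer the tested parent's genotype from test-cross offspring."""
    if "recessive" in offspring_phenotypes:
        # A recessive offspring proves the parent carried a recessive allele.
        return "heterozygous (Aa)"
    # All-dominant offspring suggest AA, though a small brood can show
    # this by chance even for a heterozygous parent.
    return "likely homozygous dominant (AA)"

print(infer_zygosity(["dominant", "recessive", "dominant"]))  # heterozygous (Aa)
print(infer_zygosity(["dominant"] * 8))  # likely homozygous dominant (AA)
```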
History
The first uses of test crosses were in Gregor Mendel’s experiments in plant hybridization. While studying the inheritance of dominant and recessive traits in pea plants, he explains that the “signification” (now termed zygosity) of an individual for a dominant trait is determined by the expression patterns of the following generation.
Rediscovery of Mendel’s work in the early 1900s led to an explosion of experiments employing the principles of test crosses. From 1908-1911, Thomas Hunt Morgan conducted test crosses while determining the inheritance pattern of a white eye-colour mutation in Drosophila. These test cross experiments became hallmarks in the discovery of sex-linked traits.
Applications in model organisms
Test crosses have a variety of applications. Common animal organisms, called model organisms, where test crosses are often used include Caenorhabditis elegans and Drosophila melanogaster. Basic procedures for performing test crosses in these organisms are provided belo
Document 3:::
In genetics, dominance is the phenomenon of one variant (allele) of a gene on a chromosome masking or overriding the effect of a different variant of the same gene on the other copy of the chromosome. The first variant is termed dominant and the second is called recessive. This state of having two different variants of the same gene on each chromosome is originally caused by a mutation in one of the genes, either new (de novo) or inherited. The terms autosomal dominant or autosomal recessive are used to describe gene variants on non-sex chromosomes (autosomes) and their associated traits, while those on sex chromosomes (allosomes) are termed X-linked dominant, X-linked recessive or Y-linked; these have an inheritance and presentation pattern that depends on the sex of both the parent and the child (see Sex linkage). Since there is only one copy of the Y chromosome, Y-linked traits cannot be dominant or recessive. Additionally, there are other forms of dominance, such as incomplete dominance, in which a gene variant has a partial effect compared to when it is present on both chromosomes, and co-dominance, in which different variants on each chromosome both show their associated traits.
Dominance is a key concept in Mendelian inheritance and classical genetics. Letters and Punnett squares are used to demonstrate the principles of dominance in teaching, and the use of upper-case letters for dominant alleles and lower-case letters for recessive alleles is a widely followed convention. A classic example of dominance is the inheritance of seed shape in peas. Peas may be round, associated with allele R, or wrinkled, associated with allele r. In this case, three combinations of alleles (genotypes) are possible: RR, Rr, and rr. The RR (homozygous) individuals have round peas, and the rr (homozygous) individuals have wrinkled peas. In Rr (heterozygous) individuals, the R allele masks the presence of the r allele, so these individuals also have round peas. Thus, allele R is d
Document 4:::
Dihybrid cross is a cross between two individuals with two observed traits that are controlled by two distinct genes. The idea of a dihybrid cross came from Gregor Mendel when he observed pea plants that were either yellow or green and either round or wrinkled. Crossing of two heterozygous individuals will result in predictable ratios for both genotype and phenotype in the offspring. The expected phenotypic ratio of crossing heterozygous parents would be 9:3:3:1. Deviations from these expected ratios may indicate that the two traits are linked or that one or both traits has a non-Mendelian mode of inheritance.
Mendelian History
Gregor Mendel was a Czech monk who, from 1856 to 1863, bred pea plants in his monastery garden and compared the offspring to work out how traits are inherited. He first studied individual traits, then began to look at two distinct traits in the same plant. In his first such experiment, he looked at the two distinct traits of pea color (yellow or green) and pea shape (round or wrinkled). He applied the same rules as in a monohybrid cross to create the dihybrid cross. From these experiments, he determined the 9:3:3:1 phenotypic ratio seen in a dihybrid cross between heterozygotes.
Through these experiments, he was able to determine the basic law of independent assortment and law of dominance. The law of independent assortment states that traits controlled by different genes are going to be inherited independently of each other. Mendel was able to work this law out because in his crosses he was able to get all four possible phenotypes. The law of dominance states that if one dominant allele is inherited then the dominant phenotype will be expressed.
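The 9:3:3:1 prediction follows from enumerating the 16 gamete combinations, which is easy to do in code. A minimal sketch with illustrative allele symbols (R/r for seed shape, Y/y for seed color):

```python
from collections import Counter
from itertools import product

# Gametes of an RrYy parent: one allele per gene (independent assortment).
gametes = ["".join(g) for g in product("Rr", "Yy")]  # RY, Ry, rY, ry

phenotypes = Counter(
    ("round" if "R" in g1 + g2 else "wrinkled",
     "yellow" if "Y" in g1 + g2 else "green")
    for g1, g2 in product(gametes, repeat=2)
)
print(phenotypes)
# -> {('round','yellow'): 9, ('round','green'): 3,
#     ('wrinkled','yellow'): 3, ('wrinkled','green'): 1}
```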
Expected genotype and phenotype ratios
The phenotypic ratio of a cross between two heterozygotes is 9:3:3:1, where 9/16 of the individuals possess the dominant phenotype for both traits, 3/16 of the individuals possess the dominant phenotype for one trait, 3/16 of the individuals possess t
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In certain species of plants, purple flowers (P) are dominant to white flowers (p). If two heterozygous plants are crossed, what will be the phenotype of the offspring?
A. 100% purple flowers
B. 100% white flowers
C. 75% white flowers, 25% purple flowers
D. 25% white flowers, 75% purple flowers
Answer:
|
|
sciq-11428
|
multiple_choice
|
What is the small, dense region at the center of the atom that consists of positive protons and neutral neutrons?
|
[
"electron",
"proton",
"photon",
"nucleus"
] |
D
|
Relevant Documents:
Document 0:::
The protonosphere is a layer of the Earth's atmosphere (or any planet with a similar atmosphere) where the dominant components are atomic hydrogen and ionic hydrogen (protons). It is the outer part of the ionosphere, and extends to the interplanetary medium. Hydrogen dominates in the outermost layers because it is the lightest gas, and in the heterosphere, mixing is not strong enough to overcome differences in constituent gas densities. Charged particles are created by incoming ionizing radiation, mostly from solar radiation.
Document 1:::
The subatomic scale is the domain of physical size that encompasses objects smaller than an atom. It is the scale at which the atomic constituents, such as the nucleus containing protons and neutrons, and the electrons in their orbitals, become apparent.
The subatomic scale includes the many thousands of times smaller subnuclear scale, which is the scale of physical size at which constituents of the protons and neutrons - particularly quarks - become apparent.
See also
Astronomical scale the opposite end of the spectrum
Subatomic particles
Document 2:::
An atom is a particle that consists of a nucleus of protons and neutrons surrounded by an electromagnetically-bound cloud of electrons. The atom is the basic particle of the chemical elements, and the chemical elements are distinguished from each other by the number of protons that are in their atoms. For example, any atom that contains 11 protons is sodium, and any atom that contains 29 protons is copper. The number of neutrons defines the isotope of the element.
Atoms are extremely small, typically around 100 picometers across. A human hair is about a million carbon atoms wide. This is smaller than the shortest wavelength of visible light, which means humans cannot see atoms with conventional microscopes. Atoms are so small that accurately predicting their behavior using classical physics is not possible due to quantum effects.
More than 99.94% of an atom's mass is in the nucleus. Each proton has a positive electric charge, while each electron has a negative charge, and the neutrons, if any are present, have no electric charge. If the numbers of protons and electrons are equal, as they normally are, then the atom is electrically neutral. If an atom has more electrons than protons, then it has an overall negative charge, and is called a negative ion (or anion). Conversely, if it has more protons than electrons, it has a positive charge, and is called a positive ion (or cation).
The electrons of an atom are attracted to the protons in an atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by the nuclear force. This force is usually stronger than the electromagnetic force that repels the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force. In this case, the nucleus splits and leaves behind different elements. This is a form of nuclear decay.
Atoms can attach to one or more other atoms by chemical bonds to
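A minimal sketch of the charge bookkeeping described above (the helper name is hypothetical):

```python
def ion_label(protons: int, electrons: int) -> str:
    """Classify an atom by net charge: each proton contributes +1, each electron -1."""
    charge = protons - electrons
    if charge == 0:
        return "neutral atom"
    return f"cation (charge {charge:+d})" if charge > 0 else f"anion (charge {charge:+d})"

print(ion_label(11, 11))  # sodium atom: neutral atom
print(ion_label(11, 10))  # Na+ : cation (charge +1)
print(ion_label(17, 18))  # Cl- : anion (charge -1)
```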
Document 3:::
Secondary electrons are electrons generated as ionization products. They are called 'secondary' because they are generated by other radiation (the primary radiation). This radiation can be in the form of ions, electrons, or photons with sufficiently high energy, i.e. exceeding the ionization potential. Photoelectrons can be considered an example of secondary electrons where the primary radiation are photons; in some discussions photoelectrons with higher energy (>50 eV) are still considered "primary" while the electrons freed by the photoelectrons are "secondary".
Applications
Secondary electrons are also the main means of viewing images in the scanning electron microscope (SEM). The range of secondary electrons depends on the energy. Plotting the inelastic mean free path as a function of energy often shows characteristics of the "universal curve" familiar to electron spectroscopists and surface analysts. This distance is on the order of a few nanometers in metals and tens of nanometers in insulators. This small distance allows such fine resolution to be achieved in the SEM.
For SiO2, for a primary electron energy of 100 eV, the secondary electron range is up to 20 nm from the point of incidence.
See also
Delta ray
Everhart-Thornley detector
Document 4:::
The electric dipole moment is a measure of the separation of positive and negative electrical charges within a system, that is, a measure of the system's overall polarity. The SI unit for electric dipole moment is the coulomb-meter (C⋅m). The debye (D) is another unit of measurement used in atomic physics and chemistry.
Theoretically, an electric dipole is defined by the first-order term of the multipole expansion; it consists of two equal and opposite charges that are infinitesimally close together, although real dipoles have separated charge.
Elementary definition
Often in physics the dimensions of a massive object can be ignored and can be treated as a pointlike object, i.e. a point particle. Point particles with electric charge are referred to as point charges. Two point charges, one with charge +q and the other one with charge -q separated by a distance d, constitute an electric dipole (a simple case of an electric multipole). For this case, the electric dipole moment has a magnitude p = qd and is directed from the negative charge to the positive one. Some authors may split d in half and use s = d/2 since this quantity is the distance between either charge and the center of the dipole, leading to a factor of two in the definition.
A stronger mathematical definition is to use vector algebra, since a quantity with magnitude and direction, like the dipole moment of two point charges, can be expressed in vector form p = qd, where d is the displacement vector pointing from the negative charge to the positive charge. The electric dipole moment vector p also points from the negative charge to the positive charge. With this definition the dipole direction tends to align itself with an external electric field (and note that the electric flux lines produced by the charges of the dipole itself, which point from positive charge to negative charge, then tend to oppose the flux lines of the external field). Note that this sign convention is used in physics, while the opposite sign convention for th
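A small numerical sketch of the vector definition above (the example charge and separation values are hypothetical; SI units assumed):

```python
import numpy as np

def dipole_moment(q, pos_plus, pos_minus):
    """Electric dipole moment p = q d, with d pointing from -q to +q (physics convention)."""
    d = np.asarray(pos_plus, dtype=float) - np.asarray(pos_minus, dtype=float)
    return q * d

# Example: +/-1 nC charges separated by 1 mm along the z axis.
p = dipole_moment(1e-9, [0.0, 0.0, 5e-4], [0.0, 0.0, -5e-4])
print(p)  # [0. 0. 1.e-12] C*m, pointing from the negative to the positive charge
print(np.linalg.norm(p) / 3.33564e-30, "debye")  # 1 D is about 3.33564e-30 C*m
```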
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the small, dense region at the center of the atom that consists of positive protons and neutral neutrons?
A. electron
B. proton
C. photon
D. nucleus
Answer:
|
|
ai2_arc-542
|
multiple_choice
|
Which is a true statement regarding the graphing of data?
|
[
"It is always better to leave data in a table than to graph it.",
"Bar graphs are the best type of graphs for scientific data.",
"For any given set of data, there is only one correct graph or way to display it.",
"Data can be displayed in many types of graphs in order to show different things about the data."
] |
D
|
Relevant Documents:
Document 0:::
A chart (sometimes known as a graph) is a graphical representation for data visualization, in which "the data is represented by symbols, such as bars in a bar chart, lines in a line chart, or slices in a pie chart". A chart can represent tabular numeric data, functions or some kinds of quality structure and provides different info.
The term "chart" as a graphical representation of data has multiple meanings:
A data chart is a type of diagram or graph, that organizes and represents a set of numerical or qualitative data.
Maps that are adorned with extra information (map surround) for a specific purpose are often known as charts, such as a nautical chart or aeronautical chart, typically spread over several map sheets.
Other domain-specific constructs are sometimes called charts, such as the chord chart in music notation or a record chart for album popularity.
Charts are often used to ease understanding of large quantities of data and the relationships between parts of the data. Charts can usually be read more quickly than the raw data. They are used in a wide variety of fields, and can be created by hand (often on graph paper) or by computer using a charting application. Certain types of charts are more useful for presenting a given data set than others. For example, data that presents percentages in different groups (such as "satisfied, not satisfied, unsure") are often displayed in a pie chart, but may be more easily understood when presented in a horizontal bar chart. On the other hand, data that represents numbers that change over a period of time (such as "annual revenue from 1990 to 2000") might be best shown as a line chart.
Features
A chart can take a large variety of forms. However, there are common features that provide the chart with its ability to extract meaning from data.
Typically the data in a chart is represented graphically since humans can infer meaning from pictures more quickly than from text. Thus, the text is generally used only to annota
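To illustrate the point above that one data set can be displayed in several chart types to show different things, here is a short matplotlib sketch with hypothetical revenue figures:

```python
import matplotlib.pyplot as plt

years = [1990, 1992, 1994, 1996, 1998, 2000]
revenue = [1.2, 1.8, 2.1, 2.0, 2.9, 3.4]  # hypothetical annual revenue (millions)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.bar(years, revenue)          # bar chart: emphasizes individual-year magnitudes
ax2.plot(years, revenue, "o-")   # line chart: emphasizes the trend over time
ax1.set_title("Bar chart")
ax2.set_title("Line chart")
for ax in (ax1, ax2):
    ax.set_xlabel("Year")
    ax.set_ylabel("Revenue (millions)")
plt.tight_layout()
plt.show()
```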
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
This is a list of graphical methods with a mathematical basis.
Included are diagram techniques, chart techniques, plot techniques, and other forms of visualization.
There is also a list of computer graphics and descriptive geometry topics.
Simple displays
Area chart
Box plot
Dispersion fan diagram
Graph of a function
Logarithmic graph paper
Heatmap
Bar chart
Histogram
Line chart
Pie chart
Plotting
Scatterplot
Sparkline
Stemplot
Radar chart
Set theory
Venn diagram
Karnaugh diagram
Descriptive geometry
Isometric projection
Orthographic projection
Perspective (graphical)
Engineering drawing
Technical drawing
Graphical projection
Mohr's circle
Pantograph
Circuit diagram
Smith chart
Sankey diagram
Systems analysis
Binary decision diagram
Control-flow graph
Functional flow block diagram
Information flow diagram
IDEF
N2 chart
Sankey diagram
State diagram
System context diagram
Data-flow diagram
Cartography
Map projection
Orthographic projection (cartography)
Robinson projection
Stereographic projection
Dymaxion map
Topographic map
Craig retroazimuthal projection
Hammer retroazimuthal projection
Biological sciences
Cladogram
Punnett square
Systems Biology Graphical Notation
Physical sciences
Free body diagram
Greninger chart
Phase diagram
Wavenumber-frequency diagram
Bode plot
Nyquist plot
Dalitz plot
Feynman diagram
Carnot Plot
Business methods
Flowchart
Workflow
Gantt chart
Growth-share matrix (often called BCG chart)
Work breakdown structure
Control chart
Ishikawa diagram
Pareto chart (often used to prioritise outputs of an Ishikawa diagram)
Conceptual analysis
Mind mapping
Concept mapping
Conceptual graph
Entity-relationship diagram
Tag cloud, also known as word cloud
Statistics
Autocorrelation plot
Bar chart
Biplot
Box plot
Bullet graph
Chernoff faces
Control chart
Fan chart
Forest plot
Funnel plot
Galbraith plot
Histogram
Mosaic plot
Multidimensional scaling
np-chart
p-chart
Pie chart
Probability plot
Normal probability plot
Poincaré plot
Probability plot
Document 3:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of
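A minimal sketch of the structure just described, using a hypothetical three-skill domain where skill "a" is a prerequisite for "b" and "c" is independent; it represents a knowledge space as a union-closed family of feasible states and checks the antimatroid accessibility property:

```python
# Feasible knowledge states over the domain {a, b, c}; "b" requires "a" first.
states = [frozenset(s) for s in
          [set(), {"a"}, {"c"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c"}]]

def union_closed(family):
    """Knowledge spaces are closed under union of feasible states."""
    return all(s | t in family for s in family for t in family)

def accessible(family):
    """Antimatroid property: every nonempty state is reachable one skill at a time."""
    return all(any(s - {x} in family for x in s) for s in family if s)

print(union_closed(states), accessible(states))  # True True
```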
Document 4:::
Data literacy is the ability to read, understand, create, and communicate data as information. Much like literacy as a general concept, data literacy focuses on the competencies involved in working with data. It is, however, not similar to the ability to read text since it requires certain skills involving reading and understanding data.
Background
As data collection and data sharing become routine and data analysis and big data become common ideas in the news, business, government and society, it becomes more and more important for students, citizens, and readers to have some data literacy. The concept is associated with data science, which is concerned with data analysis, usually through automated means, and the interpretation and application of the results.
Data literacy is distinguished from statistical literacy since it involves understanding what data means, including the ability to read graphs and charts as well as draw conclusions from data. Statistical literacy, on the other hand, refers to the "ability to read and interpret summary statistics in everyday media" such as graphs, tables, statements, surveys, and studies.
Role of libraries and librarians
As guides for finding and using information, librarians lead workshops on data literacy for students and researchers, and also work on developing their own data literacy skills.
A set of core competencies and contents that can be used as an adaptable common framework of reference in library instructional programs across institutions and disciplines has been proposed.
Resources created by librarians include MIT's Data Management and Publishing tutorial, the EDINA Research Data Management Training (MANTRA), the University of Edinburgh's Data Library and the University of Minnesota libraries' Data Management Course for Structural Engineers.
See also
Information literacies
Information literacy
Media literacy
Numeracy
Statistical literacy
Transliteracy
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which is a true statement regarding the graphing of data?
A. It is always better to leave data in a table than to graph it.
B. Bar graphs are the best type of graphs for scientific data.
C. For any given set of data, there is only one correct graph or way to display it.
D. Data can be displayed in many types of graphs in order to show different things about the data.
Answer:
|
|
sciq-9424
|
multiple_choice
|
What kind of solid is characterized by an unorganized and unpredictable structure?
|
[
"amorphous",
"aqueous",
"magnetic",
"porous"
] |
A
|
Relevant Documents:
Document 0:::
Solid is one of the four fundamental states of matter along with liquid, gas, and plasma. The molecules in a solid are closely packed together and contain the least amount of kinetic energy. A solid is characterized by structural rigidity (as in rigid bodies) and resistance to a force applied to the surface. Unlike a liquid, a solid object does not flow to take on the shape of its container, nor does it expand to fill the entire available volume like a gas. The atoms in a solid are bound to each other, either in a regular geometric lattice (crystalline solids, which include metals and ordinary ice), or irregularly (an amorphous solid such as common window glass). Solids cannot be compressed with little pressure, whereas gases can, because the molecules in a gas are loosely packed.
The branch of physics that deals with solids is called solid-state physics, and is the main branch of condensed matter physics (which also includes liquids). Materials science is primarily concerned with the physical and chemical properties of solids. Solid-state chemistry is especially concerned with the synthesis of novel materials, as well as the science of identification and chemical composition.
Microscopic description
The atoms, molecules or ions that make up solids may be arranged in an orderly repeating pattern, or irregularly. Materials whose constituents are arranged in a regular pattern are known as crystals. In some cases, the regular ordering can continue unbroken over a large scale, for example diamonds, where each diamond is a single crystal. Solid objects that are large enough to see and handle are rarely composed of a single crystal, but instead are made of a large number of single crystals, known as crystallites, whose size can vary from a few nanometers to several meters. Such materials are called polycrystalline. Almost all common metals, and many ceramics, are polycrystalline.
In other materials, there is no long-range order in the
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Quasi-solid, Falsely-solid, or semisolid is the physical term for something whose state lies between a solid and a liquid. While similar to solids in some respects, such as having the ability to support their own weight and hold their shapes, a quasi-solid also shares some properties of liquids, such as conforming in shape to something applying pressure to it and the ability to flow under pressure. The words quasi-solid, semisolid, and semiliquid may be used interchangeably.
Quasi-solids and semisolids are sometimes described as amorphous because at the microscopic scale they have a disordered structure unlike the more common crystalline solids. They should not be confused with amorphous solids as they are not solids and exhibit properties such as flow which bulk solids do not.
Examples
Pharmaceutical and cosmetic creams, gels, and ointments, e.g. petroleum jelly, toothpaste, hand sanitizer
Foods, e.g. pudding, guacamole, salsa, mayonnaise, whipping cream, peanut butter, jelly, jam
See also
Plasticity (physics)
Viscosity
Premelting
Document 3:::
States of matter are distinguished by changes in the properties of matter associated with external factors like pressure and temperature. States are usually distinguished by a discontinuity in one of those properties: for example, raising the temperature of ice produces a discontinuity at 0°C, as energy goes into a phase transition, rather than temperature increase. The three classical states of matter are solid, liquid and gas. In the 20th century, however, increased understanding of the more exotic properties of matter resulted in the identification of many additional states of matter, none of which are observed in normal conditions.
Low-energy states of matter
Classical states
Solid: A solid holds a definite shape and volume without a container. The particles are held very close to each other.
Amorphous solid: A solid in which there is no far-range order of the positions of the atoms.
Crystalline solid: A solid in which atoms, molecules, or ions are packed in regular order.
Plastic crystal: A molecular solid with long-range positional order but with constituent molecules retaining rotational freedom.
Quasicrystal: A solid in which the positions of the atoms have long-range order, but this is not in a repeating pattern.
Liquid: A mostly non-compressible fluid. Able to conform to the shape of its container but retains a (nearly) constant volume independent of pressure.
Liquid crystal: Properties intermediate between liquids and crystals. Generally, able to flow like a liquid but exhibiting long-range order.
Gas: A compressible fluid. Not only will a gas take the shape of its container but it will also expand to fill the container.
Modern states
Plasma: Free charged particles, usually in equal numbers, such as ions and electrons. Unlike gases, plasma may self-generate magnetic fields and electric currents and respond strongly and collectively to electromagnetic forces. Plasma is very uncommon on Earth (except for the ionosphere), although it is the mo
Document 4:::
An amorphism, in chemistry, crystallography and, by extension, to other areas of the natural sciences is a substance or feature that lacks an ordered form. In the specific case of crystallography, an amorphic material is one that lacks long range (significant) crystalline order at the molecular level. In the history of chemistry, amorphism was recognised even before the discovery of the nature of the exact atomic crystalline lattice structure. The concept of amorphism can also be found in the fields of art, biology, archaeology and philosophy as a characterisation of objects without form, or with random or unstructured form.
Amorphous and Crystalline solid
In the context of solids, amorphous and crystalline are terms used to describe the structure of materials. Amorphous solids are the opposite of crystalline. The atoms or molecules in amorphous substances are arranged randomly without any long-range order. As a result, they do not have a sharp melting point. The phase transition from solid to liquid occurs over a range of temperatures. Some examples include glass, rubber and some plastics.
See also
Glass
Obsidian
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What kind of solid is characterized by an unorganized and unpredictable structure?
A. amorphous
B. aqueous
C. magnetic
D. porous
Answer:
|
|
sciq-7581
|
multiple_choice
|
What is the layer of electrons that encircle the nucleus at a distinct energy level called?
|
[
"molecular shell",
"electron shell",
"ions shell",
"vortex shell"
] |
B
|
Relevant Documents:
Document 0:::
A quantum mechanical system or particle that is bound—that is, confined spatially—can only take on certain discrete values of energy, called energy levels. This contrasts with classical particles, which can have any amount of energy. The term is commonly used for the energy levels of the electrons in atoms, ions, or molecules, which are bound by the electric field of the nucleus, but can also refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized.
In chemistry and atomic physics, an electron shell, or principal energy level, may be thought of as the orbit of one or more electrons around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called the "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on farther and farther from the nucleus. The shells correspond with the principal quantum numbers (n = 1, 2, 3, 4 ...) or are labeled alphabetically with letters used in the X-ray notation (K, L, M, N ...).
Each shell can contain only a fixed number of electrons: The first shell can hold up to two electrons, the second shell can hold up to eight (2 + 6) electrons, the third shell can hold up to 18 (2 + 6 + 10) and so on. The general formula is that the nth shell can in principle hold up to 2n^2 electrons. Since electrons are electrically attracted to the nucleus, an atom's electrons will generally occupy outer shells only if the more inner shells have already been completely filled by other electrons. However, this is not a strict requirement: atoms may have two or even three incomplete outer shells. (See Madelung rule for more details.) For an explanation of why electrons exist in these shells see electron configuration.
If the potential energy is set to zero at infinite distance from the atomic nucleus or molecule, the usual convention, then bound electron states have negative pot
Document 1:::
In chemistry and atomic physics, an electron shell may be thought of as an orbit that electrons follow around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called the "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on farther and farther from the nucleus. The shells correspond to the principal quantum numbers (n = 1, 2, 3, 4 ...) or are labeled alphabetically with the letters used in X-ray notation (K, L, M, ...). A useful guide when understanding electron shells in atoms is to note that each row on the conventional periodic table of elements represents an electron shell.
Each shell can contain only a fixed number of electrons: the first shell can hold up to two electrons, the second shell can hold up to eight (2 + 6) electrons, the third shell can hold up to 18 (2 + 6 + 10) and so on. The general formula is that the nth shell can in principle hold up to 2n^2 electrons. For an explanation of why electrons exist in these shells, see electron configuration.
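The 2n^2 capacity rule quoted above is easy to tabulate; a one-line sketch:

```python
# Maximum electron capacity of the nth shell, per the 2n^2 rule quoted above.
for n in range(1, 5):
    print(f"shell n={n}: up to {2 * n**2} electrons")  # 2, 8, 18, 32
```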
Each shell consists of one or more subshells, and each subshell consists of one or more atomic orbitals.
History
In 1913 Bohr proposed a model of the atom, giving the arrangement of electrons in their sequential orbits. At that time, Bohr allowed the capacity of the inner orbit of the atom to increase to eight electrons as the atoms got larger, and "in the scheme given below the number of electrons in this [outer] ring is arbitrarily put equal to the normal valency of the corresponding element." Using these and other constraints, he proposed configurations that are in accord with those now known only for the first six elements. "From the above we are led to the following possible scheme for the arrangement of the electrons in light atoms:"
The shell terminology comes from Arnold Sommerfeld's modification of the 1913 Bohr model. During this period Bohr was working with Walther Kossel, whose papers in 1914 and in 1916 called the or
Document 2:::
The protonosphere is a layer of the Earth's atmosphere (or any planet with a similar atmosphere) where the dominant components are atomic hydrogen and ionic hydrogen (protons). It is the outer part of the ionosphere, and extends to the interplanetary medium. Hydrogen dominates in the outermost layers because it is the lightest gas, and in the heterosphere, mixing is not strong enough to overcome differences in constituent gas densities. Charged particles are created by incoming ionizing radiation, mostly from solar radiation.
Document 3:::
In nuclear physics, atomic physics, and nuclear chemistry, the nuclear shell model is a model of the atomic nucleus that uses the Pauli exclusion principle to describe the structure of nuclei in terms of energy levels. The first shell model was proposed by Dmitri Ivanenko (together with E. Gapon) in 1932. The model was developed in 1949 following independent work by several physicists, most notably Maria Goeppert Mayer and J. Hans D. Jensen, who shared half of the 1963 Nobel Prize in Physics for their contributions.
The nuclear shell model is partly analogous to the atomic shell model, which describes the arrangement of electrons in an atom, in that a filled shell results in better stability. When adding nucleons (protons and neutrons) to a nucleus, there are certain points where the binding energy of the next nucleon is significantly less than the last one. This observation, that there are specific magic quantum numbers of nucleons (2, 8, 20, 28, 50, 82, 126) which are more tightly bound than the following higher number, is the origin of the shell model.
The shells for protons and neutrons are independent of each other. Therefore, there can exist both "magic nuclei", in which one nucleon type or the other is at a magic number, and "doubly magic quantum nuclei", where both are. Due to some variations in orbital filling, the upper magic numbers are 126 and, speculatively, 184 for neutrons, but only 114 for protons, playing a role in the search for the so-called island of stability. Some semi-magic numbers have been found, notably Z = 40, which gives the nuclear shell filling for the various elements; 16 may also be a magic number.
In order to get these numbers, the nuclear shell model starts from an average potential with a shape somewhere between the square well and the harmonic oscillator. To this potential, a spin orbit term is added. Even so, the total perturbation does not coincide with experiment, and an empirical spin orbit coupling must be added with at le
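As a numerical aside on the paragraph above (a sketch, assuming the textbook result that a bare 3D harmonic-oscillator major shell N holds (N+1)(N+2) nucleons once spin is included): the oscillator alone reproduces only the first three magic numbers, which is why the spin-orbit term must be added.

```python
from itertools import accumulate

# Nucleon capacity of oscillator shell N: (N+1)(N+2)/2 spatial states x 2 spin states.
degeneracies = [(N + 1) * (N + 2) for N in range(6)]
print(list(accumulate(degeneracies)))
# [2, 8, 20, 40, 70, 112] -> matches the magic numbers only up to 20;
# spin-orbit splitting is needed to recover 28, 50, 82, 126.
```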
Document 4:::
In atomic physics and quantum chemistry, the electron configuration is the distribution of electrons of an atom or molecule (or other physical structure) in atomic or molecular orbitals. For example, the electron configuration of the neon atom is 1s^2 2s^2 2p^6, meaning that the 1s, 2s and 2p subshells are occupied by 2, 2 and 6 electrons respectively.
Electronic configurations describe each electron as moving independently in an orbital, in an average field created by all other orbitals. Mathematically, configurations are described by Slater determinants or configuration state functions.
According to the laws of quantum mechanics, for systems with only one electron, a level of energy is associated with each electron configuration and in certain conditions, electrons are able to move from one configuration to another by the emission or absorption of a quantum of energy, in the form of a photon.
Knowledge of the electron configuration of different atoms is useful in understanding the structure of the periodic table of elements. This is also useful for describing the chemical bonds that hold atoms together, and for understanding the chemical formulas of compounds and the geometries of molecules. In bulk materials, this same idea helps explain the peculiar properties of lasers and semiconductors.
Shells and subshells
Electron configuration was first conceived under the Bohr model of the atom, and it is still common to speak of shells and subshells despite the advances in understanding of the quantum-mechanical nature of electrons.
An electron shell is the set of allowed states that share the same principal quantum number, n (the number before the letter in the orbital label), that electrons may occupy. An atom's nth electron shell can accommodate 2n^2 electrons. For example, the first shell can accommodate 2 electrons, the second shell 8 electrons, the third shell 18 electrons and so on. The factor of two arises because the allowed states are doubled due to electron spin—each
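A small sketch of the shell and subshell filling described above, using the Madelung (n + l) ordering to reproduce the neon configuration quoted earlier (a simplified model; real elements such as chromium and copper deviate from it):

```python
def electron_configuration(n_electrons, max_n=7):
    """Fill subshells in Madelung order (sort by n + l, ties broken by n); capacity 2(2l + 1)."""
    letters = "spdfghi"
    subshells = sorted(((n, l) for n in range(1, max_n + 1) for l in range(n)),
                       key=lambda nl: (nl[0] + nl[1], nl[0]))
    config, remaining = [], n_electrons
    for n, l in subshells:
        if remaining <= 0:
            break
        take = min(remaining, 2 * (2 * l + 1))
        config.append(f"{n}{letters[l]}{take}")
        remaining -= take
    return " ".join(config)

print(electron_configuration(10))  # neon: 1s2 2s2 2p6
```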
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the layer of electrons that encircle the nucleus at a distinct energy level called?
A. molecular shell
B. electron shell
C. ions shell
D. vortex shell
Answer:
|
|
sciq-4008
|
multiple_choice
|
Most scientists think that ordinary matter is less than half of the total matter in the universe; the remaining part includes what mysterious entity?
|
[
"mystery matter",
"magic matter",
"dark matter",
"cold matter"
] |
C
|
Relevant Documents:
Document 0:::
13 Things That Don't Make Sense is a non-fiction book by the British writer Michael Brooks, published in both the UK and the US during 2008.
The British subtitle is "The Most Intriguing Scientific Mysteries of Our Time" while the American is "The Most Baffling...".
Based on an article Brooks wrote for New Scientist in March 2005, the book, aimed at the general reader rather than the science community, contains discussion and description of a number of unresolved issues in science. It is a literary effort to discuss some of the inexplicable anomalies that after centuries science still cannot completely comprehend.
Chapter 1
The Missing Universe. This chapter deals with astronomy and theoretical physics and the ultimate fate of the universe, in particular the search for understanding of dark matter and dark energy and includes discussion of:
The work of astronomers Vesto Slipher and then Edwin Hubble in demonstrating the universe is expanding;
Vera Rubin's investigation of galaxy rotation curves that suggest something other than gravity is preventing galaxies from spinning apart, which led to the revival of unobserved "dark matter" theory;
Experimental efforts to discover dark matter, including the search for the hypothetical neutralino and other weakly interacting massive particles);
The study of supernovae at Lawrence Berkeley National Laboratory and Harvard University (under Robert Kirshner) that point to an accelerating universe powered by "dark energy" possibly vacuum energy;
The assertion that the proposed modified Newtonian dynamics hypothesis and the accelerating universe disproves the dark matter theory.
Chapter 2
The Pioneer Anomaly. This discusses the Pioneer 10 and Pioneer 11 space probes, which appear to be veering off course and drifting towards the sun. At the time of writing of the book there was a growing speculation as to whether this phenomenon could be explained by a yet-undetermined fault in the rockets' systems or wheth
Document 1:::
Physics education or physics teaching refers to the education methods currently used to teach physics. The occupation is called physics educator or physics teacher. Physics education research refers to an area of pedagogical research that seeks to improve those methods. Historically, physics has been taught at the high school and college level primarily by the lecture method together with laboratory exercises aimed at verifying concepts taught in the lectures. These concepts are better understood when lectures are accompanied by demonstrations, hands-on experiments, and questions that require students to ponder what will happen in an experiment and why. Students who participate in active learning, for example with hands-on experiments, learn through self-discovery. By trial and error they learn to change their preconceptions about phenomena in physics and discover the underlying concepts. Physics education is part of the broader area of science education.
Ancient Greece
Aristotle wrote what is considered now as the first textbook of physics. Aristotle's ideas were taught unchanged until the Late Middle Ages, when scientists started making discoveries that didn't fit them. For example, Copernicus' discovery contradicted Aristotle's idea of an Earth-centric universe. Aristotle's ideas about motion weren't displaced until the end of the 17th century, when Newton published his ideas.
Today's physics students often think of physics concepts in Aristotelian terms, despite being taught only Newtonian concepts.
Hong Kong
High schools
In Hong Kong, physics is a subject for public examination. Local students in Form 6 take the public exam of Hong Kong Diploma of Secondary Education (HKDSE).
Compared to other syllabuses, such as GCSE and GCE, which cover a wider and broader range of topics, the Hong Kong syllabus goes deeper and involves more challenging calculations. Topics are narrowed down to a smaller number compared to the A-level due to the insufficient teachi
Document 2:::
Physics First is an educational program in the United States, that teaches a basic physics course in the ninth grade (usually 14-year-olds), rather than the biology course which is more standard in public schools. This course relies on the limited math skills that the students have from pre-algebra and algebra I. With these skills students study a broad subset of the introductory physics canon with an emphasis on topics which can be experienced kinesthetically or without deep mathematical reasoning. Furthermore, teaching physics first is better suited for English Language Learners, who would be overwhelmed by the substantial vocabulary requirements of Biology.
Physics First began as an organized movement among educators around 1990, and has been slowly catching on throughout the United States. The most prominent movement championing Physics First is Leon Lederman's ARISE (American Renaissance in Science Education).
Many proponents of Physics First argue that turning this order around lays the foundations for better understanding of chemistry, which in turn will lead to more comprehension of biology. Due to the tangible nature of most introductory physics experiments, Physics First also lends itself well to an introduction to inquiry-based science education, where students are encouraged to probe the workings of the world in which they live.
The majority of high schools which have implemented "physics first" do so by way of offering two separate classes, at two separate levels: simple physics concepts in 9th grade, followed by more advanced physics courses in 11th or 12th grade. In schools with this curriculum, nearly all 9th grade students take a "Physical Science", or "Introduction to Physics Concepts" course. These courses focus on concepts that can be studied with skills from pre-algebra and algebra I. With these ideas in place, students then can be exposed to ideas with more physics related content in chemistry, and other science electives. After th
Document 3:::
The SLAC Theory Group is the hub of theoretical particle physics research at the SLAC National Accelerator Laboratory at Stanford University. It is a subdivision of the Elementary Particle Physics (EPP) Division at SLAC.
Research
The group has a diverse research program, specializing in areas of quantum field theory, beyond the standard model physics, dark matter, neutrinos, and collider phenomenology.
Members
The group is currently led by 9 faculty members, and has a dozen postdoctoral researchers and students at any given time.
Notable physicists who were students or postdoctoral researchers in the SLAC Theory Group include Nima Arkani-Hamed, Thomas Appelquist, Mirjam Cvetic, Michael Dine, John Ellis, Rouven Essig, Edward Farhi, Steven Frautschi, Joshua Frieman, Roscoe Giles, Yuval Grossman, Jack F. Gunion, Alan Guth, Howard Haber, Claude Itzykson, Robert Jaffe, David E. Kaplan, Igor Klebanov, Peter Lepage, Christopher Llewellyn Smith, Kirill Melnikov, Stephen Parke, Maxim Perelstein, Joel Primack, Joseph Polchinski, Davison Soper, Henry Tye, Mark Wise, and Tung-Mow Yan.
Past and present members of the SLAC Theory Group have received a total of at least 3 Breakthrough in Fundamental Physics Prizes ($3 million USD prize), 10 Sakurai Prizes ($10,000 USD), 5 Dirac Medals ($5,000 USD), 4 New Horizons in Physics Prizes ($100,000 USD), and 2 Gribov Medals ($5,000 USD).
Faculty
Current and former faculty members in the SLAC Theory Group include:
James Bjorken, discoverer of Bjorken Scaling (light-cone scaling) and Bjorken Sum Rule, 2004 Dirac Medal recipient
Stanley Brodsky, 2007 Sakurai Prize recipient for applications of perturbative quantum field theory to the analysis of hard exclusive strong interaction processes
Lance Dixon, pioneer of new methods to calculate Feynman diagrams in quantum chromodynamics and other Yang–Mills theories; 2014 recipient of the Sakurai Prize and 2023 recipient of the Galileo Medal
Sidney Drell, known for his contributions to
Document 4:::
Stable massive particles (SMPs) are hypothetical particles that are long-lived and have appreciable mass. The precise definition varies depending on the different experimental or observational searches. SMPs may be defined as being at least as massive as electrons, and not decaying during its passage through a detector. They can be neutral or charged or carry a fractional charge, and interact with matter through gravitational force, strong force, weak force, electromagnetic force or any unknown force.
If new SMPs are ever discovered, several questions related to the origin and constituent of dark matter, and about the unification of four fundamental forces may be answered.
Collider experiments
Heavy, exotic particles interacting with matter and which can be directly detected through collider experiments are termed as stable massive particles or SMPs. More specifically a SMP is defined to be a particle that can pass through a detector without decaying and can undergo electromagnetic or strong interaction with matter. Searches for SMPs have been carried out across a spectrum of collision experiments such as lepton–hadron, hadron–hadron, and electron–positron. Although none of these experiments have detected an SMP, they have put substantial constraints on the nature of SMPs.
ATLAS Experiment
During the proton–proton collisions with center of mass energy equal to 13 TeV at the ATLAS experiment, a search for charged SMPs was carried out. In this case SMPs were defined as particles with mass significantly more than that of standard model particles, sufficient lifetime to reach the ATLAS hadronic calorimeter and with measurable electric charge while it passes through the tracking chambers.
MoEDAL experiment
The MoEDAL experiment search for, among others, highly ionizing SMPs and pseudo-SMPs.
Non-collider experiments
In the case of the non-collider experiments, SMPs are defined as sufficiently long-lived particles which exist either as relics of the big bang sin
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Most scientists think that ordinary matter is less than half of the total matter in the universe; the remaining part includes what mysterious entity?
A. mystery matter
B. magic matter
C. dark matter
D. cold matter
Answer:
|
|
sciq-10780
|
multiple_choice
|
A _______ of biology is a fundamental concept that is just as true for a bee or a sunflower as it is for us.
|
[
"theory",
"notion",
"hypothesis",
"principle"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of
Document 2:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 3:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
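A minimal sketch of the formula scoring just described (raw score only; the scaling to the 200-800 range used a separate conversion table not reproduced here):

```python
def raw_score(correct: int, incorrect: int, blank: int) -> float:
    """SAT II formula scoring: +1 per correct answer, -1/4 per incorrect, 0 per blank."""
    assert correct + incorrect + blank == 80, "the test had 80 questions"
    return correct - 0.25 * incorrect

print(raw_score(60, 12, 8))  # 57.0
```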
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 4:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind: neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases, which has overall aided the understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A _______ of biology is a fundamental concept that is just as true for a bee or a sunflower as it is for us.
A. theory
B. notion
C. hypothesis
D. principle
Answer:
|
|
sciq-6537
|
multiple_choice
|
What is another common term for single-unit smooth muscle?
|
[
"energies muscle",
"visceral muscle",
"lateral muscle",
"abnormal muscle"
] |
B
|
Relevant Documents:
Document 0:::
In biomechanics, Hill's muscle model refers to the 3-element model consisting of a contractile element (CE) in series with a lightly-damped elastic spring element (SE) and in parallel with a lightly-damped elastic parallel element (PE). Within this model, the estimated force-velocity relation for the CE element is usually modeled by what is commonly called Hill's equation, which was based on careful experiments involving tetanized muscle contraction where various muscle loads and associated velocities were measured. It was derived by the famous physiologist Archibald Vivian Hill, who by 1938, when he introduced this model and equation, had already won the Nobel Prize for Physiology. He continued to publish in this area through 1970. There are many forms of the basic "Hill-based" or "Hill-type" models, with hundreds of publications having used this model structure for experimental and simulation studies. Most major musculoskeletal simulation packages make use of this model.
AV Hill's force-velocity equation for tetanized muscle
This is a popular state equation applicable to skeletal muscle that has been stimulated to show tetanic contraction. It relates tension to velocity with regard to the internal thermodynamics. The equation is

(F + a)(v + b) = b(F0 + a)

where
F is the tension (or load) in the muscle
v is the velocity of contraction
F0 is the maximum isometric tension (or load) generated in the muscle
a is the coefficient of shortening heat
b = a·v0/F0
v0 is the maximum velocity, when F = 0
Although Hill's equation looks very much like the van der Waals equation, the former has units of energy dissipation, while the latter has units of energy. Hill's equation demonstrates that the relationship between F and v is hyperbolic. Therefore, the higher the load applied to the muscle, the lower the contraction velocity. Similarly, the higher the contraction velocity, the lower the tension in the muscle. This hyperbolic form has been found to fit the empirical constant only during isotonic contractions near resting
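To make the hyperbolic force-velocity trade-off concrete, here is a small sketch that solves Hill's equation for velocity at a few loads (the parameter values, normalized so that F0 = v0 = 1 with a/F0 = 0.25, are hypothetical but of a typical magnitude):

```python
def hill_velocity(F, F0=1.0, a=0.25, v0=1.0):
    """Contraction velocity from Hill's equation (F + a)(v + b) = b(F0 + a)."""
    b = a * v0 / F0
    return b * (F0 - F) / (F + a)

for F in (0.0, 0.25, 0.5, 0.9):
    print(f"load F={F:.2f} -> velocity v={hill_velocity(F):.3f}")
# Velocity falls as load rises: v = v0 at F = 0 and v = 0 at F = F0.
```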
Document 1:::
In an isotonic contraction, tension remains the same, whilst the muscle's length changes. Isotonic contractions differ from isokinetic contractions in that in isokinetic contractions the muscle speed remains constant. While superficially identical, as the muscle's force changes via the length-tension relationship during a contraction, an isotonic contraction will keep force constant while velocity changes, but an isokinetic contraction will keep velocity constant while force changes. A near-isotonic contraction is known as an auxotonic contraction.
There are two types of isotonic contractions: (1) concentric and (2) eccentric. In a concentric contraction, the muscle tension rises to meet the resistance, then remains the same as the muscle shortens. In eccentric, the muscle lengthens due to the resistance being greater than the force the muscle is producing.
Concentric
This type is typical of most exercise. The external force on the muscle is less than the force the muscle is generating - a shortening contraction. The effect is not visible during the classic biceps curl, which is in fact auxotonic because the resistance (torque due to the weight being lifted) does not remain the same through the exercise. Tension is highest at a parallel to the floor level, and eases off above and below this point. Therefore, tension changes as well as muscle length.
Eccentric
There are two main features to note regarding eccentric contractions. First, the absolute tensions achieved can be very high relative to the muscle's maximum tetanic tension generating capacity (you can set down a much heavier object than you can lift). Second, the absolute tension is relatively independent of lengthening velocity.
Muscle injury and soreness are selectively associated with eccentric contraction. The strengthening effect of exercises that involve eccentric contractions is lower than that of concentric exercises. However, because higher levels of tension are easier to attain during exercises th
Document 2:::
Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education.
Structure
A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior.
Prior to the widespread introduction of SBAs into medical education, the typical form of examination was true-false multiple choice questions. During the 2000s, however, educators came to regard SBAs as superior.
Document 3:::
Anatomical terminology is used to uniquely describe aspects of skeletal muscle, cardiac muscle, and smooth muscle such as their actions, structure, size, and location.
Types
There are three types of muscle tissue in the body: skeletal, smooth, and cardiac.
Skeletal muscle
Skeletal muscle, or "voluntary muscle", is a striated muscle tissue that primarily joins to bone with tendons. Skeletal muscle enables movement of bones, and maintains posture. The widest part of a muscle that pulls on the tendons is known as the belly.
Muscle slip
A muscle slip is a slip of muscle that can either be an anatomical variant, or a branching of a muscle as in rib connections of the serratus anterior muscle.
Smooth muscle
Smooth muscle is involuntary and found in parts of the body where it conveys action without conscious intent. The majority of this type of muscle tissue is found in the digestive and urinary systems where it acts by propelling forward food, chyme, and feces in the former and urine in the latter. Other places smooth muscle can be found are within the uterus, where it helps facilitate birth, and the eye, where the pupillary sphincter controls pupil size.
Cardiac muscle
Cardiac muscle is specific to the heart. It is also involuntary in its movement, and is additionally self-excitatory, contracting without outside stimuli.
Actions of skeletal muscle
As well as anatomical terms of motion, which describe the motion made by a muscle, unique terminology is used to describe the action of a set of muscles.
Agonists and antagonists
Agonist muscles and antagonist muscles are muscles that cause or inhibit a movement.
Agonist muscles are also called prime movers since they produce most of the force, and control of an action. Agonists cause a movement to occur through their own activation. For example, the triceps brachii contracts, producing a shortening (concentric) contraction, during the up phase of a push-up (elbow extension). During the down phase of a push-up, th
Document 4:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.3.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is another common term for single-unit smooth muscle?
A. energies muscle
B. visceral muscle
C. lateral muscle
D. abnormal muscle
Answer:
|
|
sciq-595
|
multiple_choice
|
What term describes an imbalance of attractive forces between liquid molecules at the surface of a liquid?
|
[
"obsolute tension",
"surface tension",
"molecular tension",
"currents tension"
] |
B
|
Relevant Documents:
Document 0:::
Surface tension is the tendency of liquid surfaces at rest to shrink into the minimum surface area possible. Surface tension is what allows objects with a higher density than water, such as razor blades and insects (e.g. water striders), to float on a water surface without becoming even partly submerged.
At liquid–air interfaces, surface tension results from the greater attraction of liquid molecules to each other (due to cohesion) than to the molecules in the air (due to adhesion).
There are two primary mechanisms in play. One is an inward force on the surface molecules causing the liquid to contract. Second is a tangential force parallel to the surface of the liquid. This tangential force is generally referred to as the surface tension. The net effect is the liquid behaves as if its surface were covered with a stretched elastic membrane. But this analogy must not be taken too far as the tension in an elastic membrane is dependent on the amount of deformation of the membrane while surface tension is an inherent property of the liquid–air or liquid–vapour interface.
Because of the relatively high attraction of water molecules to each other through a web of hydrogen bonds, water has a higher surface tension (72.8 millinewtons (mN) per meter at 20 °C) than most other liquids. Surface tension is an important factor in the phenomenon of capillarity.
Surface tension has the dimension of force per unit length, or of energy per unit area. The two are equivalent, but when referring to energy per unit of area, it is common to use the term surface energy, which is a more general term in the sense that it applies also to solids.
In materials science, surface tension is used for either surface stress or surface energy.
Causes
Due to the cohesive forces, a molecule located away from the surface is pulled equally in every direction by neighboring liquid molecules, resulting in a net force of zero. The molecules at the surface do not have the same molecules on all sides of th
Document 1:::
The Szyszkowski Equation has been used by Meissner and Michaels to describe the decrease in surface tension of aqueous solutions of carboxylic acids, alcohols and esters at varying mole fractions. It describes the exponential decrease of the surface tension at low concentrations reasonably but should be used only at concentrations below 1 mole%.
Equation

σm = σw(1 − 0.411·log10(1 + x/a))

with:
σm is the surface tension of the mixture
σw is the surface tension of pure water
a is a component-specific constant (see table below)
x is the mole fraction of the solvated component
The equation can be rearranged to be explicit in a:

a = x / (10^((1 − σm/σw)/0.411) − 1)

This allows the direct calculation of the component-specific parameter a from experimental data.
The equation can also be written as:

γ = γ0 − (RT/ω)·ln(1 + x/a)

with:
γ is the surface tension of the mixture
γ0 is the surface tension of pure water
R is the ideal gas constant, 8.31 J/(mol·K)
T is the temperature in K
ω is the cross-sectional area of the surfactant molecules at the surface
The surface tension of pure water is dependent on temperature. At room temperature (298 K), it is equal to 71.97 mN/m.
Parameters
Meissner and Michaels published the following values of the constant a:
Example
The following table and diagram show experimentally determined surface tensions in the mixture of water and propionic acid.
This example shows good agreement between the published value a = 2.6·10⁻³ and the calculated value a = 2.59·10⁻³ at the smallest given mole fraction of 0.00861, but at higher concentrations of propionic acid the value of a increases considerably, showing deviations from the predicted value.
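A minimal sketch of the rearranged form above, computing the component-specific constant a from a single measurement; the data point used (x = 0.00861, σm = 60.0 mN/m) is an assumed illustration, not the published propionic-acid value, while σw = 71.97 mN/m is the room-temperature figure quoted above.

```python
def szyszkowski_a(x, sigma_m, sigma_w=71.97):
    """Component-specific constant a from the rearranged Szyszkowski equation."""
    return x / (10 ** ((1 - sigma_m / sigma_w) / 0.411) - 1)

# Hypothetical measurement: mole fraction 0.00861, mixture surface tension 60.0 mN/m.
print(szyszkowski_a(x=0.00861, sigma_m=60.0))
```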
See also
Bohdan Szyszkowski
Document 2:::
In physics, the Young–Laplace equation is an algebraic equation that describes the capillary pressure difference sustained across the interface between two static fluids, such as water and air, due to the phenomenon of surface tension or wall tension, although use of the latter is only applicable if assuming that the wall is very thin. The Young–Laplace equation relates the pressure difference to the shape of the surface or wall and it is fundamentally important in the study of static capillary surfaces. It is a statement of normal stress balance for static fluids meeting at an interface, where the interface is treated as a surface (zero thickness):

Δp = −γ∇·n̂ = 2γH = γ(1/R1 + 1/R2)

where Δp is the Laplace pressure, the pressure difference across the fluid interface (the exterior pressure minus the interior pressure), γ is the surface tension (or wall tension), n̂ is the unit normal pointing out of the surface, H is the mean curvature, and R1 and R2 are the principal radii of curvature. Note that only normal stress is considered; this is because it has been shown that a static interface is possible only in the absence of tangential stress.
The equation is named after Thomas Young, who developed the qualitative theory of surface tension in 1805, and Pierre-Simon Laplace who completed the mathematical description in the following year. It is sometimes also called the Young–Laplace–Gauss equation, as Carl Friedrich Gauss unified the work of Young and Laplace in 1830, deriving both the differential equation and boundary conditions using Johann Bernoulli's virtual work principles.
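For a spherical droplet both principal radii equal the droplet radius R, so the equation above reduces to Δp = 2γ/R. A minimal sketch, using the surface tension of water quoted earlier and an assumed 1 µm droplet radius:

```python
def laplace_pressure(gamma, r1, r2):
    """Pressure difference (Pa) across an interface with principal radii r1, r2 (m)."""
    return gamma * (1.0 / r1 + 1.0 / r2)

gamma_water = 72.8e-3  # N/m at 20 degrees C, from the surface-tension passage above
radius = 1e-6          # assumed 1 micrometre droplet
print(laplace_pressure(gamma_water, radius, radius))  # about 1.456e5 Pa
```

The inverse dependence on radius is exactly why the small emulsion droplets discussed below require extra energy to form.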
Soap films
If the pressure difference is zero, as in a soap film without gravity, the interface will assume the shape of a minimal surface.
Emulsions
The equation also explains the energy required to create an emulsion. To form the small, highly curved droplets of an emulsion, extra energy is required to overcome the large pressure that results from their small radius.
The Laplace pressure, which is greater for smaller droplets,
Document 3:::
An ideal solid surface is flat, rigid, perfectly smooth, and chemically homogeneous, and has zero contact angle hysteresis. Zero hysteresis implies the advancing and receding contact angles are equal.
In other words, only one thermodynamically stable contact angle exists. When a drop of liquid is placed on such a surface, the characteristic contact angle is formed as depicted in Fig. 1. Furthermore, on an ideal surface, the drop will return to its original shape if it is disturbed. The following derivations apply only to ideal solid surfaces; they are only valid for the state in which the interfaces are not moving and the phase boundary line exists in equilibrium.
Minimization of energy, three phases
Figure 3 shows the line of contact where three phases meet. In equilibrium, the net force per unit length acting along the boundary line between the three phases must be zero. The components of net force in the direction along each of the interfaces are given by:
where α, β, and θ are the angles shown and γij is the surface energy between the two indicated phases. These relations can also be expressed by an analog to a triangle known as Neumann’s triangle, shown in Figure 4. Neumann’s triangle is consistent with the geometrical restriction that α + β + θ = 2π, and applying the law of sines and law of cosines to it produces relations that describe how the interfacial angles depend on the ratios of surface energies.
Because these three surface energies form the sides of a triangle, they are constrained by the triangle inequalities, γij < γjk + γik, meaning that no one of the surface tensions can exceed the sum of the other two. If three fluids with surface energies that do not follow these inequalities are brought into contact, no equilibrium configuration consistent with Figure 3 will exist.
Simplification to planar geometry, Young's relation
If the β phase is replaced by a flat rigid surface, as shown in Figure 5, then β = π, and the second net force equation simplifies to the Y
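For the planar rigid-surface case this balance reduces to Young's relation, γ_sg = γ_sl + γ_lg·cos θ. A minimal sketch solving it for the equilibrium contact angle; the three interfacial energies are assumed illustrative values in mN/m:

```python
import math

def young_contact_angle(gamma_sg, gamma_sl, gamma_lg):
    """Equilibrium contact angle (degrees) from Young's relation."""
    return math.degrees(math.acos((gamma_sg - gamma_sl) / gamma_lg))

# Assumed energies (mN/m): solid-gas 40, solid-liquid 10, liquid-gas 72.8 (water).
print(young_contact_angle(40.0, 10.0, 72.8))  # about 65.7 degrees
```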
Document 4:::
Surface force, denoted fs, is the force that acts across an internal or external surface element in a material body.
Normal forces and shear forces between objects are types of surface force. All cohesive forces and contact forces between objects are considered as surface forces.
Surface force can be decomposed into two perpendicular components: normal forces and shear forces. A normal force acts normally over an area and a shear force acts tangentially over an area.
Equations for surface force
Surface force due to pressure
f = p·A, where f = force, p = pressure, and A = area on which a uniform pressure acts
Examples
Pressure-related surface force
Since pressure is force per unit area (N/m²) and area is measured in m², a uniform pressure p acting over an area A will produce a surface force f = p·A, measured in newtons.
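A one-function sketch of f = p·A; the pressure and area values are arbitrary illustrations, not the figures elided from the passage.

```python
def surface_force(pressure, area):
    """Force (N) from a uniform pressure (Pa) acting over an area (m^2)."""
    return pressure * area

print(surface_force(pressure=101_325.0, area=2.0))  # atmospheric pressure over 2 m^2: ~202,650 N
```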
See also
Body force
Contact force
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What term describes an imbalance of attractive forces between liquid molecules at the surface of a liquid?
A. obsolute tension
B. surface tension
C. molecular tension
D. currents tension
Answer:
|
|
sciq-9028
|
multiple_choice
|
The electric and magnetic fields are closely related and propagate as what?
|
[
"thermal energy",
"electromagnetic wave",
"sound wave",
"mechanical wave"
] |
B
|
Relevant Documents:
Document 0:::
There are various mathematical descriptions of the electromagnetic field that are used in the study of electromagnetism, one of the four fundamental interactions of nature. In this article, several approaches are discussed, although the equations are in terms of electric and magnetic fields, potentials, and charges with currents, generally speaking.
Vector field approach
The most common description of the electromagnetic field uses two three-dimensional vector fields called the electric field and the magnetic field. These vector fields each have a value defined at every point of space and time and are thus often regarded as functions of the space and time coordinates. As such, they are often written as (electric field) and (magnetic field).
If only the electric field (E) is non-zero, and is constant in time, the field is said to be an electrostatic field. Similarly, if only the magnetic field (B) is non-zero and is constant in time, the field is said to be a magnetostatic field. However, if either the electric or magnetic field has a time-dependence, then both fields must be considered together as a coupled electromagnetic field using Maxwell's equations.
Maxwell's equations in the vector field approach
The behaviour of electric and magnetic fields, whether in cases of electrostatics, magnetostatics, or electrodynamics (electromagnetic fields), is governed by the Maxwell–Heaviside equations:

Maxwell's equations (vector fields)
∇·E = ρ/ε0 (Gauss's law)
∇·B = 0 (Gauss's law for magnetism)
∇×E = −∂B/∂t (Faraday's law)
∇×B = μ0J + μ0ε0 ∂E/∂t (Ampère–Maxwell law)
where ρ is the charge density, which can (and often does) depend on time and position, ε0 is the electric constant, μ0 is the magnetic constant, and J is the current per unit area, also a function of time and position. The equations
Document 1:::
Relativistic electromagnetism is a physical phenomenon explained in electromagnetic field theory due to Coulomb's law and Lorentz transformations.
Electromechanics
After Maxwell proposed the differential equation model of the electromagnetic field in 1873, the mechanism of action of fields came into question, for instance in Kelvin's master class held at Johns Hopkins University in 1884, which was commemorated a century later.
The requirement that the equations remain consistent when viewed from various moving observers led to special relativity, a geometric theory of 4-space where intermediation is by light and radiation. The spacetime geometry provided a context for technical description of electric technology, especially generators, motors, and lighting at first. The Coulomb force was generalized to the Lorentz force. For example, with this model transmission lines and power grids were developed and radio frequency communication explored.
An effort to mount a full-fledged electromechanics on a relativistic basis is seen in the work of Leigh Page, from the project outline in 1912 to his textbook Electrodynamics (1940). The interplay (according to the differential equations) of electric and magnetic fields as viewed by moving observers is examined. What is charge density in electrostatics becomes proper charge density and generates a magnetic field for a moving observer.
A revival of interest in this method for education and training of electrical and electronics engineers broke out in the 1960s after Richard Feynman’s textbook.
Rosser’s book Classical Electromagnetism via Relativity was popular, as was Anthony French’s treatment in his textbook which illustrated diagrammatically the proper charge density. One author proclaimed, "Maxwell — Out of Newton, Coulomb, and Einstein".
The use of retarded potentials to describe electromagnetic fields from source-charges is an expression of relativistic electromagnetism.
Principle
The question of how an electric field
Document 2:::
In physics, the electromagnetic dual concept is based on the idea that, in the static case, electromagnetism has two separate facets: electric fields and magnetic fields. Expressions in one of these will have a directly analogous, or dual, expression in the other. The reason for this can ultimately be traced to special relativity, where applying the Lorentz transformation to the electric field will transform it into a magnetic field. These are special cases of duality in mathematics.
The electric field () is the dual of the magnetic field ().
The electric displacement field () is the dual of the magnetic flux density ().
Faraday's law of induction is the dual of Ampère's circuital law.
Gauss's law for electric field is the dual of Gauss's law for magnetism.
The electric potential is the dual of the magnetic potential.
Permittivity is the dual of permeability.
Electrostriction is the dual of magnetostriction.
Piezoelectricity is the dual of piezomagnetism.
Ferroelectricity is the dual of ferromagnetism.
An electrostatic motor is the dual of a magnetic motor;
Electrets are the dual of permanent magnets;
The Faraday effect is the dual of the Kerr effect;
The Aharonov–Casher effect is the dual to the Aharonov–Bohm effect;
The hypothetical magnetic monopole is the dual of electric charge.
See also
Maxwell's equations
Duality (electrical circuits)
List of dualities
Electromagnetism
Duality theories
Document 3:::
In physics, a field is a physical quantity, represented by a scalar, vector, or tensor, that has a value for each point in space and time. For example, on a weather map, the surface temperature is described by assigning a number to each point on the map; the temperature can be considered at a certain point in time or over some interval of time, to study the dynamics of temperature change. A surface wind map, assigning an arrow to each point on a map that describes the wind speed and direction at that point, is an example of a vector field, i.e. a 1-dimensional (rank-1) tensor field. Field theories, mathematical descriptions of how field values change in space and time, are ubiquitous in physics. For instance, the electric field is another rank-1 tensor field, while electrodynamics can be formulated in terms of two interacting vector fields at each point in spacetime, or as a single-rank 2-tensor field.
In the modern framework of the quantum theory of fields, even without referring to a test particle, a field occupies space, contains energy, and its presence precludes a classical "true vacuum". This has led physicists to consider electromagnetic fields to be a physical entity, making the field concept a supporting paradigm of the edifice of modern physics. "The fact that the electromagnetic field can possess momentum and energy makes it very real ... a particle makes a field, and a field acts on another particle, and the field has such familiar properties as energy content and momentum, just as particles can have." In practice, the strength of most fields diminishes with distance, eventually becoming undetectable. For instance the strength of many relevant classical fields, such as the gravitational field in Newton's theory of gravity or the electrostatic field in classical electromagnetism, is inversely proportional to the square of the distance from the source (i.e., they follow Gauss's law).
A field can be classified as a scalar field, a vector field, a spinor f
Document 4:::
The electromagnetic wave equation is a second-order partial differential equation that describes the propagation of electromagnetic waves through a medium or in a vacuum. It is a three-dimensional form of the wave equation. The homogeneous form of the equation, written in terms of either the electric field E or the magnetic field B, takes the form:

(v_ph²∇² − ∂²/∂t²)E = 0
(v_ph²∇² − ∂²/∂t²)B = 0

where v_ph = 1/√(με) is the speed of light (i.e. phase velocity) in a medium with permeability μ and permittivity ε, and ∇² is the Laplace operator. In a vacuum, v_ph = c0 = 299,792,458 m/s, a fundamental physical constant. The electromagnetic wave equation derives from Maxwell's equations. In most older literature, B is called the magnetic flux density or magnetic induction. The following equations predicate that any electromagnetic wave must be a transverse wave, where the electric field E and the magnetic field B are both perpendicular to the direction of wave propagation.
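A minimal check that the vacuum phase velocity 1/√(μ0·ε0) reproduces the speed of light; the SI constant values are standard, quoted to the precision shown.

```python
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m (pre-2019 exact SI value)
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
print(1.0 / math.sqrt(mu0 * eps0))  # ~299792458 m/s
```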
The origin of the electromagnetic wave equation
In his 1865 paper titled A Dynamical Theory of the Electromagnetic Field, James Clerk Maxwell utilized the correction to Ampère's circuital law that he had made in part III of his 1861 paper On Physical Lines of Force. In Part VI of his 1864 paper titled Electromagnetic Theory of Light, Maxwell combined displacement current with some of the other equations of electromagnetism and he obtained a wave equation with a speed equal to the speed of light. He commented:
The agreement of the results seems to show that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electromagnetic laws.
Maxwell's derivation of the electromagnetic wave equation has been replaced in modern physics education by a much less cumbersome method involving combining the corrected version of Ampère's circuital law with Faraday's law of induction.
To obtain the electromagnetic wave equation in a vacuum using the modern method, we begin with the modern 'Heaviside' form of Maxwell's
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The electric and magnetic fields are closely related and propagate as what?
A. thermal energy
B. electromagnetic wave
C. sound wave
D. mechanical wave
Answer:
|
|
sciq-2669
|
multiple_choice
|
What occurs after gametes fuse and form a diploid zygote?
|
[
"reproduction",
"meiosis",
"electrolysis",
"transcription"
] |
B
|
Relevant Documents:
Document 0:::
Gametogenesis is a biological process by which diploid or haploid precursor cells undergo cell division and differentiation to form mature haploid gametes. Depending on the biological life cycle of the organism, gametogenesis occurs by meiotic division of diploid gametocytes into various gametes, or by mitosis. For example, plants produce gametes through mitosis in gametophytes. The gametophytes grow from haploid spores after sporic meiosis. The existence of a multicellular, haploid phase in the life cycle between meiosis and gametogenesis is also referred to as alternation of generations.
Gametogenesis is thus the biological process in which haploid or diploid precursor cells divide and differentiate into mature haploid gametes. Depending on an organism's biological life cycle, it can take place either through mitosis or through meiotic division of diploid gametocytes into different gametes. For instance, gametophytes in plants undergo mitosis to produce gametes. Male and female gametogenesis take different forms.
In animals
Animals produce gametes directly through meiosis from diploid mother cells in organs called gonads (testis in males and ovaries in females). In mammalian germ cell development, sexually dimorphic gametes differentiate from primordial germ cells, which arise from pluripotent cells during initial mammalian development. Males and females of a species that reproduce sexually have different forms of gametogenesis:
spermatogenesis (male): Immature germ cells are produced in a man's testes. To mature into sperm, males' immature germ cells, or spermatogonia, go through spermatogenesis during adolescence. Spermatogonia are diploid cells that become larger as they divide through mitosis and become primary spermatocytes. These diploid cells undergo meiotic division to create secondary spermatocytes, which undergo a second meiotic division to produce immature sperm, or spermatids. These spermatids undergo spermiogenesis in order to develop into sperm. LH, FSH, GnRH
Document 1:::
Sexual reproduction is a type of reproduction that involves a complex life cycle in which a gamete (a haploid reproductive cell, such as a sperm or egg cell) with a single set of chromosomes combines with another gamete to produce a zygote that develops into an organism composed of cells with two sets of chromosomes (diploid). This is typical in animals, though the number of chromosome sets and how that number changes in sexual reproduction varies, especially among plants, fungi, and other eukaryotes.
Sexual reproduction is the most common life cycle in multicellular eukaryotes, such as animals, fungi and plants. Sexual reproduction also occurs in some unicellular eukaryotes. Sexual reproduction does not occur in prokaryotes, unicellular organisms without cell nuclei, such as bacteria and archaea. However, some processes in bacteria, including bacterial conjugation, transformation and transduction, may be considered analogous to sexual reproduction in that they incorporate new genetic information. Some proteins and other features that are key for sexual reproduction may have arisen in bacteria, but sexual reproduction is believed to have developed in an ancient eukaryotic ancestor.
In eukaryotes, diploid precursor cells divide to produce haploid cells in a process called meiosis. In meiosis, DNA is replicated to produce a total of four copies of each chromosome. This is followed by two cell divisions to generate haploid gametes. After the DNA is replicated in meiosis, the homologous chromosomes pair up so that their DNA sequences are aligned with each other. During this period before cell divisions, genetic information is exchanged between homologous chromosomes in genetic recombination. Homologous chromosomes contain highly similar but not identical information, and by exchanging similar but not identical regions, genetic recombination increases genetic diversity among future generations.
During sexual reproduction, two haploid gametes combine into one diploid ce
Document 2:::
Microgametogenesis is the process in plant reproduction where a microgametophyte develops in a pollen grain to the three-celled stage of its development. In flowering plants it occurs with a microspore mother cell inside the anther of the plant.
When the microgametophyte is first formed inside the pollen grain four sets of fertile cells called sporogenous cells are apparent. These cells are surrounded by a wall of sterile cells called the tapetum, which supplies food to the cell and eventually becomes the cell wall for the pollen grain. These sets of sporogenous cells eventually develop into diploid microspore mother cells. These microspore mother cells, also called microsporocytes, then undergo meiosis and become four microspore haploid cells. These new microspore cells then undergo mitosis and form a tube cell and a generative cell. The generative cell then undergoes mitosis one more time to form two male gametes, also called sperm.
See also
Gametogenesis
Document 3:::
Fertilisation or fertilization (see spelling differences), also known as generative fertilisation, syngamy and impregnation, is the fusion of gametes to give rise to a new individual organism or offspring and initiate its development. While processes such as insemination or pollination which happen before the fusion of gametes are also sometimes informally referred to as fertilisation, these are technically separate processes. The cycle of fertilisation and development of new individuals is called sexual reproduction. During double fertilisation in angiosperms the haploid male gamete combines with two haploid polar nuclei to form a triploid primary endosperm nucleus by the process of vegetative fertilisation.
History
In Antiquity, Aristotle conceived of the formation of new individuals through the fusion of male and female fluids, with form and function emerging gradually, in a mode he called epigenetic.
In 1784, Spallanzani established the need of interaction between the female's ovum and male's sperm to form a zygote in frogs. In 1827, von Baer observed a therian mammalian egg for the first time. Oscar Hertwig (1876), in Germany, described the fusion of nuclei of spermatozoa and of ova from sea urchin.
Evolution
The evolution of fertilisation is related to the origin of meiosis, as both are part of sexual reproduction, originated in eukaryotes. One theory states that meiosis originated from mitosis.
Fertilisation in plants
The gametes that participate in fertilisation of plants are the sperm (male) and the egg (female) cell. Various families of plants have differing methods by which the gametes produced by the male and female gametophytes come together and are fertilised. In Bryophyte land plants, fertilisation of the sperm and egg takes place within the archegonium. In seed plants, the male gametophyte is called a pollen grain. After pollination, the pollen grain germinates, and a pollen tube grows and penetrates the ovule through a tiny pore called a mic
Document 4:::
Karyogamy is the final step in the process of fusing together two haploid eukaryotic cells, and refers specifically to the fusion of the two nuclei. Before karyogamy, each haploid cell has one complete copy of the organism's genome. In order for karyogamy to occur, the cell membrane and cytoplasm of each cell must fuse with the other in a process known as plasmogamy. Once within the joined cell membrane, the nuclei are referred to as pronuclei. Once the cell membranes, cytoplasm, and pronuclei fuse, the resulting single cell is diploid, containing two copies of the genome. This diploid cell, called a zygote or zygospore can then enter meiosis (a process of chromosome duplication, recombination, and division, to produce four new haploid cells), or continue to divide by mitosis. Mammalian fertilization uses a comparable process to combine haploid sperm and egg cells (gametes) to create a diploid fertilized egg.
The term karyogamy comes from the Greek karyo- (from κάρυον karyon) 'nut' and γάμος gamos 'marriage'.
Importance in haploid organisms
Haploid organisms such as fungi, yeast, and algae can have complex cell cycles, in which the choice between sexual or asexual reproduction is fluid, and often influenced by the environment. Some organisms, in addition to their usual haploid state, can also exist as diploid for a short time, allowing genetic recombination to occur. Karyogamy can occur within either mode of reproduction: during the sexual cycle or in somatic (non-reproductive) cells.
Thus, karyogamy is the key step in bringing together two sets of different genetic material which can recombine during meiosis. In haploid organisms that lack sexual cycles, karyogamy can also be an important source of genetic variation during the process of forming somatic diploid cells. Formation of somatic diploids circumvents the process of gamete formation during the sexual reproduction cycle and instead creates variation within the somatic cells of an already developed organ
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What occurs after gametes fuse and form a diploid zygote?
A. reproduction
B. meiosis
C. electrolysis
D. transcription
Answer:
|
|
sciq-3388
|
multiple_choice
|
What begins approximately six weeks after fertilization in an embryo?
|
[
"ossification",
"elongation",
"calcification",
"incubation"
] |
A
|
Relevant Documents:
Document 0:::
Human embryonic development, or human embryogenesis, is the development and formation of the human embryo. It is characterised by the processes of cell division and cellular differentiation of the embryo that occurs during the early stages of development. In biological terms, the development of the human body entails growth from a one-celled zygote to an adult human being. Fertilization occurs when the sperm cell successfully enters and fuses with an egg cell (ovum). The genetic material of the sperm and egg then combine to form the single cell zygote and the germinal stage of development commences. Embryonic development in the human, covers the first eight weeks of development; at the beginning of the ninth week the embryo is termed a fetus.
These eight weeks are divided into 23 stages.
Human embryology is the study of this development during the first eight weeks after fertilization. The normal period of gestation (pregnancy) is about nine months or 40 weeks.
The germinal stage refers to the time from fertilization through the development of the early embryo until implantation is completed in the uterus. The germinal stage takes around 10 days. During this stage, the zygote begins to divide, in a process called cleavage. A blastocyst is then formed and implants in the uterus. Embryogenesis continues with the next stage of gastrulation, when the three germ layers of the embryo form in a process called histogenesis, and the processes of neurulation and organogenesis follow.
In comparison to the embryo, the fetus has more recognizable external features and a more complete set of developing organs. The entire process of embryogenesis involves coordinated spatial and temporal changes in gene expression, cell growth and cellular differentiation. A nearly identical process occurs in other species, especially among chordates.
Germinal stage
Fertilization
Fertilization takes place when the spermatozoon has successfully entered the ovum and the two sets of genetic material carried b
Document 1:::
Development of the human body is the process of growth to maturity. The process begins with fertilization, where an egg released from the ovary of a female is penetrated by a sperm cell from a male. The resulting zygote develops through mitosis and cell differentiation, and the resulting embryo then implants in the uterus, where the embryo continues development through a fetal stage until birth. Further growth and development continues after birth, and includes both physical and psychological development that is influenced by genetic, hormonal, environmental and other factors. This continues throughout life: through childhood and adolescence into adulthood.
Before birth
Development before birth, or prenatal development () is the process in which a zygote, and later an embryo, and then a fetus develops during gestation. Prenatal development starts with fertilization and the formation of the zygote, the first stage in embryonic development which continues in fetal development until birth.
Fertilization
Fertilization occurs when the sperm successfully enters the ovum's membrane. The chromosomes of the sperm are passed into the egg to form a unique genome. The egg becomes a zygote and the germinal stage of embryonic development begins. The germinal stage refers to the time from fertilization, through the development of the early embryo, up until implantation. The germinal stage is over at about 10 days of gestation.
The zygote contains a full complement of genetic material with all the biological characteristics of a single human being, and develops into the embryo. Embryonic development has four stages: the morula stage, the blastula stage, the gastrula stage, and the neurula stage. Prior to implantation, the embryo remains in a protein shell, the zona pellucida, and undergoes a series of rapid mitotic cell divisions called cleavage. A week after fertilization the embryo still has not grown in size, but hatches from the zona pellucida and adheres to the lining o
Document 2:::
In embryology, Carnegie stages are a standardized system of 23 stages used to provide a unified developmental chronology of the vertebrate embryo.
The stages are delineated through the development of structures, not by size or the number of days of development, and so the chronology can vary between species, and to a certain extent between embryos. In the human being only the first 60 days of development are covered; at that point, the term embryo is usually replaced with the term fetus.
It was based on work by Streeter (1942) and O'Rahilly and Müller (1987). The name "Carnegie stages" comes from the Carnegie Institution of Washington.
While the Carnegie stages provide a universal system for staging and comparing the embryonic development of most vertebrates, other systems are occasionally used for the common model organisms in developmental biology, such as the Hamburger–Hamilton stages in the chick.
Stages
Days are approximate and reflect the days since the last ovulation before pregnancy ("Postovulatory age").
Stage 1: 1 day
fertilization
polar bodies
Carnegie stage 1 is the unicellular embryo. This stage is divided into three substages.
Stage 1 a
Primordial embryo. All the genetic material necessary for a new individual, along with some redundant chromosomes, is present within a single plasmalemma. Penetration of the fertilising sperm allows the oocyte to resume meiosis and the polar body is extruded.
Stage 1 b
Pronuclear embryo. Two separate haploid components are present - the maternal and paternal pronuclei. The pronuclei move towards each other and eventually compress their envelopes where they lie adjacent near the centre of the cell.
Stage 1 c
Syngamic embryo. The last phase of fertilisation. The pronuclear envelopes disappear and the parental chromosomes come together in a process called syngamy.
Stage 2: 2-3 days
cleavage
morula
compaction
Carnegie stage 2 begins when the zygote undergoes its first cell division, and ends when the blas
Document 3:::
In biology, a blastomere is a type of cell produced by cell division (cleavage) of the zygote after fertilization; blastomeres are an essential part of blastula formation, and blastocyst formation in mammals.
Human blastomere characteristics
In humans, blastomere formation begins immediately following fertilization and continues through the first week of embryonic development. About 90 minutes after fertilization, the zygote divides into two cells. The two-cell blastomere state, present after the zygote first divides, is considered the earliest mitotic product of the fertilized oocyte. These mitotic divisions continue and result in a grouping of cells called blastomeres. During this process, the total size of the embryo does not increase, so each division results in smaller and smaller cells. When the zygote contains 16 to 32 blastomeres it is referred to as a morula. These are the preliminary stages in the embryo beginning to form. Once this begins, microtubules within the morula's cytosolic material in the blastomere cells can develop into important membrane functions, such as sodium pumps. These pumps allow the inside of the embryo to fill with blastocoelic fluid, which supports the further growth of life.
The blastomere is considered totipotent; that is, blastomeres are capable of developing from a single cell into a fully fertile adult organism. This has been demonstrated through studies and conjectures made with mouse blastomeres, which have been accepted as true for most mammalian blastomeres as well. Studies have analyzed monozygotic twin mouse blastomeres in their two-cell state, and have found that when one of the twin blastomeres is destroyed, a fully fertile adult mouse can still develop. Thus, it can be assumed that since one of the twin cells was totipotent, the destroyed one originally was as well.
Relative blastomere size within the embryo is dependent not only on the stage of the cleavage, but also on the regularity of the cleavage amongst t
Document 4:::
In developmental biology, animal embryonic development, also known as animal embryogenesis, is the developmental stage of an animal embryo. Embryonic development starts with the fertilization of an egg cell (ovum) by a sperm cell, (spermatozoon). Once fertilized, the ovum becomes a single diploid cell known as a zygote. The zygote undergoes mitotic divisions with no significant growth (a process known as cleavage) and cellular differentiation, leading to development of a multicellular embryo after passing through an organizational checkpoint during mid-embryogenesis. In mammals, the term refers chiefly to the early stages of prenatal development, whereas the terms fetus and fetal development describe later stages.
The main stages of animal embryonic development are as follows:
The zygote undergoes a series of cell divisions (called cleavage) to form a structure called a morula.
The morula develops into a structure called a blastula through a process called blastulation.
The blastula develops into a structure called a gastrula through a process called gastrulation.
The gastrula then undergoes further development, including the formation of organs (organogenesis).
The embryo then transforms into the next stage of development, the nature of which varies between different animal species (examples of possible next stages include a fetus and a larva).
Fertilization and the zygote
The egg cell is generally asymmetric, having an animal pole (future ectoderm) and a vegetal pole (future endoderm).
It is covered with protective envelopes, with different layers. The first envelope – the one in contact with the membrane of the egg – is made of glycoproteins and is known as the vitelline membrane (zona pellucida in mammals). Different taxa show different cellular and acellular envelopes englobing the vitelline membrane.
Fertilization is the fusion of gametes to produce a new organism. In animals, the process involves a sperm fusing with an ovum, which eventually leads to the development of an embryo. Depen
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What begins approximately six weeks after fertilization in an embryo?
A. ossification
B. elongation
C. calcification
D. incubation
Answer:
|
|
scienceQA-3242
|
multiple_choice
|
What do these two changes have in common?
melting wax
dust settling out of the air
|
[
"Both are caused by heating.",
"Both are only physical changes.",
"Both are chemical changes.",
"Both are caused by cooling."
] |
B
|
Step 1: Think about each change.
Melting wax is a change of state. So, it is a physical change. The wax changes from solid to liquid. But it is still made of the same type of matter.
Dust settling out of the air is a physical change. As the dust settles, or falls, it might land on furniture or the ground. This separates dust particles from the air, but does not form a different type of matter.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are physical changes. No new matter is created.
Both are chemical changes.
Both changes are physical changes. They are not chemical changes.
Both are caused by heating.
Wax melting is caused by heating. But dust settling out of the air is not.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relevant Documents:
Document 0:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
In physics, a dynamical system is said to be mixing if the phase space of the system becomes strongly intertwined, according to at least one of several mathematical definitions. For example, a measure-preserving transformation T is said to be strong mixing if

lim(n→∞) μ(T⁻ⁿA ∩ B) = μ(A)·μ(B)

whenever A and B are any measurable sets and μ is the associated measure. Other definitions are possible, including weak mixing and topological mixing.
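A minimal numerical sketch of this definition for the doubling map T(x) = 2x mod 1 on [0, 1) with Lebesgue measure, a standard strong-mixing example chosen here for illustration (it is not taken from the passage). With A = B = [0, 1/2), the estimated measure of T⁻ⁿA ∩ B should settle at μ(A)·μ(B) = 0.25:

```python
import random

def doubling(x, n):
    """Apply the doubling map T(x) = 2x mod 1 n times."""
    for _ in range(n):
        x = (2.0 * x) % 1.0
    return x

random.seed(0)
points = [random.random() for _ in range(100_000)]
for n in (0, 1, 5, 10):
    # x lies in T^-n(A) ∩ B  iff  x < 0.5 and T^n(x) < 0.5
    hits = sum(1 for x in points if x < 0.5 and doubling(x, n) < 0.5)
    print(n, hits / len(points))  # 0.5 at n = 0, then roughly 0.25
```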
The mathematical definition of mixing is meant to capture the notion of physical mixing. A canonical example is the Cuba libre: suppose one is adding rum (the set A) to a glass of cola. After stirring the glass, the bottom half of the glass (the set B) will contain rum, and it will be in equal proportion as it is elsewhere in the glass. The mixing is uniform: no matter which region B one looks at, some of A will be in that region. A far more detailed, but still informal description of mixing can be found in the article on mixing (mathematics).
Every mixing transformation is ergodic, but there are ergodic transformations which are not mixing.
Physical mixing
The mixing of gases or liquids is a complex physical process, governed by a convective diffusion equation that may involve non-Fickian diffusion as in spinodal decomposition. The convective portion of the governing equation contains fluid motion terms that are governed by the Navier–Stokes equations. When fluid properties such as viscosity depend on composition, the governing equations may be coupled. There may also be temperature effects. It is not clear that fluid mixing processes are mixing in the mathematical sense.
Small rigid objects (such as rocks) are sometimes mixed in a rotating drum or tumbler. The 1969 Selective Service draft lottery was carried out by mixing plastic capsules which contained a slip of paper (marked with a day of the year).
See also
Miscibility
Document 3:::
The Stefan flow, occasionally called Stefan's flow, is a transport phenomenon concerning the movement of a chemical species by a flowing fluid (typically in the gas phase) that is induced to flow by the production or removal of the species at an interface. Any process that adds the species of interest to or removes it from the flowing fluid may cause the Stefan flow, but the most common processes include evaporation, condensation, chemical reaction, sublimation, ablation, adsorption, absorption, and desorption. It was named after the Slovenian physicist, mathematician, and poet Josef Stefan for his early work on calculating evaporation rates.
The Stefan flow is distinct from diffusion as described by Fick's law, but diffusion almost always also occurs in multi-species systems that are experiencing the Stefan flow. In systems undergoing one of the species addition or removal processes mentioned previously, the addition or removal generates a mean flow in the flowing fluid as the fluid next to the interface is displaced by the production or removal of additional fluid by the processes occurring at the interface. The transport of the species by this mean flow is the Stefan flow. When concentration gradients of the species are also present, diffusion transports the species relative to the mean flow. The total transport rate of the species is then given by a summation of the Stefan flow and diffusive contributions.
An example of the Stefan flow occurs when a droplet of liquid evaporates in air. In this case, the vapor/air mixture surrounding the droplet is the flowing fluid, and liquid/vapor boundary of the droplet is the interface. As heat is absorbed by the droplet from the environment, some of the liquid evaporates into vapor at the surface of the droplet, and flows away from the droplet as it is displaced by additional vapor evaporating from the droplet. This process causes the flowing medium to move away from the droplet at some mean speed that is dependent on
Document 4:::
A combustible material is a material that can burn (i.e., sustain a flame) in air under certain conditions. A material is flammable if it ignites easily at ambient temperatures. In other words, a combustible material ignites with some effort and a flammable material catches fire immediately on exposure to flame.
The degree of flammability in air depends largely upon the volatility of the material - this is related to its composition-specific vapour pressure, which is temperature dependent. The quantity of vapour produced can be enhanced by increasing the surface area of the material forming a mist or dust. Take wood as an example. Finely divided wood dust can undergo explosive combustion and produce a blast wave. A piece of paper (made from wood) catches on fire quite easily. A heavy oak desk is much harder to ignite, even though the wood fibre is the same in all three materials.
Common sense (and indeed scientific consensus until the mid-1700s) would seem to suggest that material "disappears" when burned, as only the ash is left. In fact, there is an increase in weight because the flammable material reacts (or combines) chemically with oxygen, which also has mass. The original mass of flammable material and the mass of the oxygen required for combustion equals the mass of the combustion products (ash, water, carbon dioxide, and other gases). Antoine Lavoisier, one of the pioneers in these early insights, stated that Nothing is lost, nothing is created, everything is transformed, which would later be known as the law of conservation of mass. Lavoisier used the experimental fact that some metals gained mass when they burned to support his ideas.
Definitions
Historically, flammable, inflammable and combustible meant capable of burning. The word "inflammable" came through French from the Latin inflammāre = "to set fire to", where the Latin preposition "in-" means "in" as in "indoctrinate", rather than "not" as in "invisible" and "ineligible".
The word "inflammable" may be er
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
melting wax
dust settling out of the air
A. Both are caused by heating.
B. Both are only physical changes.
C. Both are chemical changes.
D. Both are caused by cooling.
Answer:
|
sciq-4021
|
multiple_choice
|
In prokaryotes, what is composed of a single, double-stranded DNA molecule in the form of a loop or circle?
|
[
"allele",
"rNA",
"chromosomes",
"genome"
] |
D
|
Relevant Documents:
Document 0:::
What Is Life? The Physical Aspect of the Living Cell is a 1944 science book written for the lay reader by physicist Erwin Schrödinger. The book was based on a course of public lectures delivered by Schrödinger in February 1943, under the auspices of the Dublin Institute for Advanced Studies, where he was Director of Theoretical Physics, at Trinity College, Dublin. The lectures attracted an audience of about 400, who were warned "that the subject-matter was a difficult one and that the lectures could not be termed popular, even though the physicist’s most dreaded weapon, mathematical deduction, would hardly be utilized." Schrödinger's lecture focused on one important question: "how can the events in space and time which take place within the spatial boundary of a living organism be accounted for by physics and chemistry?"
In the book, Schrödinger introduced the idea of an "aperiodic crystal" that contained genetic information in its configuration of covalent chemical bonds. In the 1950s, this idea stimulated enthusiasm for discovering the chemical basis of genetic inheritance. Although the existence of some form of hereditary information had been hypothesized since 1869, its role in reproduction and its helical shape were still unknown at the time of Schrödinger's lecture. In retrospect, Schrödinger's aperiodic crystal can be viewed as a well-reasoned theoretical prediction of what biologists should have been looking for during their search for genetic material. In 1953, James D. Watson and Francis Crick jointly proposed the double helix structure of deoxyribonucleic acid (DNA) on the basis of, amongst other theoretical insights, X-ray diffraction experiments conducted by Rosalind Franklin. They both credited Schrödinger's book with presenting an early theoretical description of how the storage of genetic information would work, and each independently acknowledged the book as a source of inspiration for their initial researches.
Background
The book, published i
Document 1:::
Biomolecular structure is the intricate folded, three-dimensional shape that is formed by a molecule of protein, DNA, or RNA, and that is important to its function. The structure of these molecules may be considered at any of several length scales ranging from the level of individual atoms to the relationships among entire protein subunits. This useful distinction among scales is often expressed as a decomposition of molecular structure into four levels: primary, secondary, tertiary, and quaternary. The scaffold for this multiscale organization of the molecule arises at the secondary level, where the fundamental structural elements are the molecule's various hydrogen bonds. This leads to several recognizable domains of protein structure and nucleic acid structure, including such secondary-structure features as alpha helixes and beta sheets for proteins, and hairpin loops, bulges, and internal loops for nucleic acids.
The terms primary, secondary, tertiary, and quaternary structure were introduced by Kaj Ulrik Linderstrøm-Lang in his 1951 Lane Medical Lectures at Stanford University.
Primary structure
The primary structure of a biopolymer is the exact specification of its atomic composition and the chemical bonds connecting those atoms (including stereochemistry). For a typical unbranched, un-crosslinked biopolymer (such as a molecule of a typical intracellular protein, or of DNA or RNA), the primary structure is equivalent to specifying the sequence of its monomeric subunits, such as amino acids or nucleotides.
The primary structure of a protein is reported starting from the amino N-terminus to the carboxyl C-terminus, while the primary structure of a DNA or RNA molecule is known as the nucleic acid sequence and is reported from the 5' end to the 3' end.
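As a minimal sketch (my addition, not part of the source passage), a nucleic acid primary structure can be handled directly as a 5'→3' string; the reverse complement below assumes standard Watson–Crick pairing:

```python
# Minimal sketch: DNA primary structure as a 5'->3' string of nucleotides.
# Watson-Crick pairing: A<->T, G<->C.
PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the complementary strand, itself reported 5'->3' by convention."""
    return "".join(PAIRING[base] for base in reversed(seq))

print(reverse_complement("ATGGCATTC"))  # hypothetical sequence -> GAATGCCAT
```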
The nucleic acid sequence refers to the exact sequence of nucleotides that comprise the whole molecule. Often, the primary structure encodes sequence motifs that are of functional importance. Some examples of such motif
Document 2:::
k = 1
When k = 1, there are four DNA k-mers, i.e., A, T, G, and C. At the molecular level, there
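A short sketch (mine, not from the excerpt): in general there are 4^k possible DNA k-mers, and counting the k-mers actually present in a sequence is a one-liner:

```python
from collections import Counter

def count_kmers(seq: str, k: int) -> Counter:
    """Count every length-k substring (k-mer) occurring in seq."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

print(count_kmers("ATGA", 1))  # Counter({'A': 2, 'T': 1, 'G': 1}); 4**1 = 4 possible 1-mers
print(4 ** 2)                  # 16 possible 2-mers, and so on
```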
Document 3:::
DNA: The Story of Life is a four-part Channel 4 documentary series on the discovery of DNA, broadcast in 2003.
The series was broadcast to mark fifty years since the 1953 discovery. The first episode aired on Saturday, 8 March 2003, at 7 pm.
Episodes
Episode 1 - The Secret of Life
It covered the discovery of the structure of DNA in 1953. Maurice Wilkins, speaking in his university office in London, recalled his involvement with the Manhattan Project; Linus Pauling's son Peter, formerly of Caltech, now lived in Wales. Pauling approached the structure of DNA in a much more methodical, rigid, perhaps plodding manner, and was never one to take the un-thought-through, reckless gambles that Watson and Crick would take; yet it was precisely those ambitious gambles that found the structure of DNA. The episode drew on the 1974 BBC documentary The Race for the Double Helix. Watson attended a lecture on the latest X-ray data on DNA in London in November 1951, and the Cambridge project produced its first DNA model on 28 November 1951. Sir John Randall, head of the London project, telephoned Lawrence Bragg in Cambridge to express his displeasure at how Watson and Crick had borrowed London's X-ray data on DNA structure; the chastened pair were removed from their work on DNA structure at Cambridge. At the London project, meanwhile, progress was often undermined by frosty, wooden relationships and a complete lack of human empathy, as Raymond Gosling believed. At Cambridge, biochemist Erwin Chargaff, of Columbia University, had dinner with Watson and Crick and, although he largely disliked the pair, explained his Chargaff's rules to them: equal amounts of adenine and thymine had been found, which applied to all living cells. Linus Pauling wrote to Wilkins asking for recent X-ray photographs, but was unlucky. On 6 May 1952, the London project took Photo 51, which indicated a helix structure; in December 1952, Linus Pa
Document 4:::
A DNA machine is a molecular machine constructed from DNA. Research into DNA machines was pioneered in the late 1980s by Nadrian Seeman and co-workers from New York University. DNA is used because of the numerous biological tools already found in nature that can affect DNA, and the immense knowledge of how DNA works previously researched by biochemists.
DNA machines can be logically designed since DNA assembly of the double helix is based on strict rules of base pairing that allow portions of the strand to be predictably connected based on their sequence. This "selective stickiness" is a key advantage in the construction of DNA machines.
An example of a DNA machine was reported by Bernard Yurke and co-workers at Lucent Technologies in the year 2000, who constructed molecular tweezers out of DNA.
The DNA tweezers contain three strands: A, B and C. Strand A latches onto half of strand B and half of strand C, and so it joins them all together. Strand A acts as a hinge so that the two "arms" (AB and AC) can move. The structure floats with its arms open wide. They can be pulled shut by adding a fourth strand of DNA (D) "programmed" to stick to both of the dangling, unpaired sections of strands B and C. The closing of the tweezers was proven by tagging strand A at either end with light-emitting molecules that do not emit light when they are close together. To re-open the tweezers, add a further strand (E) with the right sequence to pair up with strand D. Once paired up, D and E have no connection to the machine BAC, so they float away. The DNA machine can be opened and closed repeatedly by cycling between strands D and E. These tweezers can be used for removing drugs from inside fullerenes as well as from a self-assembled DNA tetrahedron. The state of the device can be determined by measuring the separation between donor and acceptor fluorophores using FRET.
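The open/close cycle maps naturally onto a two-state model. The following is a toy abstraction of my own (not Yurke's actual analysis): fuel strand D closes the device, and anti-fuel strand E reopens it by strand displacement, ejecting one DE waste duplex per cycle.

```python
# Toy two-state model of the DNA tweezers cycle (illustrative abstraction only).
class DNATweezers:
    def __init__(self) -> None:
        self.state = "open"   # arms AB and AC float apart
        self.waste = 0        # DE waste duplexes released so far

    def add_fuel_d(self) -> None:
        if self.state == "open":
            self.state = "closed"   # D hybridizes to the B and C overhangs

    def add_antifuel_e(self) -> None:
        if self.state == "closed":
            self.state = "open"     # E strips D off via strand displacement
            self.waste += 1         # the DE duplex floats away

tw = DNATweezers()
for _ in range(3):                  # the device can be cycled repeatedly
    tw.add_fuel_d()
    tw.add_antifuel_e()
print(tw.state, tw.waste)           # open 3
```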
DNA walkers are another type of DNA machine.
See also
DNA nanotechnology
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In prokaryotes, what is composed of a single, double-stranded DNA molecule in the form of a loop or circle?
A. allele
B. RNA
C. chromosomes
D. genome
Answer:
|
|
sciq-9189
|
multiple_choice
|
Electrons flow through wires to create what?
|
[
"balanced reaction",
"electric current",
"hydroelectric power",
"electromagnetism"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
1. increases
2. decreases
3. stays the same
4. impossible to tell / need more information
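A one-line derivation (my addition; standard first-law reasoning) shows why choice 2 is correct for an ideal gas:

```latex
% Adiabatic: \delta Q = 0, so the first law gives
dU = \delta Q - p\,dV = -p\,dV < 0 \quad (\text{since } dV > 0 \text{ on expansion})
% For an ideal gas U depends only on T:
dU = n C_V\,dT \;\Rightarrow\; dT = -\frac{p\,dV}{n C_V} < 0
```

The gas does work at the expense of its internal energy, so its temperature falls.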
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to serve as a proxy for a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. It consists of two sections: a 35-question multiple-choice section and a 3-question free-response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 2:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just as in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 3:::
The History and Present State of Electricity (1767), by eighteenth-century British polymath Joseph Priestley, is a survey of the study of electricity up until 1766, as well as a description of experiments by Priestley himself.
Background
Priestley became interested in electricity while he was teaching at Warrington Academy. Friends introduced him to the major British experimenters in the field: John Canton, William Watson, and Benjamin Franklin. These men encouraged Priestley to perform the experiments he was writing about in his history; they believed that he could better describe the experiments if he had performed them himself. In the process of replicating others' experiments, however, Priestley became intrigued by the still unanswered questions regarding electricity and was prompted to design and undertake his own experiments.
Priestley possessed an electrical machine designed by Edward Nairne. With his brother Timothy he designed and constructed his own machines (see Timothy Priestley § Scientific apparatus).
Contents
The first half of the 700-page book is a history of the study of electricity. It is divided into ten periods, starting with early experiments "prior to those of Mr. Hawkesbee" and finishing with various experiments and discoveries made after Franklin's own. The book places Franklin's work at its focus, a choice that was criticised by contemporary scholars, especially in France and Germany.
The second and more influential half contains a description of contemporary theories about electricity and suggestions for future research. Priestley also wrote about the construction and use of electrical machines, basic electrical experiments and "practical maxims for the use of young electricians". In the second edition, Priestley added some of his own discoveries, such as the conductivity of charcoal. This discovery overturned what he termed "one of the earliest and universally received maxims of electricity," that only water and metals could conduct electri
Document 4:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Electrons flow through wires to create what?
A. balanced reaction
B. electric current
C. hydroelectric power
D. electromagnetism
Answer:
|
|
sciq-7136
|
multiple_choice
|
The first reported nuclear fission occurred in 1939 when three German scientists bombarded uranium-235 atoms with slow-moving what?
|
[
"isotopes",
"neutrons",
"electrons",
"protons"
] |
B
|
Relevant Documents:
Document 0:::
Nuclear fission was discovered in December 1938 by chemists Otto Hahn and Fritz Strassmann and physicists Lise Meitner and Otto Robert Frisch. Fission is a nuclear reaction or radioactive decay process in which the nucleus of an atom splits into two or more smaller, lighter nuclei and often other particles. The fission process often produces gamma rays and releases a very large amount of energy, even by the energetic standards of radioactive decay. Scientists already knew about alpha decay and beta decay, but fission assumed great importance because the discovery that a nuclear chain reaction was possible led to the development of nuclear power and nuclear weapons. Hahn was awarded the 1944 Nobel Prize in Chemistry for the discovery of nuclear fission.
Hahn and Strassmann at the Kaiser Wilhelm Institute for Chemistry in Berlin bombarded uranium with slow neutrons and discovered that barium had been produced. Hahn suggested a bursting of the nucleus, but he was unsure of what the physical basis for the results was. They reported their findings by mail to Meitner in Sweden, who a few months earlier had fled Nazi Germany. Meitner and her nephew Frisch theorised, and then proved, that the uranium nucleus had been split and published their findings in Nature. Meitner calculated that the energy released by each disintegration was approximately 200 megaelectronvolts, and Frisch confirmed this experimentally. By analogy with the division of biological cells, he named the process "fission".
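The ~200 MeV figure can be reproduced with a back-of-envelope mass-defect estimate. This is my reconstruction of the standard argument, not the excerpt's own text:

```latex
% Mass defect of roughly one fifth of a proton mass per fission event:
\Delta m \approx \tfrac{1}{5}\,m_p
\;\Rightarrow\;
E = \Delta m\,c^2 \approx \tfrac{1}{5}\times 938\ \mathrm{MeV} \approx 190\ \mathrm{MeV}
```

which is consistent with the roughly 200 MeV per disintegration quoted above.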
The discovery came after forty years of investigation into the nature and properties of radioactivity and radioactive substances. The discovery of the neutron by James Chadwick in 1932 created a new means of nuclear transmutation. Enrico Fermi and his colleagues in Rome studied the results of bombarding uranium with neutrons, and Fermi concluded that his experiments had created new elements with 93 and 94 protons, which his group dubbed ausenium and hesperium. Fermi won the 1938 Nobel Prize in Physics
Document 1:::
A natural nuclear fission reactor is a uranium deposit where self-sustaining nuclear chain reactions occur. The conditions under which a natural nuclear reactor could exist were predicted in 1956 by Paul Kuroda. The remnants of an extinct or fossil nuclear fission reactor, where self-sustaining nuclear reactions have occurred in the past, are verified by analysis of isotope ratios of uranium and of the fission products (and the stable daughter nuclides of those fission products). This was first discovered in 1972 in Oklo, Gabon, by Francis Perrin under conditions very similar to Kuroda's predictions.
Oklo is the only location where this phenomenon is known to have occurred, and consists of 16 sites with patches of centimeter-sized ore layers. There, self-sustaining nuclear fission reactions are thought to have taken place approximately 1.7 billion years ago, during the Statherian period of the Paleoproterozoic, and continued for a few hundred thousand years, probably averaging less than 100 kW of thermal power during that time.
History
In May 1972 at the Tricastin uranium enrichment site at Pierrelatte in France, routine mass spectrometry comparing UF6 samples from the Oklo Mine, located in Gabon, showed a discrepancy in the amount of the U-235 isotope. Normally the concentration is 0.72%, while these samples had only 0.60%, a significant difference (some 17% less U-235 was contained in the samples than expected). This discrepancy required explanation, as all civilian uranium handling facilities must meticulously account for all fissionable isotopes to ensure that none are diverted to the construction of nuclear weapons. Furthermore, since fissile material is the reason uranium is mined, a significant amount "going missing" was also of direct economic concern.
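The quoted 17% is simply the relative depletion:

```latex
\frac{0.72\% - 0.60\%}{0.72\%} \;\approx\; 0.167 \;\approx\; 17\%
% i.e. about 17% less U-235 in the Oklo samples than expected
```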
Thus the French Commissariat à l'énergie atomique (CEA) began an investigation. A series of measurements of the relative abundances of the two most significant isotopes of the uranium mined at Oklo showed anomalous
Document 2:::
A calutron is a mass spectrometer originally designed and used for separating the isotopes of uranium. It was developed by Ernest Lawrence during the Manhattan Project and was based on his earlier invention, the cyclotron. Its name was derived from California University Cyclotron, in tribute to Lawrence's institution, the University of California, where it was invented. Calutrons were used in the industrial-scale Y-12 uranium enrichment plant at the Clinton Engineer Works in Oak Ridge, Tennessee. The enriched uranium produced was used in the Little Boy atomic bomb that was detonated over Hiroshima on 6 August 1945.
The calutron is a type of sector mass spectrometer, an instrument in which a sample is ionized and then accelerated by electric fields and deflected by magnetic fields. The ions ultimately collide with a plate and produce a measurable electric current. Since the ions of the different isotopes have the same electric charge but different masses, the heavier isotopes are deflected less by the magnetic field, causing the beam of particles to separate into several beams by mass, striking the plate at different locations. The mass of the ions can be calculated according to the strength of the field and the charge of the ions. During World War II, calutrons were developed to use this principle to obtain substantial quantities of high-purity uranium-235, by taking advantage of the small mass difference between uranium isotopes.
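The mass dependence of the deflection follows from elementary magnetostatics; this is a standard derivation added here for clarity, not spelled out in the excerpt. An ion of charge q accelerated through a potential V and bent by a field B moves on a radius r:

```latex
qV = \tfrac{1}{2} m v^2, \qquad q v B = \frac{m v^2}{r}
\;\Rightarrow\; r = \frac{1}{B}\sqrt{\frac{2 m V}{q}} \;\propto\; \sqrt{m}
% For singly charged uranium ions the two beams differ by only
\frac{r_{238}}{r_{235}} = \sqrt{\tfrac{238}{235}} \approx 1.0064
```

That is a separation of roughly 0.6% in radius, which is why exploiting the small mass difference between uranium isotopes was so demanding.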
Electromagnetic separation for uranium enrichment was abandoned in the post-war period in favor of the more complicated, but more efficient, gaseous diffusion method. Although most of the calutrons of the Manhattan Project were dismantled at the end of the war, some remained in use to produce isotopically enriched samples of naturally occurring elements for military, scientific and medical purposes.
Origins
News of the discovery of nuclear fission by German chemists Otto Hahn and Fritz Strassmann in 1938, and its theoretical explanatio
Document 3:::
Spontaneous fission (SF) is a form of radioactive decay in which a heavy atomic nucleus splits into two or more lighter nuclei. In comparison to induced fission, there is no inciting particle to trigger the decay; it is a purely probabilistic process.
Spontaneous fission is a dominant decay mode for superheavy elements, with nuclear stability generally falling as nuclear mass increases. It thus forms a practical limit to heavy element nucleon number. Heavier nuclides may be created instantaneously by physical processes, both natural (via the r-process) and artificial, though rapidly decay to more stable nuclides. As such, apart from minor decay branches in primordial radionuclides, spontaneous fission is not observed in nature.
Observed fission half-lives range from 4.1 microseconds to greater than the current age of the universe.
History
Following the discovery of induced fission by Otto Hahn and Fritz Strassmann in 1938, Soviet physicists Georgy Flyorov and Konstantin Petrzhak began conducting experiments to explore the effects of incident neutron energy on uranium nuclei. Their equipment recorded fission fragments even when no neutrons were present to induce the decay, and the effect persisted even after the equipment was moved 60 m underground into the tunnels of the Moscow Metro's Dinamo station in an effort to insulate it from the effects of cosmic rays. The discovery of induced fission itself had come as a surprise, and no other mechanism was known that could account for the observed decays. Such an effect could only be explained by spontaneous fission of the uranium nuclei without external influence.
Mechanism
Spontaneous fission arises as a result of competition between the attractive properties of the strong nuclear force and the mutual Coulombic repulsion of the constituent protons. Nuclear binding energy increases in proportion to atomic mass number (A), whereas Coulombic repulsion increases with the square of the proton number (Z). Thus, at high mass an
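In the liquid-drop picture this competition is conventionally summarized by the fissility parameter; this is a standard result, not stated in the excerpt:

```latex
x \;\propto\; \frac{E_{\mathrm{Coulomb}}}{E_{\mathrm{surface}}} \;\propto\; \frac{Z^2}{A}
% Liquid-drop estimates put the limit of stability against spontaneous
% fission at roughly Z^2/A ~ 47-50, which is why the heaviest nuclei
% fission readily while lighter ones effectively never do.
```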
Document 4:::
Gun-type fission weapons are fission-based nuclear weapons whose design assembles their fissile material into a supercritical mass by the use of the "gun" method: shooting one piece of sub-critical material into another. Although this is sometimes pictured as two sub-critical hemispheres driven together to make a supercritical sphere, typically a hollow projectile is shot onto a spike, which fills the hole in its center. Its name is a reference to the fact that it is shooting the material through an artillery barrel as if it were a projectile.
Since it is a relatively slow method of assembly, plutonium cannot be used unless it is purely the 239 isotope. Production of impurity-free plutonium is very difficult and is impractical. The required amount of uranium is relatively large, and thus the overall efficiency is relatively low. The main reason for this is that the uranium metal does not undergo compression (and the resulting density increase) as it does in the implosion design. Instead, gun-type bombs assemble the supercritical mass by amassing such a large quantity of uranium that the overall distance through which daughter neutrons must travel spans so many mean free paths that it becomes very probable most neutrons will find uranium nuclei to collide with before escaping the supercritical mass.
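The mean-free-path argument can be made quantitative; the numbers below are my illustrative estimates, not figures from the passage:

```latex
\lambda = \frac{1}{n\,\sigma_f}, \qquad
n \approx \frac{19\ \mathrm{g\,cm^{-3}}}{235\ \mathrm{g\,mol^{-1}}}\,N_A \approx 4.9\times 10^{22}\ \mathrm{cm^{-3}}
% With a fast-neutron fission cross-section of order 1.2 barn (1.2e-24 cm^2):
\lambda \approx \frac{1}{(4.9\times 10^{22})(1.2\times 10^{-24})} \approx 17\ \mathrm{cm}
```

So a gun-type assembly must span many centimetres of uranium before most daughter neutrons collide rather than escape, which is why so much material is needed.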
The first time gun-type fission weapons were discussed was as part of the British Tube Alloys nuclear bomb development program, the world's first nuclear bomb development program. The British MAUD Report of 1941 laid out how "an effective uranium bomb which, containing some 25 lb of active material, would be equivalent as regards destructive effect to 1,800 tons of T.N.T". The bomb would use the gun-type design "to bring the two halves together at high velocity and it is proposed to do this by firing them together with charges of ordinary explosive in a form of double gun".
The method was applied in four known US programs. First, the "Little Boy" weapon which was detonated over Hiroshima
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The first reported nuclear fission occurred in 1939 when three German scientists bombarded uranium-235 atoms with slow-moving what?
A. isotopes
B. neutrons
C. electrons
D. protons
Answer:
|
|
sciq-9183
|
multiple_choice
|
An alloy is a mixture of what with one or more other substances?
|
[
"metal",
"acid",
"water",
"protein"
] |
A
|
Relevant Documents:
Document 0:::
Material is a substance or mixture of substances that constitutes an object. Materials can be pure or impure, living or non-living matter. Materials can be classified on the basis of their physical and chemical properties, or on their geological origin or biological function. Materials science is the study of materials, their properties and their applications.
Raw materials can be processed in different ways to influence their properties, by purification, shaping or the introduction of other materials. New materials can be produced from raw materials by synthesis.
In industry, materials are inputs to manufacturing processes to produce products or more complex materials.
Historical elements
Materials chart the history of humanity. The system of the three prehistoric ages (Stone Age, Bronze Age, Iron Age) was succeeded by historical ages: the steel age in the 19th century, the polymer age in the middle of the following century (the plastic age) and the silicon age in the second half of the 20th century.
Classification by use
Materials can be broadly categorized in terms of their use, for example:
Building materials are used for construction
Building insulation materials are used to retain heat within buildings
Refractory materials are used for high-temperature applications
Nuclear materials are used for nuclear power and weapons
Aerospace materials are used in aircraft and other aerospace applications
Biomaterials are used for applications interacting with living systems
Material selection is a process to determine which material should be used for a given application.
Classification by structure
The relevant structure of materials has a different length scale depending on the material. The structure and composition of a material can be determined by microscopy or spectroscopy.
Microstructure
In engineering, materials can be categorised according to their microscopic structure:
Plastics: a wide range of synthetic or semi-synthetic materials that use polymers as a main ingred
Document 1:::
Oligocrystalline material has a microstructure consisting of a few coarse grains, often columnar and parallel to the longitudinal ingot axis. This microstructure can be found in ingots produced by electron beam melting (EBM).
Document 2:::
High-entropy alloys (HEAs) are alloys that are formed by mixing equal or relatively large proportions of (usually) five or more elements. Prior to the synthesis of these substances, typical metal alloys comprised one or two major components with smaller amounts of other elements. For example, additional elements can be added to iron to improve its properties, thereby creating an iron-based alloy, but typically in fairly low proportions, such as the proportions of carbon, manganese, and others in various steels. Hence, high-entropy alloys are a novel class of materials. The term "high-entropy alloys" was coined by Taiwanese scientist Jien-Wei Yeh because the entropy increase of mixing is substantially higher when there is a larger number of elements in the mix, and their proportions are more nearly equal. Some alternative names, such as multi-component alloys, compositionally complex alloys and multi-principal-element alloys are also suggested by other researchers.
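Yeh's naming is easy to quantify with the ideal configurational entropy of mixing; this is standard thermodynamics, added here for clarity:

```latex
\Delta S_{\mathrm{mix}} = -R \sum_{i=1}^{N} x_i \ln x_i,
\qquad \text{maximised at } x_i = \tfrac{1}{N}: \quad \Delta S_{\mathrm{mix}} = R \ln N
% Five equiatomic components give R ln 5 ~ 1.61 R,
% versus R ln 2 ~ 0.69 R for an equiatomic binary alloy.
```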
These alloys are currently the focus of significant attention in materials science and engineering because they have potentially desirable properties.
Furthermore, research indicates that some HEAs have considerably better strength-to-weight ratios, with a higher degree of fracture resistance, tensile strength, and corrosion and oxidation resistance than conventional alloys. Although HEAs have been studied since the 1980s, research substantially accelerated in the 2010s.
Development
Although HEAs were considered from a theoretical standpoint as early as 1981 and 1996, and throughout the 1980s, it was in 1995 that Taiwanese scientist Jien-Wei Yeh came up with his idea for actually creating high-entropy alloys, while driving through the Hsinchu, Taiwan, countryside. Soon after, he decided to begin creating these special alloys in his lab, in what was for over a decade the only region researching these alloys. Most countries in Europe, the United States, and other parts of the world lagged behind in the developme
Document 3:::
The chemical elements can be broadly divided into metals, metalloids, and nonmetals according to their shared physical and chemical properties. All metals have a shiny appearance (at least when freshly polished); are good conductors of heat and electricity; form alloys with other metals; and have at least one basic oxide. Metalloids are metallic-looking brittle solids that are either semiconductors or exist in semiconducting forms, and have amphoteric or weakly acidic oxides. Typical nonmetals have a dull, coloured or colourless appearance; are brittle when solid; are poor conductors of heat and electricity; and have acidic oxides. Most or some elements in each category share a range of other properties; a few elements have properties that are either anomalous given their category, or otherwise extraordinary.
Properties
Metals
Metals appear lustrous (beneath any patina); form mixtures (alloys) when combined with other metals; tend to lose or share electrons when they react with other substances; and each forms at least one predominantly basic oxide.
Most metals are silvery looking, high density, relatively soft and easily deformed solids with good electrical and thermal conductivity, closely packed structures, low ionisation energies and electronegativities, and are found naturally in combined states.
Some metals appear coloured (Cu, Cs, Au), have low densities (e.g. Be, Al) or very high melting points (e.g. W, Nb), are liquids at or near room temperature (e.g. Hg, Ga), are brittle (e.g. Os, Bi), not easily machined (e.g. Ti, Re), or are noble (hard to oxidise, e.g. Au, Pt), or have nonmetallic structures (Mn and Ga are structurally analogous to, respectively, white P and I).
Metals comprise the large majority of the elements, and can be subdivided into several different categories. From left to right in the periodic table, these categories include the highly reactive alkali metals; the less-reactive alkaline earth metals, lanthanides, and radioactive actinides; the archetypal tran
Document 4:::
A metalloid is a type of chemical element which has a preponderance of properties in between, or that are a mixture of, those of metals and nonmetals. There is no standard definition of a metalloid and no complete agreement on which elements are metalloids. Despite the lack of specificity, the term remains in use in the literature of chemistry.
The six commonly recognised metalloids are boron, silicon, germanium, arsenic, antimony and tellurium. Five elements are less frequently so classified: carbon, aluminium, selenium, polonium and astatine. On a standard periodic table, all eleven elements are in a diagonal region of the p-block extending from boron at the upper left to astatine at lower right. Some periodic tables include a dividing line between metals and nonmetals, and the metalloids may be found close to this line.
Typical metalloids have a metallic appearance, but they are brittle and only fair conductors of electricity. Chemically, they behave mostly as nonmetals. They can form alloys with metals. Most of their other physical properties and chemical properties are intermediate in nature. Metalloids are usually too brittle to have any structural uses. They and their compounds are used in alloys, biological agents, catalysts, flame retardants, glasses, optical storage and optoelectronics, pyrotechnics, semiconductors, and electronics.
The electrical properties of silicon and germanium enabled the establishment of the semiconductor industry in the 1950s and the development of solid-state electronics from the early 1960s.
The term metalloid originally referred to nonmetals. Its more recent meaning, as a category of elements with intermediate or hybrid properties, became widespread in 1940–1960. Metalloids are sometimes called semimetals, a practice that has been discouraged, as the term semimetal has a different meaning in physics than in chemistry. In physics, it refers to a specific kind of electronic band structure of a substance. In this context, only
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
An alloy is a mixture of what with one or more other substances?
A. metal
B. acid
C. water
D. protein
Answer:
|