| id | question_type | question | choices | answer | explanation | prompt |
|---|---|---|---|---|---|---|
| stringlengths 6–15 | stringclasses 1 value | stringlengths 15–683 | listlengths 4 | stringclasses 5 values | stringclasses 481 values | stringlengths 1.75k–10.9k |
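Rows can be consumed programmatically; the sketch below is not part of the original card and assumes the records are available as a JSON Lines file with exactly these columns. The file path and split name are placeholders, not something specified here.

```python
# Minimal sketch for iterating rows with the schema above.
# "data/stem_mcq.jsonl" is a hypothetical path; substitute the real source.
from datasets import load_dataset

ds = load_dataset("json", data_files="data/stem_mcq.jsonl", split="train")

for row in ds.select(range(3)):
    letters = "ABCDE"                             # "answer" has 5 classes per the stats above
    options = dict(zip(letters, row["choices"]))  # "choices" is always a 4-item list
    print(row["id"], "-", row["question"])
    print("  answer:", row["answer"], "->", options.get(row["answer"], "(not among the 4 choices)"))
```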
---
id: sciq-3061
question_type: multiple_choice
question: What type of roots enable a plant to grow on another plant?
choices: ["mites", "endemic", "Sickness", "epiphytic"]
answer: D
prompt:
Relevant Documents:
Document 0:::
Micropropagation or tissue culture is the practice of rapidly multiplying plant stock material to produce many progeny plants, using modern plant tissue culture methods.
Micropropagation is used to multiply a wide variety of plants, such as those that have been genetically modified or bred through conventional plant breeding methods. It is also used to provide a sufficient number of plantlets for planting from seedless plants, plants that do not respond well to vegetative reproduction or where micropropagation is the cheaper means of propagating (e.g. Orchids). Cornell University botanist Frederick Campion Steward discovered and pioneered micropropagation and plant tissue culture in the late 1950s and early 1960s.
Steps
In short, steps of micropropagation can be divided into four stages:
Selection of mother plant
Multiplication
Rooting and acclimatizing
Transfer new plant to soil
Selection of mother plant
Micropropagation begins with the selection of plant material to be propagated. The plant tissues are removed from an intact plant in a sterile condition. Clean stock materials that are free of viruses and fungi are important in the production of the healthiest plants. Once the plant material is chosen for culture, the collection of explant(s) begins and is dependent on the type of tissue to be used; including stem tips, anthers, petals, pollen and other plant tissues. The explant material is then surface sterilized, usually in multiple courses of bleach and alcohol washes, and finally rinsed in sterilized water. This small portion of plant tissue, sometimes only a single cell, is placed on a growth medium, typically containing Macro and micro nutrients, water, sucrose as an energy source and one or more plant growth regulators (plant hormones). Usually the medium is thickened with a gelling agent, such as agar, to create a gel which supports the explant during growth. Some plants are easily grown on simple media, but others require more complicated media f
Document 1:::
A parasitic plant is a plant that derives some or all of its nutritional requirements from another living plant. They make up about 1% of angiosperms and are found in almost every biome. All parasitic plants develop a specialized organ called the haustorium, which penetrates the host plant, connecting them to the host vasculature – either the xylem, phloem, or both. For example, plants like Striga or Rhinanthus connect only to the xylem, via xylem bridges (xylem-feeding). Alternately, plants like Cuscuta and some members of Orobanche connect to both the xylem and phloem of the host. This provides them with the ability to extract resources from the host. These resources can include water, nitrogen, carbon and/or sugars. Parasitic plants are classified depending on the location where the parasitic plant latches onto the host (root or stem), the amount of nutrients it requires, and their photosynthetic capability. Some parasitic plants can locate their host plants by detecting volatile chemicals in the air or soil given off by host shoots or roots, respectively. About 4,500 species of parasitic plants in approximately 20 families of flowering plants are known.
There is a wide range of effects that may occur to a host plant due to the presence of a parasitic plant. Often there is a pattern of stunted growth in hosts especially in hemi-parasitic cases, but may also result in higher mortality rates in host plant species following introduction of larger parasitic plant populations.
Classification
Parasitic plants occur in multiple plant families, indicating that the evolution is polyphyletic. Some families consist mostly of parasitic representatives such as Balanophoraceae, while other families have only a few representatives. One example is the North American Monotropa uniflora (Indian pipe or corpse plant) which is a member of the heath family, Ericaceae, better known for its member blueberries, cranberries, and rhododendrons.
Parasitic plants are characterized as
Document 2:::
What a Plant Knows is a popular science book by Daniel Chamovitz, originally published in 2012, discussing the sensory system of plants. A revised edition was published in 2017.
Release details / Editions / Publication
Hardcover edition, 2012
Paperback version, 2013
Revised edition, 2017
What a Plant Knows has been translated and published in a number of languages.
Document 3:::
Hairy root culture, also called transformed root culture, is a type of plant tissue culture that is used to study plant metabolic processes or to produce valuable secondary metabolites or recombinant proteins, often with plant genetic engineering.
A naturally occurring soil bacterium Agrobacterium rhizogenes that contains root-inducing plasmids (also called Ri plasmids) can infect plant roots and cause them to produce a food source for the bacterium, opines, and to grow abnormally. The abnormal roots are particularly easy to culture in artificial media because hormones are not needed in contrast to adventitious roots, and they are neoplastic, with indefinite growth. The neoplastic roots produced by A. rhizogenes infection have a high growth rate (compared to untransformed adventitious roots), as well as genetic and biochemical stability.
Currently the main constraint on commercial utilization of hairy root culture is the development and up-scaling of appropriate vessels (bioreactors) for the delicate and sensitive hairy roots.
Some of the applied research on utilization of hairy root cultures has been and is conducted at VTT Technical Research Centre of Finland Ltd.
Other labs working on hairy roots are the phytotechnology lab of Amiens University and the Arkansas Biosciences Institute.
Metabolic studies
Hairy root cultures can be used for phytoremediation, and are particularly valuable for studies of the metabolic processes involved in phytoremediation.
Further applications include detailed studies of fundamental molecular, genetic and biochemical aspects of genetic transformation and of hairy root induction.
Genetically transformed cultures
The Ri plasmids can be engineered to also contain T-DNA, used for genetic transformation (biotransformation) of the plant cells. The resulting genetically transformed root cultures can produce high levels of secondary metabolites, comparable or even higher than those of intact plants.
Use in plant propagation
Hairy
Document 4:::
Root vegetables are underground plant parts eaten by humans as food. Although botany distinguishes true roots (such as taproots and tuberous roots) from non-roots (such as bulbs, corms, rhizomes, and tubers, although some contain both hypocotyl and taproot tissue), the term "root vegetable" is applied to all these types in agricultural and culinary usage (see terminology of vegetables).
Root vegetables are generally storage organs, enlarged to store energy in the form of carbohydrates. They differ in the concentration and the balance among starches, sugars, and other types of carbohydrate. Of particular economic importance are those with a high carbohydrate concentration in the form of starch; starchy root vegetables are important staple foods, particularly in tropical regions, overshadowing cereals throughout much of Central Africa, West Africa and Oceania, where they are used directly or mashed to make foods such as fufu or poi.
Many root vegetables keep well in root cellars, lasting several months. This is one way of storing food for use long after harvest, which is especially important in nontropical latitudes, where winter is traditionally a time of little to no harvesting. There are also season extension methods that can extend the harvest throughout the winter, mostly through the use of polytunnels.
List of root vegetables
The following list classifies root vegetables organized by their roots' anatomy.
Modified plant stem
Corm
Amorphophallus konjac (konjac)
Colocasia esculenta (taro)
Eleocharis dulcis (Chinese water chestnut)
Ensete spp. (enset)
Nymphaea spp. (waterlily)
Pteridium esculentum
Sagittaria spp. (arrowhead or wapatoo)
Typha spp.
Xanthosoma spp. (malanga, cocoyam, tannia, yautia and other names)
Colocasia antiquorum (eddoe or Japanese potato)
Bulb
Allium cepa (onion)
Allium sativum (garlic)
Camassia quamash (blue camas)
Foeniculum vulgare (fennel)
Rhizome
Curcuma longa (turmeric)
Panax ginseng (ginseng)
Arthropodium spp. (
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of roots enable a plant to grow on another plant?
A. mites
B. endemic
C. Sickness
D. epiphytic
Answer:
---
id: sciq-9346
question_type: multiple_choice
question: According to one of Einstein's theories, while light consists of particles, it behaves like this.
choices: ["waves", "tides", "molecules", "thermodynamics"]
answer: A
prompt:
Relevant Documents:
Document 0:::
Matter waves are a central part of the theory of quantum mechanics, being half of wave–particle duality. All matter exhibits wave-like behavior. For example, a beam of electrons can be diffracted just like a beam of light or a water wave.
The concept that matter behaves like a wave was proposed by French physicist Louis de Broglie () in 1924, and so matter waves are also known as de Broglie waves.
The de Broglie wavelength is the wavelength, λ, associated with a particle with momentum p through the Planck constant, h:
λ = h/p
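As a quick numerical illustration (added here, not part of the article text; the constants are standard rounded values), an electron accelerated through 100 V acquires a de Broglie wavelength of roughly 0.12 nm, comparable to atomic spacings in crystals:

```python
# de Broglie wavelength lambda = h / p for an electron accelerated through 100 V.
import math

h = 6.626e-34      # Planck constant, J s
m_e = 9.109e-31    # electron mass, kg
q_e = 1.602e-19    # elementary charge, C

E_kin = 100 * q_e                   # kinetic energy gained across 100 V, in J
p = math.sqrt(2 * m_e * E_kin)      # non-relativistic momentum, kg m/s
lam = h / p                         # de Broglie wavelength, m

print(f"lambda = {lam * 1e12:.0f} pm")   # about 123 pm
```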
Wave-like behavior of matter was first experimentally demonstrated by George Paget Thomson and Alexander Reid's transmission diffraction experiment, and independently in the Davisson–Germer experiment, both using electrons; and it has also been confirmed for other elementary particles, neutral atoms and molecules.
Introduction
Background
At the end of the 19th century, light was thought to consist of waves of electromagnetic fields which propagated according to Maxwell's equations, while matter was thought to consist of localized particles (see history of wave and particle duality). In 1900, this division was questioned when, investigating the theory of black-body radiation, Max Planck proposed that the thermal energy of oscillating atoms is divided into discrete portions, or quanta. Extending Planck's investigation in several ways, including its connection with the photoelectric effect, Albert Einstein proposed in 1905 that light is also propagated and absorbed in quanta, now called photons. These quanta would have an energy given by the Planck–Einstein relation:
E = hν
and a momentum vector
p = E/c = h/λ
where ν (lowercase Greek letter nu) and λ (lowercase Greek letter lambda) denote the frequency and wavelength of the light, c the speed of light, and h the Planck constant. In the modern convention, frequency is symbolized by f as is done in the rest of this article. Einstein's postulate was verified experimentally by K. T. Compton and O. W. Richardson and by A. L. Hugh
Document 1:::
Young's interference experiment, also called Young's double-slit interferometer, was the original version of the modern double-slit experiment, performed at the beginning of the nineteenth century by Thomas Young. This experiment played a major role in the general acceptance of the wave theory of light. In Young's own judgement, this was the most important of his many achievements.
Theories of light propagation in the 17th and 18th centuries
During this period, many scientists proposed a wave theory of light based on experimental observations, including Robert Hooke, Christiaan Huygens and Leonhard Euler. However, Isaac Newton, who did many experimental investigations of light, had rejected the wave theory of light and developed his corpuscular theory of light according to which light is emitted from a luminous body in the form of tiny particles. This theory held sway until the beginning of the nineteenth century despite the fact that many phenomena, including diffraction effects at edges or in narrow apertures, colours in thin films and insect wings, and the apparent failure of light particles to crash into one another when two light beams crossed, could not be adequately explained by the corpuscular theory which, nonetheless, had many eminent supporters, including Pierre-Simon Laplace and Jean-Baptiste Biot.
Young's work on wave theory
While studying medicine at Göttingen in the 1790s, Young wrote a thesis on the physical and mathematical properties of sound and in 1800, he presented a paper to the Royal Society (written in 1799) where he argued that light was also a wave motion. His idea was greeted with a certain amount of skepticism because it contradicted Newton's corpuscular theory.
Nonetheless, he continued to develop his ideas. He believed that a wave model could much better explain many aspects of light propagation than the corpuscular model:
In 1801, Young presented a famous paper to the Royal Society entitled "On the Theory of Light and Colours" wh
Document 2:::
Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain and predict natural phenomena. This is in contrast to experimental physics, which uses experimental tools to probe these phenomena.
The advancement of science generally depends on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigour while giving little weight to experiments and observations. For example, while developing special relativity, Albert Einstein was concerned with the Lorentz transformation which left Maxwell's equations invariant, but was apparently uninterested in the Michelson–Morley experiment on Earth's drift through a luminiferous aether. Conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, previously an experimental result lacking a theoretical formulation.
Overview
A physical theory is a model of physical events. It is judged by the extent to which its predictions agree with empirical observations. The quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations. A physical theory differs from a mathematical theorem in that while both are based on some form of axioms, judgment of mathematical applicability is not based on agreement with any experimental results. A physical theory similarly differs from a mathematical theory, in the sense that the word "theory" has a different meaning in mathematical terms.
A physical theory involves one or more relationships between various measurable quantities. Archimedes realized that a ship floats by displacing its mass of water, Pythagoras understood the relation between the length of a vibrating string and the musical tone it produces. Other examples include entropy as a measure of the uncertainty regarding the positions and motions of unseen particles and the quantum mechanical i
Document 3:::
Categories: On the Beauty of Physics is a non-fiction science and art book edited, co-written, and published by American author Hilary Thayer Hamann in 2006. The book was conceived as a multidisciplinary educational tool that uses art and literature to broaden the reader's understanding of challenging material. Alan Lightman, author of Einstein's Dreams, called Categories "A beautiful synthesis of science and art, pleasing to the mind and to the eye," and Dr. Helen Caldicott, founder and president of the Nuclear Policy Research Institute, said, "This wonderful book will provoke thought in lovers of science and art alike, and with knowledge comes the inspiration to preserve the beauty of life on Earth."
Author
Hamann is co-writer, creative and editorial director of Categories—On the Beauty of Physics (2006), a multidisciplinary, interdisciplinary educational text that uses imagery to facilitate the reader's encounter with challenging material. She worked with physicist Emiliano Seffusati, Ph.D., who wrote the science text, and collage artist John Morse, who created the original artwork.
Overview
Categories is a book about physics that uses literature and art to stimulate the wonder and interest of the reader. It is intended to promote scientific literacy, foster an appreciation of the humanities, and encourage readers to make informed and imaginative connections between the sciences and the arts.
Hamann intended the physics book to be the first in a series, with subsequent titles to focus on biology and chemistry, and for the three titles to form the cornerstone of a television series for adolescents and their parents.
Criticism
Library Journal gave the book a starred review, calling Categories "a gorgeous book," "a comprehensive overview of physics," and "highly recommended."
The book received high praise from critics and scientists.
Cognitive scientist, Harvard professor, and author of The Language Instinct (1994), and How the Mind Works (1997) Steven Pinker
Document 4:::
This timeline describes the major developments, both experimental and theoretical, of:
Einstein’s special theory of relativity (SR),
its predecessors like the theories of luminiferous aether,
its early competitors, i.e.:
Ritz’s ballistic theory of light,
the models of electromagnetic mass created by Abraham (1902), Lorentz (1904), Bucherer (1904) and Langevin (1904).
This list also mentions the origins of standard notation (like c) and terminology (like theory of relativity).
Criteria for inclusion
Theories other than SR are not described here exhaustively, but only to the extent that is directly relevant to SR – i.e. at points when they:
anticipated some elements of SR, like Fresnel’s hypothesis of partial aether drag,
led to new experiments testing SR, like Stokes’s model of complete aether drag,
were disproved or questioned, e.g. by the experiments of Oliver Lodge.
For a more detailed timeline of aether theories – e.g. their emergence with the wave theory of light – see a separate article. Also, not all experiments are listed here – repetitions, even with much higher precision than the original, are mentioned only if they influence or challenge the opinions at their time. It was the case with:
Michelson and Morley (1886) repeating the experiment of Fizeau (1851), contradicting Michelson’s interpretation of his 1881 experiment;
Michelson–Morley (1887), more conclusive than the original experiment by Michelson (1881) and difficult to reconcile with their experiment of 1886, or other first-order measurements;
Kaufmann’s 1906 repetition of his 1902 experiment, because he claimed to contradict the model of Einstein and Lorentz, considered consistent with the data from 1902;
Miller (1933) or Marinov (1974), with results different than Michelson–Morley.
For lists of repetitions, see the articles of particular experiments. The measurements of speed of light are also mentioned only to the minimum extent, i.e. when they proved for the first time that c is f
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
According to one of Einstein's theories, while light consists of particles, it behaves like this.
A. waves
B. tides
C. molecules
D. thermodynamics
Answer:
---
id: sciq-976
question_type: multiple_choice
question: The diffusion of water across a membrane because of a difference in concentration is called?
choices: ["osmosis", "hemostasis", "diffusion", "absorption"]
answer: A
prompt:
Relevant Documents:
Document 0:::
Passive transport is a type of membrane transport that does not require energy to move substances across cell membranes. Instead of using cellular energy, like active transport, passive transport relies on the second law of thermodynamics to drive the movement of substances across cell membranes. Fundamentally, substances follow Fick's first law, and move from an area of high concentration to an area of low concentration because this movement increases the entropy of the overall system. The rate of passive transport depends on the permeability of the cell membrane, which, in turn, depends on the organization and characteristics of the membrane lipids and proteins. The four main kinds of passive transport are simple diffusion, facilitated diffusion, filtration, and/or osmosis.
Passive transport follows Fick's first law.
Diffusion
Diffusion is the net movement of material from an area of high concentration to an area with lower concentration. The difference of concentration between the two areas is often termed as the concentration gradient, and diffusion will continue until this gradient has been eliminated. Since diffusion moves materials from an area of higher concentration to an area of lower concentration, it is described as moving solutes "down the concentration gradient" (compared with active transport, which often moves material from area of low concentration to area of higher concentration, and therefore referred to as moving the material "against the concentration gradient").
However, in many cases (e.g. passive drug transport) the driving force of passive transport can not be simplified to the concentration gradient. If there are different solutions at the two sides of the membrane with different equilibrium solubility of the drug, the difference in the degree of saturation is the driving force of passive membrane transport. It is also true for supersaturated solutions which are more and more important owing to the spreading of the application of amorph
Document 1:::
Water potential is the potential energy of water per unit volume relative to pure water in reference conditions. Water potential quantifies the tendency of water to move from one area to another due to osmosis, gravity, mechanical pressure and matrix effects such as capillary action (which is caused by surface tension). The concept of water potential has proved useful in understanding and computing water movement within plants, animals, and soil. Water potential is typically expressed in potential energy per unit volume and very often is represented by the Greek letter ψ.
Water potential integrates a variety of different potential drivers of water movement, which may operate in the same or different directions. Within complex biological systems, many potential factors may be operating simultaneously. For example, the addition of solutes lowers the potential (negative vector), while an increase in pressure increases the potential (positive vector). If the flow is not restricted, water will move from an area of higher water potential to an area that is lower potential. A common example is water with dissolved salts, such as seawater or the fluid in a living cell. These solutions have negative water potential, relative to the pure water reference. With no restriction on flow, water will move from the locus of greater potential (pure water) to the locus of lesser (the solution); flow proceeds until the difference in potential is equalized or balanced by another water potential factor, such as pressure or elevation.
Components of water potential
Many different factors may affect the total water potential, and the sum of these potentials determines the overall water potential and the direction of water flow:
Ψ = Ψ0 + Ψπ + Ψp + Ψg + Ψv + Ψm
where:
Ψ0 is the reference correction,
Ψπ is the solute or osmotic potential,
Ψp is the pressure component,
Ψg is the gravimetric component,
Ψv is the potential due to humidity, and
Ψm is the potential due to matrix effects (e.g., fluid cohesion and surface tension).
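To make the additive form concrete, here is a toy sketch (the component values are invented for illustration and are not from the article): total water potential is the sum of the components listed above, and water moves spontaneously from the compartment with the higher total to the one with the lower total.

```python
# Toy water-potential bookkeeping; all values in MPa and purely illustrative.
def total_water_potential(reference=0.0, solute=0.0, pressure=0.0,
                          gravity=0.0, humidity=0.0, matric=0.0):
    # Psi = Psi_0 + Psi_pi + Psi_p + Psi_g + Psi_v + Psi_m
    return reference + solute + pressure + gravity + humidity + matric

cell = total_water_potential(solute=-0.8, pressure=0.3)   # hypothetical cell: solutes lower Psi, turgor raises it
soil = total_water_potential(solute=-0.1, matric=-0.2)    # hypothetical moist soil

flow = "soil -> cell" if soil > cell else "cell -> soil"   # water moves toward lower potential
print(f"cell {cell:+.2f} MPa, soil {soil:+.2f} MPa, flow: {flow}")
```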
Document 2:::
The convection–diffusion equation is a combination of the diffusion and convection (advection) equations, and describes physical phenomena where particles, energy, or other physical quantities are transferred inside a physical system due to two processes: diffusion and convection. Depending on context, the same equation can be called the advection–diffusion equation, drift–diffusion equation, or (generic) scalar transport equation.
Equation
General
The general equation is
∂c/∂t = ∇ · (D ∇c) - ∇ · (v c) + R
where
c is the variable of interest (species concentration for mass transfer, temperature for heat transfer),
D is the diffusivity (also called diffusion coefficient), such as mass diffusivity for particle motion or thermal diffusivity for heat transport,
v is the velocity field that the quantity is moving with. It is a function of time and space. For example, in advection, c might be the concentration of salt in a river, and then v would be the velocity of the water flow as a function of time and location. In another example, c might be the concentration of small bubbles in a calm lake, and then v would be the velocity of bubbles rising towards the surface by buoyancy (see below) depending on time and location of the bubble. For multiphase flows and flows in porous media, v is the (hypothetical) superficial velocity.
R describes sources or sinks of the quantity c. For example, for a chemical species, R > 0 means that a chemical reaction is creating more of the species, and R < 0 means that a chemical reaction is destroying the species. For heat transport, R > 0 might occur if thermal energy is being generated by friction.
∇ represents gradient and ∇ · represents divergence. In this equation, ∇c represents the concentration gradient.
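For a concrete (purely illustrative) sense of what the equation describes, the sketch below advances a 1-D concentration profile by explicit finite differences with constant diffusivity D and velocity u and no source term; the grid, time step, and periodic boundaries are arbitrary choices, not anything prescribed by the article.

```python
# Explicit finite-difference step for 1-D convection-diffusion:
#   dc/dt = D * d2c/dx2 - u * dc/dx   (constant D and u, zero source term)
import numpy as np

def step(c, dx, dt, D=1e-3, u=0.05):
    """Advance the concentration profile c by one time step (periodic boundaries)."""
    diffusion = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    convection = -u * (np.roll(c, -1) - np.roll(c, 1)) / (2 * dx)
    return c + dt * (diffusion + convection)

x = np.linspace(0.0, 1.0, 200, endpoint=False)
c = np.exp(-((x - 0.3) / 0.05) ** 2)      # initial Gaussian pulse
for _ in range(500):                      # the pulse drifts downstream and spreads out
    c = step(c, dx=x[1] - x[0], dt=1e-3)
```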
Understanding the terms involved
The right-hand side of the equation is the sum of three contributions.
The first, ∇ · (D ∇c), describes diffusion. Imagine that c is the concentration of a chemical. When concentration is low somewhere compared to the surrounding areas (e.g. a local minimum of concentration), t
Document 3:::
The Starling principle holds that extracellular fluid movements between blood and tissues are determined by differences in hydrostatic pressure and colloid osmotic (oncotic) pressure between plasma inside microvessels and interstitial fluid outside them. The Starling Equation, proposed many years after the death of Starling, describes that relationship in mathematical form and can be applied to many biological and non-biological semipermeable membranes. The classic Starling principle and the equation that describes it have in recent years been revised and extended.
Every day around 8 litres of water (solvent) containing a variety of small molecules (solutes) leaves the blood stream of an adult human and perfuses the cells of the various body tissues. Interstitial fluid drains by afferent lymph vessels to one of the regional lymph node groups, where around 4 litres per day is reabsorbed to the blood stream. The remainder of the lymphatic fluid is rich in proteins and other large molecules and rejoins the blood stream via the thoracic duct which empties into the great veins close to the heart. Filtration from plasma to interstitial (or tissue) fluid occurs in microvascular capillaries and post-capillary venules. In most tissues the micro vessels are invested with a continuous internal surface layer that includes a fibre matrix now known as the endothelial glycocalyx whose interpolymer spaces function as a system of small pores, radius circa 5 nm. Where the endothelial glycocalyx overlies a gap in the junction molecules that bind endothelial cells together (inter endothelial cell cleft), the plasma ultrafiltrate may pass to the interstitial space, leaving larger molecules reflected back into the plasma.
A small number of continuous capillaries are specialised to absorb solvent and solutes from interstitial fluid back into the blood stream through fenestrations in endothelial cells, but the volume of solvent absorbed every day is small.
Discontinuous capillaries as
Document 4:::
Dispersive mass transfer, in fluid dynamics, is the spreading of mass from highly concentrated areas to less concentrated areas. It is one form of mass transfer.
Dispersive mass flux is analogous to diffusion, and it can also be described using Fick's first law:
J = -E dc/dx
where c is mass concentration of the species being dispersed, E is the dispersion coefficient, and x is the position in the direction of the concentration gradient. Dispersion can be differentiated from diffusion in that it is caused by non-ideal flow patterns (i.e. deviations from plug flow) and is a macroscopic phenomenon, whereas diffusion is caused by random molecular motions (i.e. Brownian motion) and is a microscopic phenomenon. Dispersion is often more significant than diffusion in convection-diffusion problems. The dispersion coefficient is frequently modeled as the product of the fluid velocity, U, and some characteristic length scale, α:
E = αU
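A small numerical comparison (all values below are assumed for illustration; only the roughly 1e-9 m^2/s order of magnitude for molecular diffusivity of solutes in water is a standard figure) shows why dispersion usually dominates molecular diffusion:

```python
# Dispersive flux J = -E * dc/dx with E = alpha * U, versus molecular diffusion.
U = 0.1         # fluid velocity, m/s (assumed)
alpha = 0.02    # characteristic length scale, m (assumed)
D_mol = 1e-9    # typical molecular diffusivity of a solute in water, m^2/s
dcdx = -5.0     # concentration gradient, (kg/m^3)/m (assumed)

E = alpha * U                 # dispersion coefficient, m^2/s
J_disp = -E * dcdx            # dispersive mass flux, kg/(m^2 s)
J_diff = -D_mol * dcdx        # molecular diffusive flux for the same gradient
print(f"E = {E:.1e} m^2/s; dispersive flux {J_disp:.1e} vs diffusive {J_diff:.1e}")
```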
Transport phenomena
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The diffusion of water across a membrane because of a difference in concentration is called?
A. osmosis
B. hemostasis
C. diffusion
D. absorption
Answer:
---
id: sciq-6364
question_type: multiple_choice
question: What substance comes toward Earth's crust through mantle plumes?
choices: ["water", "gas", "magma", "rocks"]
answer: C
prompt:
Relevant Documents:
Document 0:::
Mantle convection is the very slow creeping motion of Earth's solid silicate mantle as convection currents carry heat from the interior to the planet's surface.
The Earth's surface lithosphere rides atop the asthenosphere and the two form the components of the upper mantle. The lithosphere is divided into a number of tectonic plates that are continuously being created or consumed at plate boundaries. Accretion occurs as mantle is added to the growing edges of a plate, associated with seafloor spreading. Upwelling beneath the spreading centers is a shallow, rising component of mantle convection and in most cases not directly linked to the global mantle upwelling. The hot material added at spreading centers cools down by conduction and convection of heat as it moves away from the spreading centers. At the consumption edges of the plate, the material has thermally contracted to become dense, and it sinks under its own weight in the process of subduction usually at an ocean trench. Subduction is the descending component of mantle convection.
This subducted material sinks through the Earth's interior. Some subducted material appears to reach the lower mantle, while in other regions, this material is impeded from sinking further, possibly due to a phase transition from spinel to silicate perovskite and magnesiowustite, an endothermic reaction.
The subducted oceanic crust triggers volcanism, although the basic mechanisms are varied. Volcanism may occur due to processes that add buoyancy to partially melted mantle, which would cause upward flow of the partial melt due to decrease in its density. Secondary convection may cause surface volcanism as a consequence of intraplate extension and mantle plumes. In 1993 it was suggested that inhomogeneities in D" layer have some impact on mantle convection.
Mantle convection causes tectonic plates to move around the Earth's surface.
Types of convection
During the late 20th century, there was significant debate within the geo
Document 1:::
The core–mantle boundary (CMB) of Earth lies between the planet's silicate mantle and its liquid iron–nickel outer core, at a depth of approximately 2,890 km below Earth's surface. The boundary is observed via the discontinuity in seismic wave velocities at that depth due to the differences between the acoustic impedances of the solid mantle and the molten outer core. P-wave velocities are much slower in the outer core than in the deep mantle while S-waves do not exist at all in the liquid portion of the core. Recent evidence suggests a distinct boundary layer directly above the CMB possibly made of a novel phase of the basic perovskite mineralogy of the deep mantle named post-perovskite. Seismic tomography studies have shown significant irregularities within the boundary zone and appear to be dominated by the African and Pacific Large Low-Shear-Velocity Provinces (LLSVP).
The uppermost section of the outer core is thought to be about 500–1,800 K hotter than the overlying mantle, creating a thermal boundary layer. The boundary is thought to harbor topography, much like Earth's surface, that is supported by solid-state convection within the overlying mantle. Variations in the thermal properties of the core-mantle boundary may affect how the outer core's iron-rich fluids flow, which are ultimately responsible for Earth's magnetic field.
The D″ region
The approx. 200 km thick layer of the lower mantle directly above the boundary is referred to as the D″ region ("D double-prime" or "D prime prime") and is sometimes included in discussions regarding the core–mantle boundary zone. The D″ name originates from geophysicist Keith Bullen's designations for the Earth's layers. His system was to label each layer alphabetically, A through G, with the crust as 'A' and the inner core as 'G'. In his 1942 publication of his model, the entire lower mantle was the D layer. In 1949, Bullen found his 'D' layer to actually be two different layers. The upper part of the D layer, about 1800 km thick, was r
Document 2:::
Lunar swirls are enigmatic features found across the Moon's surface, which are characterized by having a high albedo, appearing optically immature (i.e. having the optical characteristics of a relatively young regolith), and (often) having a sinuous shape. Their curvilinear shape is often accentuated by low albedo regions that wind between the bright swirls. They appear to overlay the lunar surface, superposed on craters and ejecta deposits, but impart no observable topography. Swirls have been identified on the lunar maria and on highlands - they are not associated with a specific lithologic composition. Swirls on the maria are characterized by strong albedo contrasts and complex, sinuous morphology, whereas those on highland terrain appear less prominent and exhibit simpler shapes, such as single loops or diffuse bright spots.
Association with magnetic anomalies
The lunar swirls are coincident with regions of the magnetic field of the Moon with relatively high strength on a planetary body that lacks, and may never have had, an active core dynamo with which to generate its own magnetic field. Every swirl has an associated magnetic anomaly, but not every magnetic anomaly has an identifiable swirl. Orbital magnetic field mapping by the Apollo 15 and 16 sub-satellites, Lunar Prospector, and Kaguya show regions with a local magnetic field. Because the Moon has no currently active global magnetic field, these regional anomalies are regions of remnant magnetism; their origin remains controversial.
Formation models
There are three leading models for swirl formation. Each model must address two characteristics of lunar swirls formation, namely that a swirl is optically immature, and that it is associated with magnetic anomaly.
Models for creation of the magnetic anomalies associated with lunar swirls point to the observation that several of the magnetic anomalies are antipodal to the younger, large impact basins on the Moon.
Cometary impact model
This theory argues tha
Document 3:::
A mantle plume is a proposed mechanism of convection within the Earth's mantle, hypothesized to explain anomalous volcanism. Because the plume head partially melts on reaching shallow depths, a plume is often invoked as the cause of volcanic hotspots, such as Hawaii or Iceland, and large igneous provinces such as the Deccan and Siberian Traps. Some such volcanic regions lie far from tectonic plate boundaries, while others represent unusually large-volume volcanism near plate boundaries.
Concepts
Mantle plumes were first proposed by J. Tuzo Wilson in 1963 and further developed by W. Jason Morgan in 1971 and 1972. A mantle plume is posited to exist where super-heated material forms (nucleates) at the core-mantle boundary and rises through the Earth's mantle. Rather than a continuous stream, plumes should be viewed as a series of hot bubbles of material. Reaching the brittle upper Earth's crust they form diapirs. These diapirs are "hotspots" in the crust. In particular, the concept that mantle plumes are fixed relative to one another and anchored at the core-mantle boundary would provide a natural explanation for the time-progressive chains of older volcanoes seen extending out from some such hotspots, for example, the Hawaiian–Emperor seamount chain. However, paleomagnetic data show that mantle plumes can also be associated with Large Low Shear Velocity Provinces (LLSVPs) and do move relative to each other.
The current mantle plume theory is that material and energy from Earth's interior are exchanged with the surface crust in two distinct and largely independent convective flows:
as previously theorized and widely accepted, the predominant, steady state plate tectonic regime driven by upper mantle convection, mainly the sinking of cold plates of lithosphere back into the asthenosphere.
the punctuated, intermittently dominant mantle overturn regime driven by plume convection that carries heat upward from the core-mantle boundary in a narrow column. This secon
Document 4:::
Intraplate volcanism is volcanism that takes place away from the margins of tectonic plates. Most volcanic activity takes place on plate margins, and there is broad consensus among geologists that this activity is explained well by the theory of plate tectonics. However, the origins of volcanic activity within plates remains controversial.
Mechanisms
Mechanisms that have been proposed to explain intraplate volcanism include mantle plumes; non-rigid motion within tectonic plates (the plate model); and impact events. It is likely that different mechanisms accounts for different cases of intraplate volcanism.
Plume model
A mantle plume is a proposed mechanism of convection of abnormally hot rock within the Earth's mantle. Because the plume head partly melts on reaching shallow depths, a plume is often invoked as the cause of volcanic hotspots, such as Hawaii or Iceland, and large igneous provinces such as the Deccan and Siberian traps. Some such volcanic regions lie far from tectonic plate boundaries, while others represent unusually large-volume volcanism near plate boundaries.
The hypothesis of mantle plumes has required progressive hypothesis-elaboration leading to variant propositions such as mini-plumes and pulsing plumes.
Concepts
Mantle plumes were first proposed by J. Tuzo Wilson in 1963 and further developed by W. Jason Morgan in 1971. A mantle plume is posited to exist where hot rock nucleates at the core-mantle boundary and rises through the Earth's mantle becoming a diapir in the Earth's crust. In particular, the concept that mantle plumes are fixed relative to one another, and anchored at the core-mantle boundary, would provide a natural explanation for the time-progressive chains of older volcanoes seen extending out from some such hot spots, such as the Hawaiian–Emperor seamount chain. However, paleomagnetic data show that mantle plumes can be associated with Large Low Shear Velocity Provinces (LLSVPs) and do move.
Two largely independent convec
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What substance comes toward Earth's crust through mantle plumes?
A. water
B. gas
C. magma
D. rocks
Answer:
---
id: sciq-1287
question_type: multiple_choice
question: What are in leaves that function as solar collectors and food factories?
choices: ["vacuoles", "fibroblasts", "chloroplasts", "cellulose"]
answer: C
prompt:
Relevant Documents:
Document 0:::
Autumn leaf color is a phenomenon that affects the normally green leaves of many deciduous trees and shrubs by which they take on, during a few weeks in the autumn season, various shades of yellow, orange, red, purple, and brown. The phenomenon is commonly called autumn colours or autumn foliage in British English and fall colors, fall foliage, or simply foliage in American English.
In some areas of Canada and the United States, "leaf peeping" tourism is a major contribution to economic activity. This tourist activity occurs between the beginning of color changes and the onset of leaf fall, usually around September and October in the Northern Hemisphere and April to May in the Southern Hemisphere.
Chlorophyll and the green/yellow/orange colors
A green leaf is green because of the presence of a pigment known as chlorophyll, which is inside an organelle called a chloroplast. When abundant in the leaf's cells, as during the growing season, the chlorophyll's green color dominates and masks out the colors of any other pigments that may be present in the leaf. Thus, the leaves of summer are characteristically green.
Chlorophyll has a vital function: it captures solar rays and uses the resulting energy in the manufacture of the plant's food, simple sugars produced from water and carbon dioxide. These sugars are the basis of the plant's nourishment, the sole source of the carbohydrates needed for growth and development. In their food-manufacturing process, the chlorophylls break down and thus are continually "used up". During the growing season, however, the plant replenishes the chlorophyll so that the supply remains high and the leaves stay green.
In late summer, with daylight hours shortening and temperatures cooling, the veins that carry fluids into and out of the leaf are gradually closed off as a layer of special cork cells forms at the base of each leaf. As this cork layer develops, water and mineral intake into the leaf is reduced, slowly at first, and the
Document 1:::
Sunstroke plants or heliophytes are adapted to habitats with very intense insolation, owing to the construction of their own structure and its maintenance (metabolism). Examples of such sun plants are mullein, ling, thyme and soft velcro, white clover, and most roses. They are common in open terrain, on rocks and meadows, as well as on mountain pastures, grasslands and other sites with long sunny exposure.
Special features of these plants include coarse, tiny leaves with hairy and waxy protection against excessive light radiation and water loss. Structurally, the leaves often have double palisade layers. Chloroplasts have protective elements, such as carotenoids and enzymes, against the accumulation of ROS and its toxic effects. In addition, there are stomatal apparatus on the leaves and green shoots to allow a better exchange of gases. At the same time, this increases the possibilities for photosynthesis.
Unlike shadow-preferring plants, heliophytes have a high light compensation point, and for this they need a higher illumination intensity for effective uptake of carbon dioxide. Sunstroke leaves, in this respect, have a very high capacity.
However, they have a higher basal metabolism compared to other leaves.
See also
Xerophyte
Thermophyte
Hydrophyte
Halophyte
Document 2:::
A leaf sensor is a phytometric device (measurement of plant physiological processes) that measures water loss or the water deficit stress (WDS) in plants by real-time monitoring the moisture level in plant leaves. The first leaf sensor was developed by LeafSens, an Israeli company granted a US patent for a mechanical leaf thickness sensing device in 2001. LeafSen has made strides incorporating their leaf sensory technology into citrus orchards in Israel. A solid state smart leaf sensor technology was developed by the University of Colorado at Boulder for NASA in 2007. It was designed to help monitor and control agricultural water demand. AgriHouse received a National Science Foundation (NSF) STTR grant in conjunction with the University of Colorado to further develop the solid state leaf sensor technology for precision irrigation control in 2007.
Precision monitoring
Water deficit stress measurements
A Phase I research grant from the National Science Foundation in 2007 showed that the leaf sensor technology has the potential to save between 30% and 50% of irrigation water by reducing irrigation from once every 24 hours to about every 2 to 2.5 days by sensing impending water deficit stress. Leaf sensor technology developed by AgriHouse indicates water deficit stress by measuring the turgor pressure of a leaf, which decreases dramatically at the onset of leaf dehydration. Early detection of impending water deficit stress in plants can be used as an input parameter for precision irrigation control by allowing plants to communicate water requirements directly to humans and/or electronic interfaces. For example, a base system utilizing the wirelessly transmitted information of several sensors appropriately distributed over various sectors of a round field irrigated by a center-pivot irrigation system could tell the irrigation lever exactly when and what field sector needs to be irrigated.
Irrigation control
In a 2008 USDA sponsored field study AgriHouse's SG-1000
Document 3:::
Photosynthetic capacity (Amax) is a measure of the maximum rate at which leaves are able to fix carbon during photosynthesis. It is typically measured as the amount of carbon dioxide that is fixed per metre squared per second, for example as μmol m−2 sec−1.
Limitations
Photosynthetic capacity is limited by carboxylation capacity and electron transport capacity. For example, in high carbon dioxide concentrations or in low light, the plant is not able to regenerate ribulose-1,5-bisphosphate fast enough (also known RUBP, the acceptor molecule in photosynthetic carbon reduction). So in this case, photosynthetic capacity is limited by electron transport of the light reaction, which generates the NADPH and ATP required for the PCR (Calvin) Cycle, and regeneration of RUBP. On the other hand, in low carbon dioxide concentrations, the capacity of the plant to perform carboxylation (adding carbon dioxide to Rubisco) is limited by the amount of available carbon dioxide, with plenty of Rubisco left over.¹ Light response, or photosynthesis-irradiance, curves display these relationships.
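To put the unit in context, a back-of-the-envelope conversion (the rate and day length below are assumed round numbers, not values from the article) turns a leaf-level assimilation rate into a daily mass of CO2 fixed per square metre:

```python
# Converting an assimilation rate in umol CO2 m^-2 s^-1 into grams per day.
A = 20e-6              # assumed net rate, mol CO2 per m^2 per second
daylight = 12 * 3600   # assumed seconds of usable light per day

mol_per_day = A * daylight
print(f"{mol_per_day:.2f} mol CO2 ~ {mol_per_day * 44.01:.0f} g CO2 per m^2 per day")
# -> roughly 0.86 mol, i.e. about 38 g of CO2 per square metre of leaf per day
```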
Current Research
Recent studies have shown that photosynthetic capacity in leaves can be increased with an increase in the number of stomata per leaf. This could be important in further crop development engineering to increase the photosynthetic efficiency through increasing diffusion of carbon dioxide into the plant.²
Document 4:::
In contrast to the Cladophorales where nuclei are organized in regularly spaced cytoplasmic domains, the cytoplasm of Bryopsidales exhibits streaming, enabling transportation of organelles, transcripts and nutrients across the plant.
The Sphaeropleales also contain several common freshwat
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are in leaves that function as solar collectors and food factories?
A. vacuoles
B. fibroblasts
C. chloroplasts
D. cellulose
Answer:
---
id: sciq-7300
question_type: multiple_choice
question: What lipid, added to certain foods to keep them fresher longer, increases the risk of heart disease?
choices: ["Omega-3 fatty acids", "fatty acids", "cholesterol", "trans fat"]
answer: D
prompt:
Relevant Documents:
Document 0:::
An unsaturated fat is a fat or fatty acid in which there is at least one double bond within the fatty acid chain. A fatty acid chain is monounsaturated if it contains one double bond, and polyunsaturated if it contains more than one double bond.
A saturated fat has no carbon to carbon double bonds, so it has the maximum possible number of hydrogens bonded to the carbons and is "saturated" with hydrogen atoms. To form carbon to carbon double bonds, hydrogen atoms are removed from the carbon chain. In cellular metabolism, unsaturated fat molecules contain less energy (i.e., fewer calories) than an equivalent amount of saturated fat. The greater the degree of unsaturation in a fatty acid (i.e., the more double bonds in the fatty acid), the more vulnerable it is to lipid peroxidation (rancidity). Antioxidants can protect unsaturated fat from lipid peroxidation.
Composition of common fats
In chemical analysis, fats are broken down to their constituent fatty acids, which can be analyzed in various ways. In one approach, fats undergo transesterification to give fatty acid methyl esters (FAMEs), which are amenable to separation and quantitation by gas chromatography. Classically, unsaturated isomers were separated and identified by argentation thin-layer chromatography.
The saturated fatty acid components are almost exclusively stearic (C18) and palmitic acids (C16). Monounsaturated fats are almost exclusively oleic acid. Linolenic acid comprises most of the triunsaturated fatty acid component.
Chemistry and nutrition
Although polyunsaturated fats are protective against cardiac arrhythmias, a study of post-menopausal women with a relatively low fat intake showed that polyunsaturated fat is positively associated with progression of coronary atherosclerosis, whereas monounsaturated fat is not. This probably is an indication of the greater vulnerability of polyunsaturated fats to lipid peroxidation, against which vitamin E has been shown to be protective.
Examples
Document 1:::
Vitamin D and Omega-3 Trial (VITAL) was a clinical trial designed to investigate the use of daily dietary supplements of vitamin D and fish oil.
The sponsor of the study was Brigham and Women's Hospital, collaborating with The National Cancer Institute, National Heart, Lung, and Blood Institute, Office of Dietary Supplements, National Institute of Neurological Disorders and Stroke, National Center for Complementary and Integrative Health, Pharmavite LLC, Pronova BioPharma and BASF.
The study aimed to enroll 20,000 participants (women 55 or over, men 50 or over), who were randomized into one of four groups:
daily vitamin D (2000 IU) and fish oil (1 g);
daily vitamin D and fish-oil placebo;
daily vitamin-D placebo and fish oil;
daily vitamin-D placebo and fish-oil placebo.
Participants answered annual questionnaires to determine effects the risks of developing cancer, heart disease, stroke, osteoporosis, diabetes, memory loss and depression.
The outcome of this study was:
"The results of this trial indicate that supplementation with either n–3 fatty acid at a dose of 1 g/day or vitamin D3 at a dose of 2000 IU/day was not effective for primary prevention of CV or cancer events among healthy middle-aged men and women over 5 years of follow-up. There was also no difference in progression/development of CKD among patients with type 2 diabetes. This is one of the largest trials on this topic. The finding of a lower MI risk with n–3 fatty acid is hypothesis generating and deserves further study. The authors also noted some interaction with baseline fish consumption, with greater CV benefit observed among participants who had low fish intake at baseline."
Document 2:::
Lipidology is the scientific study of lipids. Lipids are a group of biological macromolecules that have a multitude of functions in the body. Clinical studies on lipid metabolism in the body have led to developments in therapeutic lipidology for disorders such as cardiovascular disease.
History
Compared to other biomedical fields, lipidology was long-neglected as the handling of oils, smears, and greases was unappealing to scientists and lipid separation was difficult. It was not until 2002 that lipidomics, the study of lipid networks and their interaction with other molecules, appeared in the scientific literature. Attention to the field was bolstered by the introduction of chromatography, spectrometry, and various forms of spectroscopy to the field, allowing lipids to be isolated and analyzed. The field was further popularized following the cytologic application of the electron microscope, which led scientists to find that many metabolic pathways take place within, along, and through the cell membrane - the properties of which are strongly influenced by lipid composition.
Clinical lipidology
The Framingham Heart Study and other epidemiological studies have found a correlation between lipoproteins and cardiovascular disease (CVD). Lipoproteins are generally a major target of study in lipidology since lipids are transported throughout the body in the form of lipoproteins.
A class of lipids known as phospholipids help make up what is known as lipoproteins, and a type of lipoprotein is called high density lipoprotein (HDL). A high concentration of high density lipoproteins-cholesterols (HDL-C) have what is known as a vasoprotective effect on the body, a finding that correlates with an enhanced cardiovascular effect. There is also a correlation between those with diseases such as chronic kidney disease, coronary artery disease, or diabetes mellitus and the possibility of low vasoprotective effect from HDL.
Another factor of CVD that is often overlooked involves the
Document 3:::
In biochemistry and nutrition, a monounsaturated fat is a fat that contains a monounsaturated fatty acid (MUFA), a subclass of fatty acid characterized by having a double bond in the fatty acid chain with all of the remaining carbon atoms being single-bonded. By contrast, polyunsaturated fatty acids (PUFAs) have more than one double bond.
Molecular description
Monounsaturated fats are triglycerides containing one unsaturated fatty acid. Almost invariably that fatty acid is oleic acid (18:1 n−9). Palmitoleic acid (16:1 n−7) and cis-vaccenic acid (18:1 n−7) occur in small amounts in fats.
Health
Studies have shown that substituting dietary monounsaturated fat for saturated fat is associated with increased daily physical activity and resting energy expenditure. More physical activity was associated with a higher-oleic acid diet than one of a palmitic acid diet. From the study, it is shown that more monounsaturated fats lead to less anger and irritability.
Foods containing monounsaturated fats may affect low-density lipoprotein (LDL) cholesterol and high-density lipoprotein (HDL) cholesterol.
Levels of oleic acid along with other monounsaturated fatty acids in red blood cell membranes were positively associated with breast cancer risk. The saturation index (SI) of the same membranes was inversely associated with breast cancer risk. Monounsaturated fats and low SI in erythrocyte membranes are predictors of postmenopausal breast cancer. Both of these variables depend on the activity of the enzyme delta-9 desaturase (Δ9-d).
In children, consumption of monounsaturated oils is associated with healthier serum lipid profiles.
The Mediterranean diet is one heavily influenced by monounsaturated fats. People in Mediterranean countries consume more total fat than Northern European countries, but most of the fat is in the form of monounsaturated fatty acids from olive oil and omega-3 fatty acids from fish, vegetables, and certain meats like lamb, while consumption of satur
Document 4:::
This list consists of common foods with their cholesterol content recorded in milligrams per 100 grams (3.5 ounces) of food.
Functions
Cholesterol is a sterol, a steroid-like lipid made by animals, including humans. The human body makes one-eighth to one-fourth teaspoons of pure cholesterol daily. A cholesterol level of 5.5 millimoles per litre or below is recommended for an adult. A rise of cholesterol in the body can lead to a condition in which excessive cholesterol is deposited in artery walls, called atherosclerosis. This condition blocks the blood flow to vital organs, which can result in high blood pressure or stroke.
Cholesterol is not always bad. It's a vital part of the cell wall and a precursor to substances such as brain matter and some sex hormones. There are some types of cholesterol which are beneficial to the heart and blood vessels. High-density lipoprotein is commonly called "good" cholesterol. These lipoproteins help in the removal of cholesterol from the cells, which is then transported back to the liver where it is disintegrated and excreted as waste or broken down into parts.
Cholesterol content of various foods
See also
Nutrition
Plant stanol ester
Fatty acid
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What lipid, added to certain foods to keep them fresher longer, increases the risk of heart disease?
A. Omega-3 fatty acids
B. fatty acids
C. cholesterol
D. trans fat
Answer:
---
id: scienceQA-5367
question_type: multiple_choice
question: Select the reptile below.
choices: ["tiger salamander", "grass frog", "Mojave rattlesnake", "barking tree frog"]
answer: C
explanation:
A barking tree frog is an amphibian. It has moist skin and begins its life in water.
There are many kinds of tree frogs. Most tree frogs are very small. They can walk on thin branches.
A grass frog is an amphibian. It has moist skin and begins its life in water.
Frogs live near water or in damp places. Most frogs lay their eggs in water.
A Mojave rattlesnake is a reptile. It has scaly, waterproof skin.
Rattlesnakes have fangs they can use to inject venom into their prey.
A tiger salamander is an amphibian. It has moist skin and begins its life in water.
Tiger salamanders often live in underground burrows.
|
Relavent Documents:
Document 0:::
Iguania is an infraorder of squamate reptiles that includes iguanas, chameleons, agamids, and New World lizards like anoles and phrynosomatids. Using morphological features as a guide to evolutionary relationships, the Iguania are believed to form the sister group to the remainder of the Squamata, which comprises nearly 11,000 named species, roughly 2,000 of which are iguanians. However, molecular data place Iguania well within the Squamata, as the sister taxon to the Anguimorpha and closely related to snakes. The group has been subject to debate and revision since it was classified by Charles Lewis Camp in 1923, owing to difficulties in finding adequate synapomorphic morphological characteristics. Most iguanians are arboreal, but there are several terrestrial groups. They usually have primitive fleshy, non-prehensile tongues, although the tongue is highly modified in chameleons. The group has a fossil record that extends back to the Early Jurassic (the oldest known member is Bharatagama, which lived about 190 million years ago in what is now India). Today they occur in scattered locations, including Madagascar, Fiji, the Friendly Islands (Tonga), and the Western Hemisphere.
Classification
The Iguania currently include these extant families:
Clade Acrodonta
Family Agamidae – agamid lizards, Old World arboreal lizards
Family Chamaeleonidae – chameleons
Clade Pleurodonta – American arboreal lizards, chuckwallas, iguanas
Family Leiocephalidae
Genus Leiocephalus: curly-tailed lizards
Family Corytophanidae – helmet lizards
Family Crotaphytidae – collared lizards, leopard lizards
Family Hoplocercidae – dwarf and spinytail iguanas
Family Iguanidae – marine, Fijian, Galapagos land, spinytail, rock, desert, green, and chuckwalla iguanas
Family Tropiduridae – tropidurine lizards
subclade of Tropiduridae Tropidurini – neotropical ground lizards
Family Dactyloidae – anoles
Family Polychrotidae
subclade of Polychrotidae Polychrus
Family Phrynosomatidae – North American spiny lizards
Family Liolaem
Document 1:::
The Reptile Database is a scientific database that collects taxonomic information on all living reptile species (i.e. no fossil species such as dinosaurs). The database focuses on species (as opposed to higher ranks such as families) and has entries for all currently recognized ~13,000 species and their subspecies, although there is usually a lag time of up to a few months before newly described species become available online. The database collects scientific and common names, synonyms, literature references, distribution information, type information, etymology, and other taxonomically relevant information.
History
The database was founded in 1995 as the EMBL Reptile Database when the founder, Peter Uetz, was a graduate student at the European Molecular Biology Laboratory (EMBL) in Heidelberg, Germany. Thure Etzold had developed the first web interface for the EMBL DNA sequence database, which was also used as the interface for the Reptile Database. In 2006 the database moved to The Institute for Genomic Research (TIGR) and briefly operated as the TIGR Reptile Database until TIGR was merged into the J Craig Venter Institute (JCVI), where Uetz was an associate professor until 2010. Since 2010 the database has been maintained on servers in the Czech Republic under the supervision of Peter Uetz and Jirí Hošek, a Czech programmer. In 2021, the database celebrated its 25th anniversary together with AmphibiaWeb, which had its 20th anniversary.
Content
As of September 2020, the Reptile Database lists about 11,300 species (including another ~2,200 subspecies) in about 1200 genera (see figure), and has more than 50,000 literature references and about 15,000 photos. The database has constantly grown since its inception with an average of 100 to 200 new species described per year over the preceding decade. Recently, the database also added a more or less complete list of primary type specimens.
Relationship to other databases
The Reptile Database has been a member of the Species 2000 pro
Document 2:::
Toughie was the last known living Rabbs' fringe-limbed treefrog. The species, scientifically known as Ecnomiohyla rabborum, is thought to be extinct, as the last specimen—Toughie—died in captivity on September 26, 2016.
Captivity
Toughie was captured as an adult in Panama in 2005, when researchers went on a conservation mission to rescue species from Batrachochytrium dendrobatidis, a fungus deadly to amphibians. Toughie was one of "several dozen" frogs and tadpoles of the same species to be transported back to the United States.
Toughie lived at the Atlanta Botanical Garden in Georgia. At the Garden, he was placed in a special containment area called the "frogPOD", a biosecure enclosure. Visitors to the Garden are not allowed to visit the frogPOD, as it is used to house critically endangered animals. While in captivity at the Garden, Toughie sired tadpoles with a female, but none survived. After the female died, the only other known specimen in the world was a male, leaving Toughie no other options of reproducing. The other male, who lived at the Zoo Atlanta, was euthanized on February 17, 2012, due to health concerns.
Because Toughie was brought to the Garden as an adult, his exact age was not known, but he was estimated to be at least 12 years old. On December 15, 2014, Toughie was recorded vocalizing again; it was his first known call since being collected as an adult in 2005.
Toughie died on September 26, 2016, at the Garden.
Personal characteristics
Toughie was given his name by Mark Mandica's son Anthony. Mark Mandica was Toughie's caretaker for many years at the Atlanta Botanical Garden.
Toughie did not like to be handled. He would pinch a handler's hand in an attempt to "say 'let me go'", according to handler Leslie Phillips. She continued with, "For me it is incredibly motivating working with the Rabbs' frog. Having him here is a constant reminder of what can potentially happen to other species if we don't continue the conservation work that we do here a
Document 3:::
Caribherp is an online database containing information on amphibians and reptiles of the Caribbean Islands. It was established in 1999 and serves as a resource for determining the species that occur on specific islands, viewing their distributions, and identifying them by images. Besides the primary search capability by regions and islands, the site features a global search functionality and the ability to refine lists by taxon and origin (endemic or introduced), and to sort by various features. Caribherp also includes common and scientific names, sightings, images, videos, audio of frog calls, distribution maps, geographic regions, and conservation status provided by the International Union for Conservation of Nature (IUCN).
The development and maintenance of Caribherp is accomplished through the work of S. Blair Hedges and his colleagues, and students from Penn State University and (since 2014) Temple University.
Contents
The Caribherp database currently contains 1,022 reptile and amphibian species, maps for each species, and about 2,000 professional images. This is about 5% of the roughly 8,579 amphibian species and 11,940 reptile species in the world. New species are continually being discovered and described.
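The "about 5%" figure can be reproduced directly from the counts quoted above. A quick arithmetic sketch (the species totals are taken from the text as given, not independently re-verified):

```python
# Counts quoted in the text above.
caribbean_species = 1022
world_species = 8579 + 11940   # amphibians + reptiles worldwide, per the text

share = caribbean_species / world_species
print(f"{share:.1%}")          # -> 5.0%
```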
Document 4:::
Roshd Biological Education is a quarterly science education magazine covering recent developments in biology and biology education for a Persian-speaking audience of biology teachers. Founded in 1985, it is published by The Teaching Aids Publication Bureau, Organization for Educational Planning and Research, Ministry of Education, Iran. Roshd Biological Education has an editorial board composed of Iranian biologists, experts in biology education, science journalists and biology teachers.
It is read by both biology teachers and students, as a way of launching innovations and new trends in biology education, and helping biology teachers to teach biology in better and more effective ways.
Magazine layout
As of Autumn 2012, the magazine is laid out as follows:
Editorial—often offering a point of view from the editor-in-chief on educational and/or biological topics.
Explore—New research methods and results in biology and/or education.
World—Reports and explorations of biology education worldwide.
In Brief—Summaries of research news and discoveries.
Trends—showing how new technology is altering the way we live our lives.
Point of View—Offering personal commentaries on contemporary topics.
Essay or Interview—often with a pioneer of a biological and/or educational researcher or an influential scientific educational leader.
Muslim Biologists—Short histories of Muslim Biologists.
Environment—An article on Iranian environment and its problems.
News and Reports—Offering short news and reports events on biology education.
In Brief—Short articles explaining interesting facts.
Questions and Answers—Questions about biology concepts and their answers.
Book and periodical Reviews—About new publication on biology and/or education.
Reactions—Letter to the editors.
Editorial staff
Mohammad Karamudini, editor in chief
History
Roshd Biological Education started in 1985, together with many other magazines in other sciences and the arts. The first editor was Dr. Nouri-Dalooi, th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Select the reptile below.
A. tiger salamander
B. grass frog
C. Mojave rattlesnake
D. barking tree frog
Answer:
|
sciq-6614
|
multiple_choice
|
Porifera are parazoans that exhibit simple organization and lack true what?
|
[
"tissues",
"molecules",
"cell membranes",
"nuclei"
] |
A
|
Relavent Documents:
Document 0:::
In biology, cell theory is a scientific theory first formulated in the mid-nineteenth century, that organisms are made up of cells, that they are the basic structural/organizational unit of all organisms, and that all cells come from pre-existing cells. Cells are the basic unit of structure in all organisms and also the basic unit of reproduction.
The theory was once universally accepted, but now some biologists consider non-cellular entities such as viruses living organisms, and thus disagree with the first tenet. As of 2021: "expert opinion remains divided roughly a third each between yes, no and don’t know". As there is no universally accepted definition of life, discussion still continues.
History
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and it began the scientific study of cells, known as cell biology. When observing a piece of cork under the microscope, Hooke was able to see pores, which he called cells. This was striking at the time, as such structures had not been observed before. Building on this work, Matthias Schleiden and Theodor Schwann both studied cells of animals and plants. What they discovered were significant differences between the two types of cells, which put forth the idea that cells were fundamental not only to plants, but to animals as well.
Microscopes
The discovery of the cell was made possible through the invention of the microscope. In the first century BC, Romans were able to make glass. They discovered that objects appeared to be larger under the glass. The expanded use of lenses in eyeglasses in the 13th century probably led to wider spread use of simple microscopes (magnifying glasses) with limited magnification. Compound microscopes, which combine an objective lens with an eyepiece to view a real image achieving much higher magnification, first appeared in Europe around 1620. In 1665, Robert Hooke used a microscope
Document 1:::
Parazoa (gr. Παρα-, para, "next to", and ζωα, zoa, "animals") are a taxon ranked as a subkingdom located at the base of the phylogenetic tree of the animal kingdom, in opposition to the subkingdom Eumetazoa; they group together the most primitive forms, characterized by lacking proper tissues or, at most, having tissues that are only partially differentiated. They generally comprise a single phylum, Porifera, whose members lack muscles, nerves and internal organs and in many cases resemble a cell colony rather than a true multicellular organism. All other animals are eumetazoans, which do have differentiated tissues.
On occasion, Parazoa reunites Porifera with Archaeocyatha, a group of extinct sponges sometimes considered a separate phylum. In other cases, Placozoa is included, depending on the authors.
Porifera and Archaeocyatha
Porifera and Archaeocyatha show similarities, such as a benthic and sessile habitat and the presence of pores, with differences such as the presence of internal walls and septa in Archaeocyatha. They have been considered separate phyla; however, a growing consensus holds that Archaeocyatha were in fact a type of sponge that can be classified within Porifera.
Porifera and Placozoa
Some authors include in Parazoa the sponge phylum Porifera and Placozoa (comprising only the species Trichoplax adhaerens) on the basis of shared primitive characteristics: both are simple, lack true tissues and organs, reproduce both asexually and sexually, and are invariably aquatic. As animals, they form a group that in various studies sits at the base of the phylogenetic tree, albeit as a paraphyletic assemblage. Of this group, the only survivors are the sponges, in the phylum Porifera, and Trichoplax, in the phylum Placozoa.
Parazoa do not show any body symmetry (they are asymmetric); all other groups of animals show some kind of symmetry. There are currently 5000 species, 150 of which are freshwater. The larvae are planktonic and th
Document 2:::
In biology, tissue is a historically derived biological organizational level between cells and a complete organ. A tissue is therefore often thought of as an assembly of similar cells and their extracellular matrix from the same embryonic origin that together carry out a specific function. Organs are then formed by the functional grouping together of multiple tissues.
Biological organisms follow this hierarchy:
Cells < Tissue < Organ < Organ System < Organism
The English word "tissue" derives from the French word "tissu", the past participle of the verb tisser, "to weave".
The study of tissues is known as histology or, in connection with disease, as histopathology. Xavier Bichat is considered as the "Father of Histology". Plant histology is studied in both plant anatomy and physiology. The classical tools for studying tissues are the paraffin block in which tissue is embedded and then sectioned, the histological stain, and the optical microscope. Developments in electron microscopy, immunofluorescence, and the use of frozen tissue-sections have enhanced the detail that can be observed in tissues. With these tools, the classical appearances of tissues can be examined in health and disease, enabling considerable refinement of medical diagnosis and prognosis.
Plant tissue
In plant anatomy, tissues are categorized broadly into three tissue systems: the epidermis, the ground tissue, and the vascular tissue.
Epidermis – Cells forming the outer surface of the leaves and of the young plant body.
Vascular tissue – The primary components of vascular tissue are the xylem and phloem. These transport fluids and nutrients internally.
Ground tissue – Ground tissue is less differentiated than other tissues. Ground tissue manufactures nutrients by photosynthesis and stores reserve nutrients.
Plant tissues can also be divided differently into two types:
Meristematic tissues
Permanent tissues.
Meristematic tissue
Meristematic tissue consists of actively dividing cell
Document 3:::
Like the nucleus, whether to include the vacuole in the protoplasm concept is controversial.
Terminology
Besides "protoplasm", many other related terms and distinctions were used for the cell contents over time. These were as follows:
Urschleim (Oken, 1802, 1809),
Protoplasma (Purkinje, 1840, von Mohl, 1846),
Primordialschlauch (primordial utricle, von Mohl, 1846),
sarcode (Dujardin, 1835, 1841),
Cytoplasma (Kölliker, 1863),
Hautschicht/Körnerschicht (ectoplasm/endoplasm, Pringsheim, 1854; Hofmeister, 1867),
Grundsubstanz (ground substance, Cienkowski, 1863),
metaplasm/protoplasm (Hanstein, 1868),
deutoplasm/protoplasm (van Beneden, 1870),
bioplasm (Beale, 1872),
paraplasm/protoplasm (Kupffer, 1875),
inter-filar substance theory (Velten, 1876)
Hyaloplasma (Pfeffer, 1877),
Protoplast (Hanstein, 1880),
Enchylema/Hyaloplasma (Hanstein, 1880),
Kleinkörperchen or Mikrosomen (small bodies or microsomes, Hanstein, 1882),
paramitome (Flemming, 1882),
Idioplasma (Nageli, 1884),
Zwischensu
Document 4:::
A leptoid is a type of elongated food-conducting cell, analogous to phloem, in the stems of some mosses, such as the family Polytrichaceae. Leptoids surround strands of water-conducting hydroids. They have some structural and developmental similarities to the sieve elements of seedless vascular plants. At maturity they have inclined end cell walls with small pores and degenerate nuclei. The conducting cells of mosses, leptoids and hydroids, appear similar to those of fossil protracheophytes. However, they are not thought to represent an intermediate stage in the evolution of plant vascular tissues, but rather to have had an independent evolutionary origin.
See also
Hydroid, a related water-transporting cell analogous to the xylem of vascular plants
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Porifera are parazoans that exhibit simple organization and lack true what?
A. tissues
B. molecules
C. cell membranes
D. nuclei
Answer:
|
|
sciq-6625
|
multiple_choice
|
What plant products may be classified as simple, aggregate, multiple, or accessory, depending on their origin?
|
[
"fruits",
"leaves",
"pods",
"flowers"
] |
A
|
Relavent Documents:
Document 0:::
Plants are the eukaryotes that form the kingdom Plantae; they are predominantly photosynthetic. This means that they obtain their energy from sunlight, using chloroplasts derived from endosymbiosis with cyanobacteria to produce sugars from carbon dioxide and water, using the green pigment chlorophyll. Exceptions are parasitic plants that have lost the genes for chlorophyll and photosynthesis, and obtain their energy from other plants or fungi.
Historically, as in Aristotle's biology, the plant kingdom encompassed all living things that were not animals, and included algae and fungi. Definitions have narrowed since then; current definitions exclude the fungi and some of the algae. By the definition used in this article, plants form the clade Viridiplantae (green plants), which consists of the green algae and the embryophytes or land plants (hornworts, liverworts, mosses, lycophytes, ferns, conifers and other gymnosperms, and flowering plants). A definition based on genomes includes the Viridiplantae, along with the red algae and the glaucophytes, in the clade Archaeplastida.
There are about 380,000 known species of plants, of which the majority, some 260,000, produce seeds. They range in size from single cells to the tallest trees. Green plants provide a substantial proportion of the world's molecular oxygen; the sugars they create supply the energy for most of Earth's ecosystems; other organisms, including animals, either consume plants directly or rely on organisms which do so.
Grain, fruit, and vegetables are basic human foods and have been domesticated for millennia. People use plants for many purposes, such as building materials, ornaments, writing materials, and, in great variety, for medicines. The scientific study of plants is known as botany, a branch of biology.
Definition
Taxonomic history
All living things were traditionally placed into one of two groups, plants and animals. This classification dates from Aristotle (384–322 BC), who distinguished d
Document 1:::
In the words of Brahma, the Manu classifies plants as
(1) Osadhi – plants bearing abundant flowers and fruits, but withering away after fructification,
(2) Vanaspati – plants bearing fruits without evident flowers,
(3) Vrksa – trees bearing both flowers and fruits,
(4) Guccha – bushy herbs,
(5) Gulma – succulent shrubs,
Document 2:::
Edible plant stems are one part of plants that are eaten by humans. Most plants are made up of stems, roots, leaves, and flowers, and produce fruits containing seeds. Humans most commonly eat the seeds (e.g. maize, wheat), fruit (e.g. tomato, avocado, banana), flowers (e.g. broccoli), leaves (e.g. lettuce, spinach, and cabbage), roots (e.g. carrots, beets), and stems (e.g. asparagus) of many plants. There are also a few edible petioles (also known as leaf stems), such as celery or rhubarb.
Plant stems have a variety of functions. Stems support the entire plant and have buds, leaves, flowers, and fruits. Stems are also a vital connection between leaves and roots. They conduct water and mineral nutrients through xylem tissue from roots upward, and organic compounds and some mineral nutrients through phloem tissue in any direction within the plant. Apical meristems, located at the shoot tip and axillary buds on the stem, allow plants to increase in length, surface, and mass. In some plants, such as cactus, stems are specialized for photosynthesis and water storage.
Modified stems
Typical stems are located above ground, but there are modified stems that can be found either above or below ground. Modified stems located above ground are phylloids, stolons, runners, or spurs. Modified stems located below ground are corms, rhizomes, and tubers.
Detailed description of edible plant stems
Asparagus The edible portion is the rapidly emerging stems that arise from the crowns in the
Bamboo The edible portion is the young shoot (culm).
Birch Trunk sap is drunk as a tonic or rendered into birch syrup, vinegar, beer, soft drinks, and other foods.
Broccoli The edible portion is the peduncle stem tissue, flower buds, and some small leaves.
Cauliflower The edible portion is proliferated peduncle and flower tissue.
Cinnamon Many favor the unique sweet flavor of the inner bark of cinnamon, and it is commonly used as a spice.
Fig The edible portion is stem tissue. The
Document 3:::
Human uses of plants include both practical uses, such as for food, clothing, and medicine, and symbolic uses, such as in art, mythology and literature. The reliable provision of food through agriculture is the basis of civilization. The study of plant uses by native peoples is ethnobotany, while economic botany focuses on modern cultivated plants. Plants are used in medicine, providing many drugs from the earliest times to the present, and as the feedstock for many industrial products including timber and paper as well as a wide range of chemicals. Plants give millions of people pleasure through gardening.
In art, mythology, religion, literature and film, plants play important roles, symbolising themes such as fertility, growth, purity, and rebirth. In architecture and the decorative arts, plants provide many themes, such as Islamic arabesques and the acanthus forms carved on to classical Corinthian order column capitals.
Context
Culture consists of the social behaviour and norms found in human societies and transmitted through social learning. Cultural universals in all human societies include expressive forms like art, music, dance, ritual, religion, and technologies like tool usage, cooking, shelter, and clothing. The concept of material culture covers physical expressions such as technology, architecture and art, whereas immaterial culture includes principles of social organization, mythology, philosophy, literature, and science. This article describes the many roles played by plants in human culture.
Practical uses
As food
Humans depend on plants for food, either directly or as feed for domestic animals. Agriculture deals with the production of food crops, and has played a key role in the history of world civilizations. Agriculture includes agronomy for arable crops, horticulture for vegetables and fruit, and forestry for timber. About 7,000 species of plant have been used for food, though most of today's food is derived from only 30 species. The major s
Document 4:::
Olericulture is the science of vegetable growing, dealing with the culture of non-woody (herbaceous) plants for food.
Olericulture is the production of plants for use of the edible parts. Vegetable crops can be classified into nine major categories:
Potherbs and greens – spinach and collards
Salad crops – lettuce, celery
Cole crops – cabbage and cauliflower
Root crops (tubers) – potatoes, beets, carrots, radishes
Bulb crops – onions, leeks
Legumes – beans, peas
Cucurbits – melons, squash, cucumber
Solanaceous crops – tomatoes, peppers, potatoes
Sweet corn
Olericulture deals with the production, storage, processing and marketing of vegetables. It encompasses crop establishment, including cultivar selection, seedbed preparation and establishment of vegetable crops by seed and transplants.
It also includes the maintenance and care of vegetable crops, as well as commercial and non-traditional vegetable crop production, including organic gardening and organic farming; sustainable agriculture and horticulture; hydroponics; and biotechnology.
See also
Agriculture – the cultivation of animals, plants, fungi and other life forms for food, fiber, and other products used to sustain life.
Horticulture – the industry and science of plant cultivation including the process of preparing soil for the planting of seeds, tubers, or cuttings.
Pomology – a branch of botany that studies and cultivates pome fruit, and sometimes applied more broadly, to the cultivation of any type of fruit.
Tropical horticulture – a branch of horticulture that studies and cultivates garden plants in the tropics, i.e., the equatorial regions of the world.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What plant products may be classified as simple, aggregate, multiple, or accessory, depending on their origin?
A. fruits
B. leaves
C. pods
D. flowers
Answer:
|
|
sciq-10798
|
multiple_choice
|
What method uses a retailer's coding method to catalog biological specimens in museums?
|
[
"metallic barcoding",
"dna barcoding",
"content barcoding",
"structure barcoding"
] |
B
|
Relavent Documents:
Document 0:::
The Investigative Biology Teaching Laboratories are located at Cornell University on the first floor of Comstock Hall. They are well-equipped biology teaching laboratories used to provide hands-on laboratory experience to Cornell undergraduate students. Currently, they are the home of the Investigative Biology Laboratory Course (BioG1500), and are frequently used by the Cornell Institute for Biology Teachers, the Disturbance Ecology course, and Insectapalooza. In the past, the Investigative Biology Teaching Laboratories hosted the laboratory portion of the Introductory Biology Course, numbered Bio103-104 (renumbered to BioG1103-1104).
The Investigative Biology Teaching Laboratories house the Science Communication and Public Engagement Undergraduate Minor.
History
Bio103-104
BioG1103-1104 Biological Sciences Laboratory course was a two-semester, two-credit course. BioG1103 was offered in the spring, while 1104 was offered in the fall.
BioG1500
This course was first offered in Fall 2010. It is a one-semester course, offered in the fall, spring, and summer for 2 credits. One credit is awarded for the lecture and one credit for the three-hour lab, following the SUNY system.
Document 1:::
In the social sciences, coding is an analytical process in which data, in either quantitative form (such as questionnaire results) or qualitative form (such as interview transcripts), are categorized to facilitate analysis.
One purpose of coding is to transform the data into a form suitable for computer-aided analysis. This categorization of information is an important step, for example, in preparing data for computer processing with statistical software. Prior to coding, an annotation scheme is defined. It consists of codes or tags. During coding, coders manually add codes into data where required features are identified. The coding scheme ensures that the codes are added consistently across the data set and allows for verification of previously tagged data.
Some studies will employ multiple coders working independently on the same data. This also minimizes the chance of errors from coding and is believed to increase the reliability of data.
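One common way to quantify how well independent coders agree is an inter-coder reliability statistic such as Cohen's kappa, which corrects raw agreement for agreement expected by chance. The statistic is not named in the text above, and the codes below are made up, so this is only an illustrative sketch:

```python
from collections import Counter

# Hypothetical labels assigned independently by two coders to the same six items.
coder_a = ["yes", "no", "yes", "yes", "no", "yes"]
coder_b = ["yes", "no", "no", "yes", "no", "yes"]
n = len(coder_a)

# Observed agreement: fraction of items both coders labelled identically.
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Chance agreement: probability both coders pick the same code at random,
# given each coder's own label frequencies.
count_a, count_b = Counter(coder_a), Counter(coder_b)
expected = sum((count_a[c] / n) * (count_b[c] / n) for c in set(coder_a) | set(coder_b))

kappa = (observed - expected) / (1 - expected)
print(f"observed={observed:.2f} expected={expected:.2f} kappa={kappa:.2f}")
```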
Directive
One code should apply to only one category and categories should be comprehensive. There should be clear guidelines for coders (individuals who do the coding) so that code is consistent.
Quantitative approach
For quantitative analysis, data are usually coded, measured, and recorded as nominal or ordinal variables.
Questionnaire data can be pre-coded (process of assigning codes to expected answers on designed questionnaire), field-coded (process of assigning codes as soon as data is available, usually during fieldwork), post-coded (coding of open questions on completed questionnaires) or office-coded (done after fieldwork). Note that some of the above are not mutually exclusive.
In social sciences, spreadsheets such as Excel and more advanced software packages such as R, Matlab, PSPP/SPSS, DAP/SAS, MiniTab and Stata are often used.
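As a concrete illustration of coding survey data into nominal and ordinal variables before analysis in software such as those listed above, here is a minimal sketch in Python with pandas; the column names, answer categories, and numeric codes are hypothetical:

```python
import pandas as pd

# Raw questionnaire answers as collected (hypothetical example data).
responses = pd.DataFrame({
    "employment": ["employed", "unemployed", "student", "employed"],
    "satisfaction": ["low", "high", "medium", "high"],
})

# Nominal variable: categories carry no order, so any distinct codes will do.
responses["employment_code"] = responses["employment"].map(
    {"employed": 1, "unemployed": 2, "student": 3}
)

# Ordinal variable: codes must preserve the ranking low < medium < high.
responses["satisfaction_code"] = responses["satisfaction"].map(
    {"low": 1, "medium": 2, "high": 3}
)

print(responses)
```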
Qualitative approach
For disciplines in which a qualitative format is preferential, including ethnography, humanistic geography or phenomenological psychology a varied approach to co
Document 2:::
Laboratory informatics is the specialized application of information technology aimed at optimizing and extending laboratory operations. It encompasses data acquisition (e.g. through sensors and hardware or voice), instrument interfacing, laboratory networking, data processing, specialized data management systems (such as a chromatography data system), a laboratory information management system, scientific data management (including data mining and data warehousing), and knowledge management (including the use of an electronic lab notebook). It has become more prevalent with the rise of other "informatics" disciplines such as bioinformatics, cheminformatics and health informatics. Several graduate programs are focused on some form of laboratory informatics, often with a clinical emphasis. A closely related - some consider subsuming - field is laboratory automation.
Capability Areas
In the context of Public Health Laboratories, the Association of Public Health Laboratories has identified 19 areas for self-assessment of laboratory informatics in their Laboratories Efficiencies Initiative. These include the following Capability Areas.
Laboratory Test Request and Sample Receiving
Test Preparation, LIMS Processing, Test Results Recording and Verification
Report Preparation and Distribution
Laboratory Test Scheduling
Prescheduled Testing
Specimen and Sample Tracking/Chain of Custody
Media, Reagents, Controls: Manufacturing and Inventory
Interoperability and Data Exchange
Statistical Analysis and Surveillance
Billing for Laboratory Services
Contract and Grant Management
Training, Education and Resource Management
Laboratory Certifications/Licensing
Customer Relationship Management
Quality Control (QC) and Quality Assurance (QA) Management
Laboratory Safety and Accident Investigation
Laboratory Mutual Assistance/Disaster Recovery
Core IT Service Management: Hardware, Software and Services
Policies and Procedures, including Budgeting and Funding
Sub-to
Document 3:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
Document 4:::
The National Collection of Yeast Cultures (NCYC) is a British yeast culture collection based at the Norwich Research Park in Norwich, Norfolk, United Kingdom, that currently maintains a collection of over 4400 strains and operates under the Budapest Treaty.
As well as the traditional baking and brewing yeast Saccharomyces cerevisiae, this culture collection also contains hundreds of non-pathogenic yeast species. The yeasts are kept frozen under liquid nitrogen or freeze-dried in glass ampoules. To ensure the collection's safety, it is also duplicated and stored off site. Yeasts have been stored and revived successfully decades later.
History
NCYC was founded in 1948 when a group of British brewers, who later formed the Brewing Industry Research Foundation, decided to store their yeast cultures in a single, safe deposit to ensure their longevity.
In 1981, NCYC evolved into a broader collection when it moved to the Institute of Food Research in Norwich, where it began collecting food-spoilage yeasts able to evade conventional food preservatives.
In 1999, the collection became part of the United Kingdom National Culture Collection (UKNCC), which was established to co-ordinate the activities of Britain's national collections of microbial organisms.
In 2019, the collection moved to the new facility in Quadram Institute Biosciences in the Norwich Research Park where it is currently based.
NCYC trades under QIB Extra Ltd, a wholly owned commercial subsidiary of the Quadram Institute Bioscience, based at the Quadram Institute that specialises in bespoke research services for the food, health and allied industries.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What method uses a retailer's coding method to catalog biological specimens in museums?
A. metallic barcoding
B. dna barcoding
C. content barcoding
D. structure barcoding
Answer:
|
|
sciq-2326
|
multiple_choice
|
The site of some nutrient absorption, the ileum is the third part of what digestive organ?
|
[
"rectum",
"stomach",
"large intestine",
"small intestine"
] |
D
|
Relavent Documents:
Document 0:::
The gastrointestinal wall of the gastrointestinal tract is made up of four layers of specialised tissue. From the inner cavity of the gut (the lumen) outwards, these are:
Mucosa
Submucosa
Muscular layer
Serosa or adventitia
The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the lumen of the tract and comes into direct contact with digested food (chyme). The mucosa itself is made up of three layers: the epithelium, where most digestive, absorptive and secretory processes occur; the lamina propria, a layer of connective tissue, and the muscularis mucosae, a thin layer of smooth muscle.
The submucosa contains nerves including the submucous plexus (also called Meissner's plexus), blood vessels and elastic fibres with collagen, that stretches with increased capacity but maintains the shape of the intestine.
The muscular layer surrounds the submucosa. It comprises layers of smooth muscle in longitudinal and circular orientation that also helps with continued bowel movements (peristalsis) and the movement of digested material out of and along the gut. In between the two layers of muscle lies the myenteric plexus (also called Auerbach's plexus).
The serosa/adventitia are the final layers. These are made up of loose connective tissue and coated in mucus so as to prevent any friction damage from the intestine rubbing against other tissue. The serosa is present if the tissue is within the peritoneum, and the adventitia if the tissue is retroperitoneal.
Structure
When viewed under the microscope, the gastrointestinal wall has a consistent general form, but with certain parts differing along its course.
Mucosa
The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the cavity (lumen) of the tract and comes into direct contact with digested food (chyme). The mucosa is made up of three layers:
The epithelium is the innermost layer. It is where most digestive, absorptive and secretory processes occur.
The lamina propr
Document 1:::
The small intestine or small bowel is an organ in the gastrointestinal tract where most of the absorption of nutrients from food takes place. It lies between the stomach and large intestine, and receives bile and pancreatic juice through the pancreatic duct to aid in digestion. The small intestine is about long and folds many times to fit in the abdomen. Although it is longer than the large intestine, it is called the small intestine because it is narrower in diameter.
The small intestine has three distinct regions – the duodenum, jejunum, and ileum. The duodenum, the shortest, is where preparation for absorption through small finger-like protrusions called villi begins. The jejunum is specialized for the absorption, through its lining of enterocytes, of small nutrient particles which have been previously digested by enzymes in the duodenum. The main function of the ileum is to absorb vitamin B12, bile salts, and whatever products of digestion were not absorbed by the jejunum.
Structure
Size
The length of the small intestine can vary greatly, from as short as to as long as , also depending on the measuring technique used. The typical length in a living person is . The length depends both on how tall the person is and how the length is measured. Taller people generally have a longer small intestine and measurements are generally longer after death and when the bowel is empty.
It is approximately in diameter in newborns after 35 weeks of gestational age, and in diameter in adults. On abdominal X-rays, the small intestine is considered to be abnormally dilated when the diameter exceeds 3 cm. On CT scans, a diameter of over 2.5 cm is considered abnormally dilated. The surface area of the human small intestinal mucosa, due to enlargement caused by folds, villi and microvilli, averages .
Parts
The small intestine is divided into three structural parts.
The duodenum is a short structure ranging from in length, and shaped like a "C". It surrounds the head of t
Document 2:::
The large intestine, also known as the large bowel, is the last part of the gastrointestinal tract and of the digestive system in tetrapods. Water is absorbed here and the remaining waste material is stored in the rectum as feces before being removed by defecation. The colon is the longest portion of the large intestine, and the terms are often used interchangeably but most sources define the large intestine as the combination of the cecum, colon, rectum, and anal canal. Some other sources exclude the anal canal.
In humans, the large intestine begins in the right iliac region of the pelvis, just at or below the waist, where it is joined to the end of the small intestine at the cecum, via the ileocecal valve. It then continues as the colon ascending the abdomen, across the width of the abdominal cavity as the transverse colon, and then descending to the rectum and its endpoint at the anal canal. Overall, in humans, the large intestine is about long, which is about one-fifth of the whole length of the human gastrointestinal tract.
Structure
The colon of the large intestine is the last part of the digestive system. It has a segmented appearance due to a series of saccules called haustra. It extracts water and salt from solid wastes before they are eliminated from the body and is the site in which the fermentation of unabsorbed material by the gut microbiota occurs. Unlike the small intestine, the colon does not play a major role in absorption of foods and nutrients. About 1.5 litres or 45 ounces of water arrives in the colon each day.
The colon is the longest part of the large intestine and its average length in the adult human is 65 inches or 166 cm (range of 80 to 313 cm) for males, and 61 inches or 155 cm (range of 80 to 214 cm) for females.
Sections
In mammals, the large intestine consists of the cecum (including the appendix), colon (the longest part), rectum, and anal canal.
The four sections of the colon are: the ascending colon, transverse colon, desce
Document 3:::
The Joan Mott Prize Lecture is a prize lecture awarded annually by The Physiological Society in honour of Joan Mott.
Laureates
Laureates of the award have included:
- Intestinal absorption of sugars and peptides: from textbook to surprises
See also
Physiological Society Annual Review Prize Lecture
Document 4:::
Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use.
In the human digestive system, food enters the mouth, where mechanical digestion of the food starts with mastication (chewing) and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food; the saliva also contains mucus, which lubricates the food, and hydrogen carbonate, which provides the ideal conditions of pH (alkaline) for amylase to work, and electrolytes (Na+, K+, Cl−, HCO−3). About 30% of starch is hydrolyzed into disaccharide in the oral cavity (mouth). After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. They provide a slimy layer that acts as a shield against the damag
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The site of some nutrient absorption, the ileum is the third part of what digestive organ?
A. rectum
B. stomach
C. large intestine
D. small intestine
Answer:
|
|
sciq-8727
|
multiple_choice
|
What allowed life to expand and diversify during the early cambrian period?
|
[
"hot, dry climate",
"warm, humid climate",
"cool, humid climate",
"cool, dry climate"
] |
B
|
Relavent Documents:
Document 0:::
24-isopropyl cholestane is an organic molecule produced by specific sponges, protists and marine algae. The identification of this molecule at high abundances in Neoproterozoic rocks has been interpreted to reflect the presence of multicellular life prior to the rapid diversification and radiation of life during the Cambrian explosion. In this transitional period at the start of the Phanerozoic, single-celled organisms evolved to produce many of the evolutionary lineages present on Earth today. Interpreting 24-isopropyl cholestane in ancient rocks as indicating the presence of sponges before this rapid diversification event alters the traditional understanding of the evolution of multicellular life and the coupling of biology to changes in end-Neoproterozoic climate. However, there are several arguments against causally linking 24-isopropyl cholestane to sponges based on considerations of marine algae and the potential alteration of organic molecules over geologic time. In particular the discovery of 24-isopropyl cholestane in rhizarian protists implies that this biomarker cannot be used on its own to trace sponges. Interpreting the presence of 24-isopropyl cholestane in the context of changing global biogeochemical cycles at the Proterozoic-Phanerozoic transition remains an area of active research.
24-isopropyl cholestane
Chemical argument for Precambrian sponges
24-isopropyl cholestane (figure 1, left) is a C30 sterane with chemical formula C30H54 and molecular mass 414.76 g/mol. The molecule has a cholestane skeleton with an isopropyl moiety at C24 and is the geologically stable form of 24-isopropyl cholesterol. A related and important molecule is 24-n-propyl cholestane (figure 1, right), also with the cholestane skeleton, but with an n-propyl moiety at C24.
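The quoted molecular mass can be reproduced from the formula C30H54 using standard atomic masses; this back-of-the-envelope check is not part of the original text:

```python
# Standard atomic masses (g/mol), widely tabulated values.
atomic_mass = {"C": 12.011, "H": 1.008}

# Composition of 24-isopropyl cholestane as given in the text: C30H54.
formula = {"C": 30, "H": 54}

molar_mass = sum(atomic_mass[element] * count for element, count in formula.items())
print(f"Molar mass of C30H54: {molar_mass:.2f} g/mol")  # -> 414.76
```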
24-isopropyl cholestane is produced copiously by a particular group of sponges in the class Demospongiae within the phylum Porifera. Like other molecular fossils, the presence of 24-isopropyl cholestane in
Document 1:::
Until the late 1950s, the Precambrian was not believed to have hosted multicellular organisms. However, radiometric dating techniques have shown that fossils first discovered in the Ediacara Hills of South Australia date back to the late Precambrian. These fossils are body impressions of organisms shaped like disks and fronds, some with ribbon patterns that were most likely tentacles.
These are the earliest known multicellular organisms in Earth's history, even though unicellular life had existed for a long time before that. The requirements for multicellularity were already embedded in the genes of some unicellular organisms, specifically the choanoflagellates, which are thought to be the precursors of all animals. They are closely related to sponges (Porifera), the simplest multicellular animals.
In order to understand the transition to multicellularity during the Precambrian, it is important to look at the requirements for multicellularity—both biological and environmental.
Precambrian
The Precambrian dates from the beginning of Earth's formation (4.6 billion years ago) to the beginning of the Cambrian Period, 539 million years ago. The Precambrian consists of the Hadean, Archaean and Proterozoic eons. Specifically, this article examines the Ediacaran, when the first multicellular bodies are believed to have arisen, as well as what caused the rise of multicellularity. This time period arose after the Snowball Earth of the mid Neoproterozoic. The "Snowball Earth" was a period of worldwide glaciation, which is believed to have served as a population bottleneck for the subsequent evolution of multicellular organisms.
Precambrian bodies
The Earth formed around 4.6 billion years ago, with unicellular life emerging somewhat later, after the cessation of the Late Heavy Bombardment, a period of intense asteroid impacts possibly caused by migration of the gas giant planets to their current orbits. However, multicellularity and bodies are a relatively rece
Document 2:::
The Silurian-Devonian Terrestrial Revolution, also known as the Devonian Plant Explosion (DePE) and the Devonian explosion, was a period of rapid plant and fungal diversification that occurred 428 to 359 million years ago during the Silurian and Devonian, with the most critical phase occurring during the Late Silurian and Early Devonian. This diversification of terrestrial plant life had vast impacts on the biotic composition of earth's soil, its atmosphere, its oceans, and for all plant and animal life that would follow it. Through fierce competition for light and available space on land, phenotypic diversity of plants increased greatly, comparable in scale and effect to the explosion in diversity of animal life during the Cambrian explosion, especially in vertical plant growth, which allowed for photoautotrophic canopies to develop, and forever altering plant evolutionary floras that followed. As plants evolved and radiated, so too did arthropods, which formed symbiotic relationships with them. This Silurian and Devonian flora was significantly different in appearance, reproduction, and anatomy to most modern flora. Much of this flora had died out in extinction events including the Kellwasser Event, the Hangenberg Event, the Carboniferous Rainforest Collapse, and the End-Permian Extinction.
Silurian and Devonian life
Rather than plants, it was fungi, in particular nematophytes such as Prototaxites, that dominated the early stages of this terrestrial biodiversification event. Nematophytes towered over even the largest land plants during the Silurian and Early Devonian, only being truly surpassed in size in the Early Carboniferous. The nutrient-distributing glomeromycotan mycorrhizal networks of nematophytes were very likely to have acted as facilitators for the expansion of plants into terrestrial environments, which followed the colonising fungi. The first fossils of arbuscular mycorrhizae, a type of symbiosis between fungi and vascular plants, are known from th
Document 3:::
The Cambrian explosion, Cambrian radiation, Cambrian diversification, or the Biological Big Bang refers to an interval of time approximately in the Cambrian Period of early Paleozoic when there was a sudden radiation of complex life and practically all major animal phyla started appearing in the fossil record. It lasted for about 13 – 25 million years and resulted in the divergence of most modern metazoan phyla. The event was accompanied by major diversification in other groups of organisms as well.
Before early Cambrian diversification, most organisms were relatively simple, composed of individual cells, or small multicellular organisms, occasionally organized into colonies. As the rate of diversification subsequently accelerated, the variety of life became much more complex, and began to resemble that of today. Almost all present-day animal phyla appeared during this period, including the earliest chordates.
A 2019 paper suggests that the timing should be expanded back to include the late Ediacaran, where another diverse soft-bodied biota existed and possibly persisted into the Cambrian, rather than just the narrower timeframe of the "Cambrian Explosion" event visible in the fossil record, based on analysis of chemicals that would have laid the building blocks for a progression of transitional radiations starting with the Ediacaran period and continuing at a similar rate into the Cambrian.
History and significance
The seemingly rapid appearance of fossils in the "Primordial Strata" was noted by William Buckland in the 1840s, and in his 1859 book On the Origin of Species, Charles Darwin discussed the then-inexplicable lack of earlier fossils as one of the main difficulties for his theory of descent with slow modification through natural selection. The long-running puzzlement about the seemingly-sudden appearance of the Cambrian fauna without evident precursor(s) centers on three key points: whether there really was a mass diversification of complex organisms
Document 4:::
The Ediacaran (; formerly Vendian) biota is a taxonomic period classification that consists of all life forms that were present on Earth during the Ediacaran Period (). These were enigmatic tubular and frond-shaped, mostly sessile, organisms. Trace fossils of these organisms have been found worldwide, and represent the earliest known complex multicellular organisms. The term "Ediacara biota" has received criticism from some scientists due to its alleged inconsistency, arbitrary exclusion of certain fossils, and inability to be precisely defined.
The Ediacaran biota may have undergone evolutionary radiation in a proposed event called the Avalon explosion, . This was after the Earth had thawed from the Cryogenian period's extensive glaciation. This biota largely disappeared with the rapid increase in biodiversity known as the Cambrian explosion. Most of the currently existing body plans of animals first appeared in the fossil record of the Cambrian rather than the Ediacaran. For macroorganisms, the Cambrian biota appears to have almost completely replaced the organisms that dominated the Ediacaran fossil record, although relationships are still a matter of debate.
The organisms of the Ediacaran Period first appeared around and flourished until the cusp of the Cambrian , when the characteristic communities of fossils vanished. A diverse Ediacaran community was discovered in 1995 in Sonora, Mexico, and is approximately 555 million years in age, roughly coeval with Ediacaran fossils of the Ediacara Hills in South Australia and the White Sea on the coast of Russia. While rare fossils that may represent survivors have been found as late as the Middle Cambrian (510–500 Mya), the earlier fossil communities disappear from the record at the end of the Ediacaran leaving only curious fragments of once-thriving ecosystems. Multiple hypotheses exist to explain the disappearance of this biota, including preservation bias, a changing environment, the advent of predators and compe
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What allowed life to expand and diversify during the early cambrian period?
A. hot, dry climate
B. warm, humid climate
C. cool, humid climate
D. cool, dry climate
Answer:
|
|
sciq-6259
|
multiple_choice
|
What two types of communication do both humans and birds use primarily?
|
[
"sensual and auditory",
"interaction and auditory",
"visual and auditory",
"material and auditory"
] |
C
|
Relavent Documents:
Document 0:::
Animal communication is the transfer of information from one or a group of animals (sender or senders) to one or more other animals (receiver or receivers) that affects the current or future behavior of the receivers. Information may be sent intentionally, as in a courtship display, or unintentionally, as in the transfer of scent from predator to prey with kairomones. Information may be transferred to an "audience" of several receivers. Animal communication is a rapidly growing area of study in disciplines including animal behavior, sociology, neurology and animal cognition. Many aspects of animal behavior, such as symbolic name use, emotional expression, learning and sexual behavior, are being understood in new ways.
When the information from the sender changes the behavior of a receiver, the information is referred to as a "signal". Signalling theory predicts that for a signal to be maintained in the population, both the sender and receiver should usually receive some benefit from the interaction. Signal production by senders and the perception and subsequent response of receivers are thought to coevolve. Signals often involve multiple mechanisms, e.g. both visual and auditory, and for a signal to be understood the coordinated behaviour of both sender and receiver require careful study.
Animal languages
The sounds animals make are important because they communicate the animals' state. Some animal species have been taught simple versions of human languages. Animals can use, for example, electrolocation and echolocation to communicate about prey and location. Keski-Korsu suggests a challenge of human/animal communication is that humans don't recognize animals as self-aware and deliberately communicating.
Modes
Visual
Gestures: Most animals understand communication through a visual display of distinctive body parts or bodily movements. Animals will reveal or accentuate a body part to relay certain information. The parent herring gull displays its bright yell
Document 1:::
Communication occurs when an animal produces a signal and uses it to influence the behaviour of another animal. A signal can be any behavioural, structural or physiological trait that has evolved specifically to carry information about the sender and/or the external environment and to stimulate the sensory system of the receiver to change their behaviour. A signal is different from a cue in that cues are informational traits that have not been selected for communication purposes. For example, if an alerted bird gives a warning call to a predator and causes the predator to give up the hunt, the bird is using the sound as a signal to communicate its awareness to the predator. On the other hand, if a rat forages in the leaves and makes a sound that attracts a predator, the sound itself is a cue and the interaction is not considered a communication attempt.
Air and water have different physical properties which lead to different velocity and clarity of the signal transmission process during communication. This means that common understanding of communication mechanisms and structures of terrestrial animals cannot be applied to aquatic animals. For example, a horse can sniff the air to detect pheromones but a fish which is surrounded by water will need a different method to detect chemicals.
Aquatic animals can communicate through various signal modalities including visual, auditory, tactile, chemical and electrical signals. Communication using any of these forms requires specialised signal producing and detecting organs. Thus, the structure, distribution and mechanism of these sensory systems vary amongst different classes and species of aquatic animals and they also differ greatly to those of terrestrial animals.
The basic functions of communication in aquatic animals are similar to those of terrestrial animals. In general, communication can be used to facilitate social recognition and aggregation, to locate, attract and evaluate mating partners and to engage in te
Document 2:::
Passerine birds produce song through the vocal organ, the syrinx, which is composed of bilaterally symmetric halves located where the trachea separates into the two bronchi. Using endoscopic techniques, it has been observed that song is produced by air passing between a set of medial and lateral labia on each side of the syrinx. Song is produced bilaterally, in both halves, through each separate set of labia unless air is prevented from flowing through one side of the syrinx. Birds regulate the airflow through the syrinx with muscles—M. syringealis dorsalis and M. tracheobronchialis dorsalis—that control the medial and lateral labia in the syrinx, whose action may close off airflow. Song may, hence, be produced unilaterally through one side of the syrinx when the labia are closed in the opposite side.
Early experiments discover lateralization
Lateral dominance of the hypoglossal nerve conveying messages from the brain to the syrinx was first observed in the 1970s. This lateral dominance was determined in a breed of canary, the waterschlager canary, bred for its long and complex song, by lesioning the ipsilateral tracheosyringeal branch of the hypoglossal nerve, disabling either the left or right syrinx. The numbers of song elements in the birds’ repertoires were greatly attenuated when the left side was cut, but only modestly attenuated when the right side was disabled, indicating left syringeal dominance of song production in these canaries. Similar lateralized effects have been observed in other species such as the white-crowned sparrow (Zonotrichia leucophrys), the Java sparrow (Lonchura oryzivora) and the zebra finch (Taeniopygia guttata), which is right-side dominant. However, denervation in these birds does not entirely silence the affected syllables but creates qualitative changes in phonology and frequency.
Respiratory control and neurophysiology
In waterslager canaries, which produce most syllables using the left syrinx, as soon as a unilaterally produced
Document 3:::
The difficulty of defining or measuring intelligence in non-human animals makes the subject difficult to study scientifically in birds. In general, birds have relatively large brains compared to their head size. The visual and auditory senses are well developed in most species, though the tactile and olfactory senses are well realized only in a few groups. Birds communicate using visual signals as well as through the use of calls and song. The testing of intelligence in birds is therefore usually based on studying responses to sensory stimuli.
The corvids (ravens, crows, jays, magpies, etc.) and psittacines (parrots, macaws, and cockatoos) are often considered the most intelligent birds, and are among the most intelligent animals in general. Pigeons, finches, domestic fowl, and birds of prey have also been common subjects of intelligence studies.
Studies
Bird intelligence has been studied through several attributes and abilities. Many of these studies have been on birds such as quail, domestic fowl, and pigeons kept under captive conditions. It has, however, been noted that field studies have been limited, unlike those of the apes. Birds in the crow family (corvids) as well as parrots (psittacines) have been shown to live socially, have long developmental periods, and possess large forebrains, all of which have been hypothesized to allow for greater cognitive abilities.
Counting has traditionally been considered an ability that shows intelligence. Anecdotal evidence from the 1960s has suggested that crows can count up to 3. Researchers need to be cautious, however, and ensure that birds are not merely demonstrating the ability to subitize, or count a small number of items quickly. Some studies have suggested that crows may indeed have a true numerical ability. It has been shown that parrots can count up to 6.
Cormorants used by Chinese fishermen were given every eighth fish as a reward, and found to be able to keep count up to 7. E.H. Hoh wrote in Natural Histo
Document 4:::
Structures built by non-human animals, often called animal architecture, are common in many species. Examples of animal structures include termite mounds, ant hills, wasp and beehives, burrow complexes, beaver dams, elaborate nests of birds, and webs of spiders.
Often, these structures incorporate sophisticated features such as temperature regulation, traps, bait, ventilation, special-purpose chambers and many other features. They may be created by individuals or complex societies of social animals with different forms carrying out specialized roles. These constructions may arise from complex building behaviour of animals such as in the case of night-time nests for chimpanzees, from inbuilt neural responses, which feature prominently in the construction of bird songs, or triggered by hormone release as in the case of domestic sows, or as emergent properties from simple instinctive responses and interactions, as exhibited by termites, or combinations of these. The process of building such structures may involve learning and communication, and in some cases, even aesthetics. Tool use may also be involved in building structures by animals.
Building behaviour is common in many non-human mammals, birds, insects and arachnids. It is also seen in a few species of fish, reptiles, amphibians, molluscs, urochordates, crustaceans, annelids and some other arthropods. It is virtually absent from all the other animal phyla.
Functions
Animals create structures primarily for three reasons:
to create protected habitats, i.e. homes.
to catch prey and for foraging, i.e. traps.
for communication between members of the species (intra-specific communication), i.e. display.
Animals primarily build habitat for protection from extreme temperatures and from predation. Constructed structures raise physical problems which need to be resolved, such as humidity control or ventilation, which increases the complexity of the structure. Over time, through evolution, animals use shelters for ot
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What two types of communication do both humans and birds use primarily?
A. sensual and auditory
B. interaction and auditory
C. visual and auditory
D. material and auditory
Answer:
|
|
sciq-2079
|
multiple_choice
|
The highest point of a wave is called?
|
[
"crest",
"threshold",
"surge",
"frequency"
] |
A
|
Relevant Documents:
Document 0:::
A crest point on a wave is the maximum value of upward displacement within a cycle. A crest is a point on a surface wave where the displacement of the medium is at a maximum. A trough is the opposite of a crest, so the minimum or lowest point in a cycle.
When the crests and troughs of two sine waves of equal amplitude and frequency intersect or collide, while being in phase with each other, the result is called constructive interference and the magnitudes double (above and below the line). When in antiphase – 180° out of phase – the result is destructive interference: the resulting wave is the undisturbed line having zero amplitude.
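The interference behaviour described above can be checked numerically; the following is a minimal Python sketch, assuming NumPy is available:
import numpy as np

# two sine waves of equal amplitude and frequency
t = np.linspace(0.0, 2.0 * np.pi, 1000)
wave = np.sin(t)
in_phase = np.sin(t)              # crests and troughs aligned (in phase)
anti_phase = np.sin(t + np.pi)    # 180 degrees out of phase (antiphase)

constructive = wave + in_phase    # magnitudes double above and below the line
destructive = wave + anti_phase   # the undisturbed line with zero amplitude

print(round(float(constructive.max()), 3))            # ~2.0, twice the single-wave amplitude
print(round(float(np.abs(destructive).max()), 12))    # ~0.0, within floating-point error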
See also
Crest factor
Superposition principle
Wave
Document 1:::
In fluid dynamics, the wave height of a surface wave is the difference between the elevations of a crest and a neighboring trough. Wave height is a term used by mariners, as well as in coastal, ocean and naval engineering.
At sea, the term significant wave height is used as a means to introduce a well-defined and standardized statistic to denote the characteristic height of the random waves in a sea state, including wind sea and swell. It is defined in such a way that it more or less corresponds to what a mariner observes when estimating visually the average wave height.
Definitions
Depending on context, wave height may be defined in different ways:
For a sine wave, the wave height H is twice the amplitude a (i.e., the peak-to-peak amplitude): H = 2a.
For a periodic wave, it is simply the difference between the maximum and minimum of the surface elevation z = η(x − cp t): H = max{η(x − cp t)} − min{η(x − cp t)}, with cp the phase speed (or propagation speed) of the wave. The sine wave is a specific case of a periodic wave.
In random waves at sea, when the surface elevations are measured with a wave buoy, the individual wave height Hm of each individual wave—with an integer label m, running from 1 to N, to denote its position in a sequence of N waves—is the difference in elevation between a wave crest and trough in that wave. For this to be possible, it is necessary to first split the measured time series of the surface elevation into individual waves. Commonly, an individual wave is denoted as the time interval between two successive downward-crossings through the average surface elevation (upward crossings might also be used). Then the individual wave height of each wave is again the difference between maximum and minimum elevation in the time interval of the wave under consideration.
Significant wave height
RMS wave height
Another wave-height statistic in common usage is the root-mean-square (or RMS) wave height Hrms, defined as Hrms = √((1/N) Σm Hm²), with Hm again denoting the individual wave heights in a certain time series.
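A minimal Python sketch of the RMS definition, assuming NumPy and using made-up wave heights:
import numpy as np

H = np.array([1.2, 2.5, 0.8, 3.1, 1.9, 2.2])   # hypothetical individual wave heights Hm, in metres
H_rms = np.sqrt(np.mean(H ** 2))               # root-mean-square wave height
print(round(float(H_rms), 2))                  # ~2.1 m for this made-up record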
See also
Se
Document 2:::
Wave loading is most commonly the application of a pulsed or wavelike load to a material or object. This is most commonly used in the analysis of piping, ships, or building structures which experience wind, water, or seismic disturbances.
Examples of wave loading
Offshore storms and pipes: As large waves pass over shallowly buried pipes, water pressure increases above it. As the trough approaches, pressure over the pipe drops and this sudden and repeated variation in pressure can break pipes. The difference in pressure for a wave with wave height of about 10 m would be equivalent to one atmosphere (101.3 kPa or 14.7 psi) pressure variation between crest and trough and repeated fluctuations over pipes in relatively shallow environments could set up resonance vibrations within pipes or structures and cause problems.
Engineering oil platforms: The effects of wave-loading are a serious issue for engineers designing oil platforms, which must contend with the effects of wave loading, and have devised a number of algorithms to do so.
Document 3:::
A wavenumber–frequency diagram is a plot displaying the relationship between the wavenumber (spatial frequency) and the frequency (temporal frequency) of certain phenomena. Usually frequencies are placed on the vertical axis, while wavenumbers are placed on the horizontal axis.
In the atmospheric sciences, these plots are a common way to visualize atmospheric waves.
In the geosciences, especially seismic data analysis, these plots also called f–k plot, in which energy density within a given time interval is contoured on a frequency-versus-wavenumber basis. They are used to examine the direction and apparent velocity of seismic waves and in velocity filter design.
Origins
In general, the relationship between wavelength λ, frequency f, and the phase velocity vp of a sinusoidal wave is: vp = λ f
Using the wavenumber (k = 2π/λ) and angular frequency (ω = 2π f) notation, the previous equation can be rewritten as ω = vp k.
On the other hand, the group velocity is equal to the slope of the wavenumber–frequency diagram: vg = dω/dk.
Analyzing such relationships in detail often yields information on the physical properties of the medium, such as density, composition, etc.
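As an illustration, assuming the deep-water gravity-wave relation ω = √(g k) purely as an example dispersion relation, the phase and group velocities can be read off numerically (Python with NumPy):
import numpy as np

g = 9.81                                 # gravitational acceleration, m/s^2
k = np.linspace(0.01, 1.0, 500)          # wavenumbers, rad/m
omega = np.sqrt(g * k)                   # example dispersion relation (deep-water gravity waves)

phase_velocity = omega / k               # c_p = omega / k
group_velocity = np.gradient(omega, k)   # c_g = d(omega)/dk, the local slope of the diagram

# for this particular relation the group velocity is about half the phase velocity
print(round(float(np.mean(group_velocity / phase_velocity)), 2))   # ~0.5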
See also
Dispersion relation
Document 4:::
In physical oceanography, the significant wave height (SWH, HTSGW or Hs)
is defined traditionally as the mean wave height (trough to crest) of the highest third of the waves (H1/3). It is usually defined as four times the standard deviation of the surface elevation – or equivalently as four times the square root of the zeroth-order moment (area) of the wave spectrum. The symbol Hm0 is usually used for that latter definition. The significant wave height (Hs) may thus refer to Hm0 or H1/3; the difference in magnitude between the two definitions is only a few percent.
SWH is used to characterize sea state, including winds and swell.
Origin and definition
The original definition resulted from work by the oceanographer Walter Munk during World War II. The significant wave height was intended to mathematically express the height estimated by a "trained observer". It is commonly used as a measure of the height of ocean waves.
Time domain definition
Significant wave height H1/3, or Hs or Hsig, as determined in the time domain, directly from the time series of the surface elevation, is defined as the average height of that one-third of the N measured waves having the greatest heights: H1/3 = (1/(N/3)) Σ(m=1..N/3) Hm, where Hm represents the individual wave heights, sorted into descending order of height as m increases from 1 to N. Only the highest one-third is used, since this corresponds best with visual observations of experienced mariners, whose vision apparently focuses on the higher waves.
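A minimal time-domain sketch of H1/3, assuming NumPy and using made-up wave heights:
import numpy as np

H = np.array([0.9, 1.4, 2.8, 1.1, 3.2, 2.1, 1.7, 2.6, 0.8])  # hypothetical wave heights, metres
H_sorted = np.sort(H)[::-1]            # sort into descending order of height
N_third = len(H_sorted) // 3           # the highest one-third of the N waves
H_one_third = float(H_sorted[:N_third].mean())
print(round(H_one_third, 2))           # ~2.87 m: the average of the three largest heights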
Frequency domain definition
Significant wave height Hm0, defined in the frequency domain, is used both for measured and forecasted wave variance spectra. Most easily, it is defined in terms of the variance m0 or standard deviation ση of the surface elevation: Hm0 = 4√m0 = 4ση, where m0, the zeroth moment of the variance spectrum, is obtained by integration of the variance spectrum. In case of a measurement, the standard deviation ση is the easiest and most accurate statistic to be used.
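A corresponding sketch of the Hm0 estimate, assuming NumPy and a synthetic surface-elevation record:
import numpy as np

rng = np.random.default_rng(0)
eta = rng.normal(0.0, 0.5, 100_000)   # synthetic surface elevation record with sigma_eta = 0.5 m
H_m0 = 4.0 * float(np.std(eta))       # Hm0 = 4 * sigma_eta (equivalently 4 * sqrt(m0))
print(round(H_m0, 2))                 # ~2.0 m for this synthetic record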
Another wave-height statistic in common u
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The highest point of a wave is called?
A. crest
B. threshold
C. surge
D. frequency
Answer:
|
|
sciq-8701
|
multiple_choice
|
Gas particles are constantly colliding with each other and the walls of a container, and these collisions are elastic, so there is no net loss of what?
|
[
"energy",
"temperature",
"velocity",
"heat"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
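For a reversible adiabatic expansion the gas does work at the expense of its internal energy, so its temperature decreases; a minimal numerical check, assuming an ideal diatomic gas, illustrative values, and the relation T·V^(γ−1) = constant:
gamma = 1.4                        # ideal diatomic gas (illustrative)
T1 = 300.0                         # initial temperature in kelvin (illustrative)
V1, V2 = 1.0, 2.0                  # the gas doubles its volume (illustrative)

T2 = T1 * (V1 / V2) ** (gamma - 1.0)   # from T * V**(gamma - 1) = constant
print(round(T2, 1))                    # ~227.4 K < 300 K, i.e. the temperature decreases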
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
Document 2:::
Applied physics is the application of physics to solve scientific or engineering problems. It is usually considered a bridge or a connection between physics and engineering.
"Applied" is distinguished from "pure" by a subtle combination of factors, such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. Applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of scientific principles in practical devices and systems and with the application of physics in other areas of science and high technology.
Examples of research and development areas
Accelerator physics
Acoustics
Atmospheric physics
Biophysics
Brain–computer interfacing
Chemistry
Chemical physics
Differentiable programming
Artificial intelligence
Scientific computing
Engineering physics
Chemical engineering
Electrical engineering
Electronics
Sensors
Transistors
Materials science and engineering
Metamaterials
Nanotechnology
Semiconductors
Thin films
Mechanical engineering
Aerospace engineering
Astrodynamics
Electromagnetic propulsion
Fluid mechanics
Military engineering
Lidar
Radar
Sonar
Stealth technology
Nuclear engineering
Fission reactors
Fusion reactors
Optical engineering
Photonics
Cavity optomechanics
Lasers
Photonic crystals
Geophysics
Materials physics
Medical physics
Health physics
Radiation dosimetry
Medical imaging
Magnetic resonance imaging
Radiation therapy
Microscopy
Scanning probe microscopy
Atomic force microscopy
Scanning tunneling microscopy
Scanning electron microscopy
Transmission electron microscopy
Nuclear physics
Fission
Fusion
Optical physics
Nonlinear optics
Quantum optics
Plasma physics
Quantum technology
Quantum computing
Quantum cryptography
Renewable energy
Space physics
Spectroscopy
See also
Applied science
Applied mathematics
Engineering
Engineering Physics
High Technology
Document 3:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 4:::
Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams.
Course content
Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are:
Kinematics
Newton's laws of motion
Work, energy and power
Systems of particles and linear momentum
Circular motion and rotation
Oscillations and gravitation.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class.
This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals.
This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday aftern
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Gas particles are constantly colliding with each other and the walls of a container, and these collisions are elastic, so there is no net loss of what?
A. energy
B. temperature
C. velocity
D. heat
Answer:
|
|
sciq-2892
|
multiple_choice
|
Are the joints between the vertebrae contained in your backbone fully movable, partially movable, or unmovable?
|
[
"unmovable",
"partially movable",
"fully movable",
"none of these"
] |
B
|
Relevant Documents:
Document 0:::
Each vertebra (plural: vertebrae) is an irregular bone with a complex structure composed of bone and some hyaline cartilage; together the vertebrae make up the vertebral column, or spine, of vertebrates. The proportions of the vertebrae differ according to their spinal segment and the particular species.
The basic configuration of a vertebra varies; the large part is the body, and the central part of the body is the centrum. The upper and lower surfaces of the vertebra body give attachment to the intervertebral discs. The posterior part of a vertebra forms a vertebral arch, in eleven parts, consisting of two pedicles (pedicle of vertebral arch), two laminae, and seven processes. The laminae give attachment to the ligamenta flava (ligaments of the spine). There are vertebral notches formed from the shape of the pedicles, which form the intervertebral foramina when the vertebrae articulate. These foramina are the entry and exit conduits for the spinal nerves. The body of the vertebra and the vertebral arch form the vertebral foramen, the larger, central opening that accommodates the spinal canal, which encloses and protects the spinal cord.
Vertebrae articulate with each other to give strength and flexibility to the spinal column, and the shape at their back and front aspects determines the range of movement. Structurally, vertebrae are essentially alike across the vertebrate species, with the greatest difference seen between an aquatic animal and other vertebrate animals. As such, vertebrates take their name from the vertebrae that compose the vertebral column.
Structure
General structure
In the human vertebral column the size of the vertebrae varies according to placement in the vertebral column, spinal loading, posture and pathology. Along the length of the spine the vertebrae change to accommodate different needs related to stress and mobility. Each vertebra is an irregular bone.
Every vertebra has a body (vertebral body), which consists of a large anterior middle portion called the cen
Document 1:::
The lumbar trunks are formed by the union of the efferent vessels from the lateral aortic lymph nodes.
They receive the lymph from the lower limbs, from the walls and viscera of the pelvis, from the kidneys and suprarenal glands and the deep lymphatics of the greater part of the abdominal wall.
Ultimately, the lumbar trunks empty into the cisterna chyli, a dilatation at the beginning of the thoracic duct.
Document 2:::
In vertebrates, thoracic vertebrae compose the middle segment of the vertebral column, between the cervical vertebrae and the lumbar vertebrae. In humans, there are twelve thoracic vertebrae and they are intermediate in size between the cervical and lumbar vertebrae; they increase in size going towards the lumbar vertebrae, with the lower ones being much larger than the upper. They are distinguished by the presence of facets on the sides of the bodies for articulation with the heads of the ribs, as well as facets on the transverse processes of all, except the eleventh and twelfth, for articulation with the tubercles of the ribs. By convention, the human thoracic vertebrae are numbered T1–T12, with the first one (T1) located closest to the skull and the others going down the spine toward the lumbar region.
General characteristics
These are the general characteristics of the second through eighth thoracic vertebrae. The first and ninth through twelfth vertebrae contain certain peculiarities, and are detailed below.
The bodies in the middle of the thoracic region are heart-shaped and as broad in the anteroposterior as in the transverse direction. At the ends of the thoracic region they resemble respectively those of the cervical and lumbar vertebrae. They are slightly thicker behind than in front, flat above and below, convex from side to side in front, deeply concave behind, and slightly constricted laterally and in front. They present, on either side, two costal demi-facets, one above, near the root of the pedicle, the other below, in front of the inferior vertebral notch; these are covered with cartilage in the fresh state, and, when the vertebrae are articulated with one another, form, with the intervening intervertebral fibrocartilages, oval surfaces for the reception of the heads of the ribs.
The pedicles are directed backward and slightly upward, and the inferior vertebral notches are of large size, and deeper than in any other region of the vertebral column
Document 3:::
In anatomy, the atlas (C1) is the most superior (first) cervical vertebra of the spine and is located in the neck.
The bone is named for Atlas of Greek mythology, for just as Atlas bore the weight of the heavens, the first cervical vertebra supports the head. However, the term atlas was first used by the ancient Romans for the seventh cervical vertebra (C7) due to its suitability for supporting burdens. In Greek mythology, Atlas was condemned to bear the weight of the heavens as punishment for rebelling against Zeus. Ancient depictions of Atlas show the globe of the heavens resting at the base of his neck, on C7. Sometime around 1522, anatomists decided to call the first cervical vertebra the atlas. Scholars believe that by switching the designation atlas from the seventh to the first cervical vertebra Renaissance anatomists were commenting that the point of man’s burden had shifted from his shoulders to his head--that man’s true burden was not a physical load, but rather, his mind.
The atlas is the topmost vertebra and the axis (the vertebra below it) forms the joint connecting the skull and spine. The atlas and axis are specialized to allow a greater range of motion than normal vertebrae. They are responsible for the nodding and rotation movements of the head.
The atlanto-occipital joint allows the head to nod up and down on the vertebral column. The dens acts as a pivot that allows the atlas and attached head to rotate on the axis, side to side.
The atlas's chief peculiarity is that it has no body, which has fused with the next vertebra. It is ring-like and consists of an anterior and a posterior arch and two lateral masses.
The atlas and axis are important neurologically because the brainstem extends down to the axis.
Structure
Anterior arch
The anterior arch forms about one-fifth of the ring: its anterior surface is convex, and presents at its center the anterior tubercle for the attachment of the Longus colli muscles and the anterior longitudinal ligam
Document 4:::
The costovertebral joints are the joints that connect the ribs to the vertebral column.
The articulation of the head of rib connects the head of the rib and the bodies of vertebrae.
The costotransverse joint connects the rib with the transverse processes of vertebrae.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Are the joints between the vertebrae contained in your backbone fully movable, partially movable, or unmovable?
A. unmovable
B. partially movable
C. fully movable
D. none of these
Answer:
|
|
ai2_arc-869
|
multiple_choice
|
Which is a chemical change?
|
[
"Element 1 is hammered into a thin sheet.",
"Element 2 is heated and turns into a liquid.",
"Element 3 turns a greenish color as it sits in air.",
"Element 4 is ground up into a fine, slippery powder."
] |
C
|
Relevant Documents:
Document 0:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Material is a substance or mixture of substances that constitutes an object. Materials can be pure or impure, living or non-living matter. Materials can be classified on the basis of their physical and chemical properties, or on their geological origin or biological function. Materials science is the study of materials, their properties and their applications.
Raw materials can be processed in different ways to influence their properties, by purification, shaping or the introduction of other materials. New materials can be produced from raw materials by synthesis.
In industry, materials are inputs to manufacturing processes to produce products or more complex materials.
Historical elements
Materials chart the history of humanity. The system of the three prehistoric ages (Stone Age, Bronze Age, Iron Age) were succeeded by historical ages: steel age in the 19th century, polymer age in the middle of the following century (plastic age) and silicon age in the second half of the 20th century.
Classification by use
Materials can be broadly categorized in terms of their use, for example:
Building materials are used for construction
Building insulation materials are used to retain heat within buildings
Refractory materials are used for high-temperature applications
Nuclear materials are used for nuclear power and weapons
Aerospace materials are used in aircraft and other aerospace applications
Biomaterials are used for applications interacting with living systems
Material selection is a process to determine which material should be used for a given application.
Classification by structure
The relevant structure of materials has a different length scale depending on the material. The structure and composition of a material can be determined by microscopy or spectroscopy.
Microstructure
In engineering, materials can be categorised according to their microscopic structure:
Plastics: a wide range of synthetic or semi-synthetic materials that use polymers as a main ingred
Document 3:::
A nonmetal is a chemical element that mostly lacks metallic properties. Seventeen elements are generally considered nonmetals, though some authors recognize more or fewer depending on the properties considered most representative of metallic or nonmetallic character. Some borderline elements further complicate the situation.
Nonmetals tend to have low density and high electronegativity (the ability of an atom in a molecule to attract electrons to itself). They range from colorless gases like hydrogen to shiny solids like the graphite form of carbon. Nonmetals are often poor conductors of heat and electricity, and when solid tend to be brittle or crumbly. In contrast, metals are good conductors and most are pliable. While compounds of metals tend to be basic, those of nonmetals tend to be acidic.
The two lightest nonmetals, hydrogen and helium, together make up about 98% of the observable ordinary matter in the universe by mass. Five nonmetallic elements—hydrogen, carbon, nitrogen, oxygen, and silicon—make up the overwhelming majority of the Earth's crust, atmosphere, oceans and biosphere.
The distinct properties of nonmetallic elements allow for specific uses that metals often cannot achieve. Elements like hydrogen, oxygen, carbon, and nitrogen are essential building blocks for life itself. Moreover, nonmetallic elements are integral to industries such as electronics, energy storage, agriculture, and chemical production.
Most nonmetallic elements were not identified until the 18th and 19th centuries. While a distinction between metals and other minerals had existed since antiquity, a basic classification of chemical elements as metallic or nonmetallic emerged only in the late 18th century. Since then nigh on two dozen properties have been suggested as single criteria for distinguishing nonmetals from metals.
Definition and applicable elements
Properties mentioned hereafter refer to the elements in their most stable forms in ambient conditions unless otherwise
Document 4:::
In chemistry, a chemical transport reaction describes a process for purification and crystallization of non-volatile solids. The process is also responsible for certain aspects of mineral growth from the effluent of volcanoes. The technique is distinct from chemical vapor deposition, which usually entails decomposition of molecular precursors and which gives conformal coatings.
The technique, which was popularized by Harald Schäfer, entails the reversible conversion of nonvolatile elements and chemical compounds into volatile derivatives. The volatile derivative migrates throughout a sealed reactor, typically a sealed and evacuated glass tube heated in a tube furnace. Because the tube is under a temperature gradient, the volatile derivative reverts to the parent solid and the transport agent is released at the end opposite to which it originated (see next section). The transport agent is thus catalytic. The technique requires that the two ends of the tube (which contains the sample to be crystallized) be maintained at different temperatures. So-called two-zone tube furnaces are employed for this purpose. The method derives from the Van Arkel de Boer process which was used for the purification of titanium and vanadium and uses iodine as the transport agent.
Cases of the exothermic and endothermic reactions of the transporting agent
Transport reactions are classified according to the thermodynamics of the reaction between the solid and the transporting agent. When the reaction is exothermic, then the solid of interest is transported from the cooler end (which can be quite hot) of the reactor to a hot end, where the equilibrium constant is less favorable and the crystals grow. The reaction of molybdenum dioxide with the transporting agent iodine is an exothermic process, thus the MoO2 migrates from the cooler end (700 °C) to the hotter end (900 °C):
MoO2 + I2 ⇌ MoO2I2, ΔHrxn < 0 (exothermic)
Using 10 milligrams of iodine for 4 grams of the solid, the proc
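The preference for deposition at the hotter end can be illustrated with a minimal Python sketch, assuming placeholder (not measured) values of ΔH and ΔS for the exothermic formation of the volatile complex:
import math

R = 8.314            # gas constant, J/(mol*K)
dH = -80_000.0       # placeholder reaction enthalpy, J/mol: exothermic (illustrative only)
dS = -60.0           # placeholder reaction entropy, J/(mol*K) (illustrative only)

def K(T):
    # equilibrium constant from K = exp(-dG/(R*T)) with dG = dH - T*dS
    return math.exp(-(dH - T * dS) / (R * T))

T_cool, T_hot = 700.0 + 273.15, 900.0 + 273.15   # the two ends of the tube, in kelvin
print(K(T_cool) > K(T_hot))   # True: formation of the volatile complex is favoured at the
                              # cooler end, so the solid redeposits at the hotter end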
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which is a chemical change?
A. Element 1 is hammered into a thin sheet.
B. Element 2 is heated and turns into a liquid.
C. Element 3 turns a greenish color as it sits in air.
D. Element 4 is ground up into a fine, slippery powder.
Answer:
|
|
sciq-4404
|
multiple_choice
|
What two gases are the main components of air?
|
[
"nitrogen and oxygen",
"hydrogen and helium",
"carbon and oxygen",
"oxygen and helium"
] |
A
|
Relevant Documents:
Document 0:::
The gas composition of any gas can be characterised by listing the pure substances it contains, and stating for each substance its proportion of the gas mixture's molecule count.
Gas composition of air
To give a familiar example, air has a composition (mole fraction, in per cent) of:
Nitrogen N2 78.084
Oxygen O2 20.9476
Argon Ar 0.934
Carbon Dioxide CO2 0.0314
Standard Dry Air is the agreed-upon gas composition for air from which all water vapour has been removed. There are various standards bodies which publish documents that define a dry air gas composition. Each standard provides a list of constituent concentrations, a gas density at standard conditions and a molar mass.
It is extremely unlikely that the actual composition of any specific sample of air will completely agree with any definition for standard dry air. While the various definitions for standard dry air all attempt to provide realistic information about the constituents of air, the definitions are important in and of themselves because they establish a standard which can be cited in legal contracts and publications documenting measurement calculation methodologies or equations of state.
The standards below are two examples of commonly used and cited publications that provide a composition for standard dry air:
ISO TR 29922-2017 provides a definition for standard dry air which specifies an air molar mass of 28.965 46 ± 0.000 17 kg·kmol⁻¹.
GPA 2145:2009 is published by the Gas Processors Association. It provides a molar mass for air of 28.9625 g/mol, and provides a composition for standard dry air as a footnote.
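A minimal Python sketch, using the mole fractions listed above and standard component molar masses, showing that the mean molar mass works out to roughly the ~28.96 g/mol quoted by these standards:
# mole fractions of standard dry air, in per cent (as listed above)
fractions = {"N2": 78.084, "O2": 20.9476, "Ar": 0.934, "CO2": 0.0314}
molar_masses = {"N2": 28.0134, "O2": 31.9988, "Ar": 39.948, "CO2": 44.0095}  # g/mol

total = sum(fractions.values())    # normalise; the four gases sum to ~100 %
M_air = sum(fractions[g] / total * molar_masses[g] for g in fractions)
print(round(M_air, 3))             # ~28.96 g/mol, consistent with the values quoted above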
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
An atmosphere is a layer of gas or layers of gases that envelop a planet, and is held in place by the gravity of the planetary body. A planet retains an atmosphere when the gravity is great and the temperature of the atmosphere is low. A stellar atmosphere is the outer region of a star, which includes the layers above the opaque photosphere; stars of low temperature might have outer atmospheres containing compound molecules.
The atmosphere of Earth is composed of nitrogen (78 %), oxygen (21 %), argon (0.9 %), carbon dioxide (0.04 %) and trace gases. Most organisms use oxygen for respiration; lightning and bacteria perform nitrogen fixation to produce ammonia that is used to make nucleotides and amino acids; plants, algae, and cyanobacteria use carbon dioxide for photosynthesis. The layered composition of the atmosphere minimises the harmful effects of sunlight, ultraviolet radiation, solar wind, and cosmic rays to protect organisms from genetic damage. The current composition of the atmosphere of the Earth is the product of billions of years of biochemical modification of the paleoatmosphere by living organisms.
Composition
The initial gaseous composition of an atmosphere is determined by the chemistry and temperature of the local solar nebula from which a planet is formed, and the subsequent escape of some gases from the interior of the atmosphere proper. The original atmosphere of the planets originated from a rotating disc of gases, which collapsed onto itself and then divided into a series of spaced rings of gas and matter, which later condensed to form the planets of the Solar System. The atmospheres of the planets Venus and Mars are principally composed of carbon dioxide, with smaller amounts of nitrogen, argon and oxygen.
The composition of Earth's atmosphere is determined by the by-products of the life that it sustains. Dry air (mixture of gases) from Earth's atmosphere contains 78.08% nitrogen, 20.95% oxygen, 0.93% argon, 0.04% carbon dioxide, and traces of hydrogen,
Document 3:::
This is an index of lists of molecules (i.e. by year, number of atoms, etc.). Millions of molecules have existed in the universe since before the formation of Earth. Three of them, carbon dioxide, water and oxygen, were necessary for the growth of life. Although humanity had always been surrounded by these substances, it has not always known what they were composed of.
By century
The following is an index of list of molecules organized by time of discovery of their molecular formula or their specific molecule in case of isomers:
List of compounds
By number of carbon atoms in the molecule
List of compounds with carbon number 1
List of compounds with carbon number 2
List of compounds with carbon number 3
List of compounds with carbon number 4
List of compounds with carbon number 5
List of compounds with carbon number 6
List of compounds with carbon number 7
List of compounds with carbon number 8
List of compounds with carbon number 9
List of compounds with carbon number 10
List of compounds with carbon number 11
List of compounds with carbon number 12
List of compounds with carbon number 13
List of compounds with carbon number 14
List of compounds with carbon number 15
List of compounds with carbon number 16
List of compounds with carbon number 17
List of compounds with carbon number 18
List of compounds with carbon number 19
List of compounds with carbon number 20
List of compounds with carbon number 21
List of compounds with carbon number 22
List of compounds with carbon number 23
List of compounds with carbon number 24
List of compounds with carbon numbers 25-29
List of compounds with carbon numbers 30-39
List of compounds with carbon numbers 40-49
List of compounds with carbon numbers 50+
Other lists
List of interstellar and circumstellar molecules
List of gases
List of molecules with unusual names
See also
Molecule
Empirical formula
Chemical formula
Chemical structure
Chemical compound
Chemical bond
Coordination complex
L
Document 4:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What two gases are the main components of air?
A. nitrogen and oxygen
B. hydrogen and helium
C. carbon and oxygen
D. oxygen and helium
Answer:
|
|
sciq-7366
|
multiple_choice
|
Earthworms are important deposit feeders that help form and enrich what material?
|
[
"coal",
"shale",
"soil",
"wood"
] |
C
|
Relevant Documents:
Document 0:::
An earthworm is a soil-dwelling terrestrial invertebrate that belongs to the phylum Annelida. The term is the common name for the largest members of the class (or subclass, depending on the author) Oligochaeta. In classical systems, they were in the order of Opisthopora since the male pores opened posterior to the female pores, although the internal male segments are anterior to the female. Theoretical cladistic studies have placed them in the suborder Lumbricina of the order Haplotaxida, but this may change. Other slang names for earthworms include "dew-worm", "rainworm", "nightcrawler", and "angleworm" (from its use as angling hookbaits). Larger terrestrial earthworms are also called megadriles (which translates to "big worms") as opposed to the microdriles ("small worms") in the semiaquatic families Tubificidae, Lumbricidae and Enchytraeidae. The megadriles are characterized by a distinct clitellum (more extensive than that of microdriles) and a vascular system with true capillaries.
Earthworms are commonly found in moist, compost-rich soil, feeding on a wide variety of organic matter, including detritus, living protozoa, rotifers, nematodes, bacteria, fungi and other microorganisms. An earthworm's digestive system runs the length of its body. They are one of nature's most important detritivores and coprophages, and also serve as food for many low-level consumers within the ecosystems.
Earthworms exhibit an externally segmented tube-within-a-tube body plan with corresponding internal segmentations, and usually have setae on all segments. They have a cosmopolitan distribution wherever soil, water and temperature conditions allow. They have a double transport system made of coelomic fluid that moves within the fluid-filled coelom and a simple, closed circulatory system, and respire (breathe) via cutaneous respiration. As soft-bodied invertebrates, they lack a true skeleton, but their structure is maintained by fluid-filled coelom chambers that function as a h
Document 1:::
USDA soil taxonomy (ST) developed by the United States Department of Agriculture and the National Cooperative Soil Survey provides an elaborate classification of soil types according to several parameters (most commonly their properties) and in several levels: Order, Suborder, Great Group, Subgroup, Family, and Series. The classification was originally developed by Guy Donald Smith, former director of the U.S. Department of Agriculture's soil survey investigations.
Discussion
A taxonomy is an arrangement in a systematic manner; the USDA soil taxonomy has six levels of classification. They are, from most general to specific: order, suborder, great group, subgroup, family and series. Soil properties that can be measured quantitatively are used in this classification system – they include: depth, moisture, temperature, texture, structure, cation exchange capacity, base saturation, clay mineralogy, organic matter content and salt content. There are 12 soil orders (the top hierarchical level) in soil taxonomy. The names of the orders end with the suffix -sol. The criteria for the different soil orders include properties that reflect major differences in the genesis of soils. The orders are:
Alfisol – soils with aluminium and iron. They have horizons of clay accumulation, and form where there is enough moisture and warmth for at least three months of plant growth. They constitute 10% of soils worldwide.
Andisol – volcanic ash soils. They are young soils. They cover 1% of the world's ice-free surface.
Aridisol – dry soils forming under desert conditions which have fewer than 90 consecutive days of moisture during the growing season and are nonleached. They include nearly 12% of soils on Earth. Soil formation is slow, and accumulated organic matter is scarce. They may have subsurface zones of caliche or duripan. Many aridisols have well-developed Bt horizons showing clay movement from past periods of greater moisture.
Entisol – recently formed soils that lack well-d
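To make the six-level hierarchy and the order names above easier to work with, here is a minimal Python sketch (not part of the source text; the order descriptions are paraphrased from the excerpt and the structure is illustrative only):

```python
# Minimal sketch of the USDA soil taxonomy hierarchy described above.
# Levels run from most general to most specific; only a few of the twelve
# orders are included, with descriptions paraphrased from the excerpt.

TAXONOMY_LEVELS = ["order", "suborder", "great group", "subgroup", "family", "series"]

SOIL_ORDERS = {
    "Alfisol": "clay-accumulation horizons; about 10% of soils worldwide",
    "Andisol": "young volcanic ash soils; about 1% of the ice-free surface",
    "Aridisol": "dry desert soils; nearly 12% of soils on Earth",
    "Entisol": "recently formed soils with little profile development",
}

def describe(order: str) -> str:
    """Return a one-line description of a soil order, if it is in this sketch."""
    return f"{order}: {SOIL_ORDERS.get(order, 'not included in this sketch')}"

if __name__ == "__main__":
    print(" > ".join(TAXONOMY_LEVELS))   # order > suborder > ... > series
    print(describe("Aridisol"))
```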
Document 2:::
The soil food web is the community of organisms living all or part of their lives in the soil. It describes a complex living system in the soil and how it interacts with the environment, plants, and animals.
Food webs describe the transfer of energy between species in an ecosystem. While a food chain examines one, linear, energy pathway through an ecosystem, a food web is more complex and illustrates all of the potential pathways. Much of this transferred energy comes from the sun. Plants use the sun’s energy to convert inorganic compounds into energy-rich, organic compounds, turning carbon dioxide and minerals into plant material by photosynthesis. Plant flowers exude energy-rich nectar above ground and plant roots exude acids, sugars, and ectoenzymes into the rhizosphere, adjusting the pH and feeding the food web underground.
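For reference, the overall photosynthesis reaction referred to above can be written in its standard simplified textbook form (the equation itself is not part of the excerpt):

```latex
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \;\xrightarrow{\text{light}}\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```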
Plants are called autotrophs because they make their own energy; they are also called producers because they produce energy available for other organisms to eat. Heterotrophs are consumers that cannot make their own food. In order to obtain energy they eat plants or other heterotrophs.
Above ground food webs
In above ground food webs, energy moves from producers (plants) to primary consumers (herbivores) and then to secondary consumers (predators). The phrase, trophic level, refers to the different levels or steps in the energy pathway. In other words, the producers, consumers, and decomposers are the main trophic levels. This chain of energy transferring from one species to another can continue several more times, but eventually ends. At the end of the food chain, decomposers such as bacteria and fungi break down dead plant and animal material into simple nutrients.
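To illustrate the web-versus-chain distinction drawn above, a food web can be represented as a directed graph in which each producer or consumer points to the organisms that eat it. The species and links in the sketch below are a hypothetical toy example, not data from the excerpt:

```python
# Toy above-ground food web as an adjacency list: each key maps to the
# consumers that feed on it. Species and links are illustrative only.
food_web = {
    "grass":       ["grasshopper", "rabbit"],
    "grasshopper": ["bird"],
    "rabbit":      ["fox"],
    "bird":        ["fox"],
    "fox":         [],  # top predator in this toy web
}

def energy_paths(web, start, top):
    """Enumerate every energy pathway from a producer to a top consumer."""
    if start == top:
        return [[top]]
    return [[start] + path
            for consumer in web.get(start, [])
            for path in energy_paths(web, consumer, top)]

print(energy_paths(food_web, "grass", "fox"))
# [['grass', 'grasshopper', 'bird', 'fox'], ['grass', 'rabbit', 'fox']]
```

A food chain corresponds to a single path through this graph, while the food web is the full set of such paths.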
Methodology
The nature of soil makes direct observation of food webs difficult. Since soil organisms range in size from less than 0.1 mm (nematodes) to greater than 2 mm (earthworms) there are many different ways to extract them. Soil samples are often taken using a metal
Document 3:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
Document 4:::
The Géotechnique Lecture is a biennial lecture on the topic of soil mechanics, organised by the British Geotechnical Association and named after its major scientific journal, Géotechnique.
This should not be confused with the annual BGA Rankine Lecture.
List of Géotechnique Lecturers
See also
Named lectures
Rankine Lecture
Terzaghi Lecture
External links
ICE Géotechnique journal
British Geotechnical Association
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Earthworms are important deposit feeders that help form and enrich what material?
A. coal
B. shale
C. soil
D. wood
Answer:
|
|
sciq-787
|
multiple_choice
|
What part of the body are eggs formed in?
|
[
"pancreas",
"ovaries",
"intestine",
"brain"
] |
B
|
Relevant Documents:
Document 0:::
Chickens (Gallus gallus domesticus) and their eggs have been used extensively as research models throughout the history of biology. Today they continue to serve as an important model for normal human biology as well as pathological disease processes.
History
Chicken embryos as a research model
Human fascination with the chicken and its egg is so deeply rooted in history that it is hard to say exactly when avian exploration began. As early as 1400 BCE, ancient Egyptians artificially incubated chicken eggs to propagate their food supply. The developing chicken in the egg first appears in written history after catching the attention of the famous Greek philosopher Aristotle, around 350 BCE. As Aristotle opened chicken eggs at various time points of incubation, he noted how the organism changed over time. Through his writing of Historia Animalium, he introduced some of the earliest studies of embryology based on his observations of the chicken in the egg.
Aristotle recognized significant similarities between human and chicken development. From his studies of the developing chick, he was able to correctly decipher the role of the placenta and umbilical cord in the human.
Chick research of the 16th century significantly modernized ideas about human physiology. European scientists, including Ulisse Aldrovandi, Volcher Cotier and William Harvey, used the chick to demonstrate tissue differentiation, disproving the widely held belief of the time that organisms are "preformed" in their adult version and only grow larger during development. Distinct tissue areas were recognized that grew and gave rise to specific structures, including the blastoderm, or chick origin. Harvey also closely watched the development of the heart and blood and was the first to note the directional flow of blood between veins and arteries. The relatively large size of the chick as a model organism allowed scientists during this time to make these significant observations without the hel
Document 1:::
The germinal epithelium is the epithelial layer of the seminiferous tubules of the testicles. It is also known as the wall of the seminiferous tubules. The cells in the epithelium are connected via tight junctions.
There are two types of cells in the germinal epithelium. The large Sertoli cells, which do not divide, function as supportive cells to the developing sperm. The second cell type comprises the cells of the spermatogenic lineage, which develop to eventually become sperm cells (spermatozoa). Typically, the spermatogenic cells form four to eight layers in the germinal epithelium.
Document 2:::
This is a list of cells in humans derived from the three embryonic germ layers – ectoderm, mesoderm, and endoderm.
Cells derived from ectoderm
Surface ectoderm
Skin
Trichocyte
Keratinocyte
Anterior pituitary
Gonadotrope
Corticotrope
Thyrotrope
Somatotrope
Lactotroph
Tooth enamel
Ameloblast
Neural crest
Peripheral nervous system
Neuron
Glia
Schwann cell
Satellite glial cell
Neuroendocrine system
Chromaffin cell
Glomus cell
Skin
Melanocyte
Nevus cell
Merkel cell
Teeth
Odontoblast
Cementoblast
Eyes
Corneal keratocyte
Neural tube
Central nervous system
Neuron
Glia
Astrocyte
Ependymocytes
Muller glia (retina)
Oligodendrocyte
Oligodendrocyte progenitor cell
Pituicyte (posterior pituitary)
Pineal gland
Pinealocyte
Cells derived from mesoderm
Paraxial mesoderm
Mesenchymal stem cell
Osteochondroprogenitor cell
Bone (Osteoblast → Osteocyte)
Cartilage (Chondroblast → Chondrocyte)
Myofibroblast
Fat
Lipoblast → Adipocyte
Muscle
Myoblast → Myocyte
Myosatellite cell
Tendon cell
Cardiac muscle cell
Other
Fibroblast → Fibrocyte
Other
Digestive system
Interstitial cell of Cajal
Intermediate mesoderm
Renal stem cell
Angioblast → Endothelial cell
Mesangial cell
Intraglomerular
Extraglomerular
Juxtaglomerular cell
Macula densa cell
Stromal cell → Interstitial cell → Telocytes
Simple epithelial cell → Podocyte
Kidney proximal tubule brush border cell
Reproductive system
Sertoli cell
Leydig cell
Granulosa cell
Peg cell
Germ cells (which migrate here primordially)
spermatozoon
ovum
Lateral plate mesoderm
Hematopoietic stem cell
Lymphoid
Lymphoblast
see lymphocytes
Myeloid
CFU-GEMM
see myeloid cells
Circulatory system
Endothelial progenitor cell
Endothelial colony forming cell
Endothelial stem cell
Angioblast/Mesoangioblast
Pericyte
Mural cell
Document 3:::
Reproductive biology includes both sexual and asexual reproduction.
Reproductive biology includes a wide range of fields:
Reproductive systems
Endocrinology
Sexual development (Puberty)
Sexual maturity
Reproduction
Fertility
Human reproductive biology
Endocrinology
Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands.
Reproductive systems
The reproductive system includes both internal and external organs. There are two reproductive systems, the male and the female, each containing different organs. These systems work together to produce offspring.
Female reproductive system
The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth.
These structures include:
Ovaries
Oviducts
Uterus
Vagina
Mammary Glands
Estrogen is one of the principal hormones that regulate the female reproductive system.
Male reproductive system
The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia.
Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system. However, the enzyme aromatase is present in the testes and is capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract.
Animal Reproductive Biology
Animal reproduction oc
Document 4:::
In biology, a blastomere is a type of cell produced by cell division (cleavage) of the zygote after fertilization; blastomeres are an essential part of blastula formation, and blastocyst formation in mammals.
Human blastomere characteristics
In humans, blastomere formation begins immediately following fertilization and continues through the first week of embryonic development. About 90 minutes after fertilization, the zygote divides into two cells. The two-cell blastomere state, present after the zygote first divides, is considered the earliest mitotic product of the fertilized oocyte. These mitotic divisions continue and result in a grouping of cells called blastomeres. During this process, the total size of the embryo does not increase, so each division results in smaller and smaller cells. When the zygote contains 16 to 32 blastomeres it is referred to as a morula. These are the preliminary stages in the embryo beginning to form. Once this begins, microtubules within the morula's cytosolic material in the blastomere cells can develop into important membrane functions, such as sodium pumps. These pumps allow the inside of the embryo to fill with blastocoelic fluid, which supports the further growth of life.
The blastomere is considered totipotent; that is, blastomeres are capable of developing from a single cell into a fully fertile adult organism. This has been demonstrated through studies and conjectures made with mouse blastomeres, which have been accepted as true for most mammalian blastomeres as well. Studies have analyzed monozygotic twin mouse blastomeres in their two-cell state, and have found that when one of the twin blastomeres is destroyed, a fully fertile adult mouse can still develop. Thus, it can be assumed that since one of the twin cells was totipotent, the destroyed one originally was as well.
Relative blastomere size within the embryo is dependent not only on the stage of the cleavage, but also on the regularity of the cleavage amongst t
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What part of the body are eggs formed in?
A. pancreas
B. ovaries
C. intestine
D. brain
Answer:
|
|
sciq-5076
|
multiple_choice
|
Coal is a solid hydrocarbon formed from what type of decaying material?
|
[
"insects",
"plant",
"soil",
"mammals"
] |
B
|
Relevant Documents:
Document 0:::
Biotic material or biological derived material is any material that originates from living organisms. Most such materials contain carbon and are capable of decay.
The earliest life on Earth arose at least 3.5 billion years ago. Earlier physical evidence of life includes graphite, a biogenic substance, in 3.7-billion-year-old metasedimentary rocks discovered in southwestern Greenland, as well as "remains of biotic life" found in 4.1-billion-year-old rocks in Western Australia. Earth's biodiversity has expanded continually except when interrupted by mass extinctions. Although scholars estimate that over 99 percent of all species of life (over five billion) that ever lived on Earth are extinct, there are still an estimated 10–14 million extant species, of which about 1.2 million have been documented and over 86% have not yet been described.
Examples of biotic materials are wood, straw, humus, manure, bark, crude oil, cotton, spider silk, chitin, fibrin, and bone.
The use of biotic materials and processed biotic materials (bio-based materials) as natural alternatives to synthetics is popular with those who are environmentally conscious, because such materials are usually biodegradable and renewable, and their processing is commonly understood and has minimal environmental impact. However, not all biotic materials are used in an environmentally friendly way, such as those that require high levels of processing, are harvested unsustainably, or are used to produce carbon emissions.
When the source of the recently living material has little importance to the product produced, such as in the production of biofuels, biotic material is simply called biomass. Many fuel sources have biological origins and may be divided roughly into fossil fuels and biofuels.
In soil science, biotic material is often referred to as organic matter. Biotic materials in soil include glomalin, Dopplerite and humic acid. Some biotic material may not be considered to be organic matte
Document 1:::
Phyllocladane is a tricyclic diterpane which is generally found in gymnosperm resins. It has a formula of C20H34 and a molecular weight of 274.4840. As a biomarker, it can be used to learn about the gymnosperm input into a hydrocarbon deposit, and about the age of the deposit in general. It indicates a terrigenous origin of the source rock. Diterpanes such as phyllocladane are found in source rocks as early as the Middle and Late Devonian, which indicates that any rock containing them is no more than approximately 360 Ma old. Phyllocladane is commonly found in lignite and, like other resinites derived from gymnosperms, is naturally enriched in 13C. This enrichment is a result of the enzymatic pathways used to synthesize the compound.
The compound can be identified by GC-MS. A peak at m/z 123 is indicative of tricyclic diterpenoids in general, and phyllocladane in particular is further characterized by strong peaks at m/z 231 and m/z 189. The presence of phyllocladane and its abundance relative to other tricyclic diterpanes can be used to differentiate between various oil fields.
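As a small numerical check on the figures quoted above, the sketch below recomputes the molecular weight of C20H34 from standard atomic masses and collects the diagnostic m/z values; it is illustrative only and not part of the source text:

```python
# Recompute the molecular weight of phyllocladane (C20H34) from standard
# atomic masses and list the diagnostic GC-MS fragment ions quoted above.
ATOMIC_MASS = {"C": 12.011, "H": 1.008}  # g/mol
FORMULA = {"C": 20, "H": 34}

mw = sum(ATOMIC_MASS[element] * count for element, count in FORMULA.items())
print(f"C20H34 molecular weight ~ {mw:.2f} g/mol")  # ~274.49, consistent with 274.484

# m/z 123 indicates tricyclic diterpenoids in general; strong peaks at
# m/z 231 and 189 further characterize phyllocladane.
diagnostic_mz = {123: "tricyclic diterpenoid (general)",
                 231: "phyllocladane",
                 189: "phyllocladane"}
print(sorted(diagnostic_mz))
```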
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
A. increases
B. decreases
C. stays the same
D. Impossible to tell/need more information
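For reference, the standard reasoning behind this particular example (not given in the passage) treats a reversible adiabatic expansion of an ideal gas, for which

```latex
pV^{\gamma} = \text{const} \quad\Longrightarrow\quad TV^{\gamma-1} = \text{const}, \qquad \gamma = \frac{C_p}{C_v} > 1 .
```

Under the usual reading (the gas does work on its surroundings as it expands), an increase in volume therefore forces the temperature to fall, so the intended answer is B; in a free, unresisted adiabatic expansion of an ideal gas the temperature instead stays constant, which is exactly the kind of conceptual subtlety such questions are designed to probe.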
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
Retene, methyl isopropyl phenanthrene or 1-methyl-7-isopropylphenanthrene, C18H18, is a polycyclic aromatic hydrocarbon present in the coal tar fraction boiling above 360 °C. It occurs naturally in the tars obtained by the distillation of resinous woods. It crystallizes in large plates, which melt at 98.5 °C and boil at 390 °C. It is readily soluble in warm ether and in hot glacial acetic acid. Sodium and boiling amyl alcohol reduce it to a tetrahydroretene, but if it is heated with phosphorus and hydriodic acid to 260 °C, a dodecahydride is formed. Chromic acid oxidizes it to retene quinone, phthalic acid and acetic acid. It forms a picrate that melts at 123-124 °C.
Retene is derived by degradation of specific diterpenoids biologically produced by conifer trees. The presence of traces of retene in the air is an indicator of forest fires; it is a major product of pyrolysis of conifer trees. It is also present in effluents from wood pulp and paper mills.
Retene, together with cadalene, simonellite and ip-iHMN, is a biomarker of vascular plants, which makes it useful for paleobotanic analysis of rock sediments. The retene/cadalene ratio in sediments can reveal the proportion of the family Pinaceae in the biosphere.
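A minimal sketch of how such a biomarker ratio might be computed from chromatogram peak areas follows; the peak-area values and the helper name are hypothetical, not data from the excerpt:

```python
# Hypothetical GC-MS peak areas (arbitrary units) for two plant biomarkers.
peak_areas = {"retene": 4.2e5, "cadalene": 1.4e5}

def retene_cadalene_ratio(areas: dict) -> float:
    """Retene/cadalene ratio, as commonly reported for sediment samples."""
    return areas["retene"] / areas["cadalene"]

print(f"retene/cadalene = {retene_cadalene_ratio(peak_areas):.2f}")  # 3.00
```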
Health effects
A recent study has shown that retene, a component of Amazonian organic PM10, is cytotoxic to human lung cells.
Document 4:::
In biology, the molecules that an organism uses to supply carbon for generating biomass are referred to as its carbon source. Carbon sources may be organic or inorganic. Heterotrophs must use organic molecules as both their carbon source and their energy source, in contrast to autotrophs, which can use an inorganic carbon source together with an abiotic energy source, such as light (photoautotrophs) or inorganic chemical energy (chemolithotrophs).
The biological use of carbon is one component of the carbon cycle, which begins with an inorganic carbon source such as carbon dioxide and proceeds through carbon fixation.[1]
Types of organism by carbon source
Heterotrophs
Autotrophs
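The classification sketched in this excerpt can be summarised by crossing carbon source with energy source. The mapping below is a standard textbook grid expressed as a small Python sketch; the excerpt itself names only heterotrophs, photoautotrophs and chemolithotrophs, so the remaining labels are supplied as an assumption:

```python
# Standard cross-classification of organisms by carbon source and energy source.
trophic_types = {
    ("organic carbon",   "light"):    "photoheterotroph",
    ("organic carbon",   "chemical"): "chemoheterotroph",
    ("inorganic carbon", "light"):    "photoautotroph",
    ("inorganic carbon", "chemical"): "chemolithoautotroph",
}

def classify(carbon_source: str, energy_source: str) -> str:
    """Return the trophic label for a (carbon source, energy source) pair."""
    return trophic_types.get((carbon_source, energy_source), "unknown")

print(classify("inorganic carbon", "light"))  # photoautotroph
```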
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Coal is a solid hydrocarbon formed from what type of decaying material?
A. insects
B. plant
C. soil
D. mammals
Answer:
|
|
sciq-7970
|
multiple_choice
|
A phospholipid is a bond between the glycerol component of a lipid and what else?
|
[
"chlorine molecule",
"elemental molecule",
"oxygen molecule",
"phosphorous molecule"
] |
D
|
Relevant Documents:
Document 0:::
Glycerophospholipids or phosphoglycerides are glycerol-based phospholipids. They are the main component of biological membranes. Two major classes are known: those for bacteria and eukaryotes and a separate family for archaea.
Structures
The term glycerophospholipid signifies any derivative of glycerophosphoric acid that contains at least one O-acyl, O-alkyl, or O-alk-1'-enyl residue attached to the glycerol moiety. The phosphate group forms an ester linkage to the glycerol. The long-chained hydrocarbons are typically attached through ester linkages in bacteria/eukaryotes and by ether linkages in archaea. In bacteria and eukaryotes, the lipids consist of diesters, commonly of C16 or C18 fatty acids. These acids are straight-chained and, especially for the C18 members, can be unsaturated. For archaea, the hydrocarbon chains have chain lengths of C10, C15, C20, etc., since they are derived from isoprene units. These chains are branched, with one methyl substituent per C5 subunit. These chains are linked to the glycerol phosphate by ether linkages.
The two hydrocarbon chains attached to the glycerol are hydrophobic while the polar head, which mainly consists of the phosphate group attached to the third carbon of the glycerol backbone, is hydrophilic. This dual characteristic leads to the amphipathic nature of glycerophospholipids.
They are usually organized into a bilayer in membranes with the polar hydrophilic heads sticking outwards to the aqueous environment and the non-polar hydrophobic tails pointing inwards. Glycerophospholipids consist of various diverse species which usually differ slightly in structure. The most basic structure is a phosphatidate. This species is an important intermediate in the synthesis of many phosphoglycerides. The presence of an additional group attached to the phosphate allows for many different phosphoglycerides.
By convention, structures of these compounds show the 3 glycerol carbon atoms vertically with the phosphate att
Document 1:::
Phosphatidylethanolamine (PE) is a class of phospholipids found in biological membranes. They are synthesized by the addition of cytidine diphosphate-ethanolamine to diglycerides, releasing cytidine monophosphate. S-Adenosyl methionine can subsequently methylate the amine of phosphatidylethanolamines to yield phosphatidylcholines.
Function
In cells
Phosphatidylethanolamines are found in all living cells, composing 25% of all phospholipids. In human physiology, they are found particularly in nervous tissue such as the white matter of brain, nerves, neural tissue, and in spinal cord, where they make up 45% of all phospholipids.
Phosphatidylethanolamines play a role in membrane fusion and in disassembly of the contractile ring during cytokinesis in cell division. Additionally, it is thought that phosphatidylethanolamine regulates membrane curvature. Phosphatidylethanolamine is an important precursor, substrate, or donor in several biological pathways.
As a polar head group, phosphatidylethanolamine creates a more viscous lipid membrane compared to phosphatidylcholine. For example, the melting temperature of di-oleoyl-phosphatidylethanolamine is -16 °C while the melting temperature of di-oleoyl-phosphatidylcholine is -20 °C. If the lipids had two palmitoyl chains, phosphatidylethanolamine would melt at 63 °C while phosphatidylcholine would melt already at 41 °C. Lower melting temperatures correspond, in a simplistic view, to more fluid membranes.
Document 2:::
The lipidome refers to the totality of lipids in cells. Lipids are one of the four major molecular components of biological organisms, along with proteins, sugars and nucleic acids. Lipidome is a term coined in the context of omics in modern biology, within the field of lipidomics. It can be studied using mass spectrometry and bioinformatics as well as traditional lab-based methods. The lipidome of a cell can be subdivided into the membrane-lipidome and mediator-lipidome.
The first cell lipidome to be published was that of a mouse macrophage in 2010. The lipidome of the yeast Saccharomyces cerevisiae has been characterised with an estimated 95% coverage; studies of the human lipidome are ongoing. For example, the human plasma lipidome consists of almost 600 distinct molecular species. Research suggests that the lipidome of an individual may be able to indicate cancer risks associated with dietary fats, particularly breast cancer.
See also
Genome
Proteome
Glycome
Document 3:::
Lipidology is the scientific study of lipids. Lipids are a group of biological macromolecules that have a multitude of functions in the body. Clinical studies on lipid metabolism in the body have led to developments in therapeutic lipidology for disorders such as cardiovascular disease.
History
Compared to other biomedical fields, lipidology was long-neglected as the handling of oils, smears, and greases was unappealing to scientists and lipid separation was difficult. It was not until 2002 that lipidomics, the study of lipid networks and their interaction with other molecules, appeared in the scientific literature. Attention to the field was bolstered by the introduction of chromatography, spectrometry, and various forms of spectroscopy to the field, allowing lipids to be isolated and analyzed. The field was further popularized following the cytologic application of the electron microscope, which led scientists to find that many metabolic pathways take place within, along, and through the cell membrane - the properties of which are strongly influenced by lipid composition.
Clinical lipidology
The Framingham Heart Study and other epidemiological studies have found a correlation between lipoproteins and cardiovascular disease (CVD). Lipoproteins are generally a major target of study in lipidology since lipids are transported throughout the body in the form of lipoproteins.
A class of lipids known as phospholipids helps make up lipoproteins, one type of which is high-density lipoprotein (HDL). A high concentration of high-density lipoprotein cholesterol (HDL-C) has a vasoprotective effect on the body, a finding that correlates with improved cardiovascular outcomes. There is also a correlation between diseases such as chronic kidney disease, coronary artery disease, and diabetes mellitus and a reduced vasoprotective effect of HDL.
Another factor of CVD that is often overlooked involves the
Document 4:::
Sphingolipids are a class of lipids containing a backbone of sphingoid bases, which are a set of aliphatic amino alcohols that includes sphingosine. They were discovered in brain extracts in the 1870s and were named after the mythological sphinx because of their enigmatic nature. These compounds play important roles in signal transduction and cell recognition. Sphingolipidoses, or disorders of sphingolipid metabolism, have particular impact on neural tissue. A sphingolipid with a terminal hydroxyl group is a ceramide. Other common groups bonded to the terminal oxygen atom include phosphocholine, yielding a sphingomyelin, and various sugar monomers or dimers, yielding cerebrosides and globosides, respectively. Cerebrosides and globosides are collectively known as glycosphingolipids.
Structure
The long-chain bases, sometimes simply known as sphingoid bases, are the first non-transient products of de novo sphingolipid synthesis in both yeast and mammals. These compounds, specifically known as phytosphingosine and dihydrosphingosine (also known as sphinganine, although this term is less common), are mainly C18 compounds, with somewhat lower levels of C20 bases. Ceramides and glycosphingolipids are N-acyl derivatives of these compounds.
The sphingosine backbone is O-linked to a (usually) charged head group such as ethanolamine, serine, or choline.
The backbone is also amide-linked to an acyl group, such as a fatty acid.
Types
Simple sphingolipids, which include the sphingoid bases and ceramides, make up the early products of the sphingolipid synthetic pathways.
Sphingoid bases are the fundamental building blocks of all sphingolipids. The main mammalian sphingoid bases are dihydrosphingosine and sphingosine, while dihydrosphingosine and phytosphingosine are the principal sphingoid bases in yeast. Sphingosine, dihydrosphingosine, and phytosphingosine may be phosphorylated.
Ceramides, as a general class, are N-acylated sphingoid bases lacking additional head groups.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A phospholipid is a bond between the glycerol component of a lipid and what else?
A. chlorine molecule
B. elemental molecule
C. oxygen molecule
D. phosphorous molecule
Answer:
|
|
sciq-7986
|
multiple_choice
|
What is the closed circulatory system of humans and other vertebrates called?
|
[
"reproductive system",
"cardiovascular system",
"digestive system",
"respiratory system"
] |
B
|
Relevant Documents:
Document 0:::
A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and is determined by different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism.
Organ and tissue systems
These specific systems are widely studied in human anatomy and are also present in many other animals.
Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm.
Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus.
Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels.
Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine.
Integumentary system: skin, hair, fat, and nails.
Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons.
Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands.
Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. The lymphatic system includes functions including immune responses and development of antibodies.
Immune system: protects the organism from
Document 1:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 2:::
Splanchnology is the study of the visceral organs, i.e. digestive, urinary, reproductive and respiratory systems.
The term derives from the Neo-Latin splanchno-, from the Greek σπλάγχνα, meaning "viscera". More broadly, splanchnology includes all the components of the Neuro-Endo-Immune (NEI) Supersystem. An organ (or viscus) is a collection of tissues joined in a structural unit to serve a common function. In anatomy, a viscus is an internal organ, and viscera is the plural form. Organs consist of different tissues, one or more of which prevail and determine its specific structure and function. Functionally related organs often cooperate to form whole organ systems.
Viscera are the soft organs of the body. There are organs and systems of organs that differ in structure and development but are united in the performance of a common function. Such a functional collection of mixed organs forms an organ system. These organs are always made up of specialized cells that support their specific function. The normal position and function of each visceral organ must be known before the abnormal can be ascertained.
Healthy organs all work together cohesively, and a better understanding of how they do so helps to maintain a healthy lifestyle. Some functions cannot be accomplished by one organ alone, which is why organs form complex systems. A system of organs is a collection of homogeneous organs that share a common plan of structure, function, and development, are connected to each other anatomically, and communicate through the NEI supersystem.
Document 3:::
In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from cells of the same type that act together in a function. Tissues of different types combine to form an organ, which has a specific function. The intestinal wall, for example, is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system.
An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs.
The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body.
Animals
Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam
Document 4:::
The blood circulatory system is a system of organs that includes the heart, blood vessels, and blood which is circulated throughout the entire body of a human or other vertebrate. It includes the cardiovascular system, or vascular system, that consists of the heart and blood vessels (from Greek kardia meaning heart, and from Latin vascula meaning vessels). The circulatory system has two divisions, a systemic circulation or circuit, and a pulmonary circulation or circuit. Some sources use the terms cardiovascular system and vascular system interchangeably with the circulatory system.
The network of blood vessels comprises the great vessels of the heart, including the large elastic arteries and large veins; other arteries; smaller arterioles; capillaries that join with venules (small veins); and other veins. The circulatory system is closed in vertebrates, which means that the blood never leaves the network of blood vessels. Some invertebrates such as arthropods have an open circulatory system. Diploblasts such as sponges and comb jellies lack a circulatory system.
Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets; it is circulated around the body carrying oxygen and nutrients to the tissues and collecting and disposing of waste materials. Circulated nutrients include proteins and minerals and other components include hemoglobin, hormones, and gases such as oxygen and carbon dioxide. These substances provide nourishment, help the immune system to fight diseases, and help maintain homeostasis by stabilizing temperature and natural pH.
In vertebrates, the lymphatic system is complementary to the circulatory system. The lymphatic system carries excess plasma (filtered from the circulatory system capillaries as interstitial fluid between cells) away from the body tissues via accessory routes that return excess fluid back to blood circulation as lymph. The lymphatic system is a subsystem that is essential for the functioning of the bloo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the closed circulatory system of humans and other vertebrates called?
A. reproductive system
B. cardiovascular system
C. digestive system
D. respiratory system
Answer:
|
|
sciq-2500
|
multiple_choice
|
Enzymes are a type of what, and as such, they are not reactants in the reactions they control?
|
[
"hormone",
"catalyst",
"metabolite",
"neurotransmitter"
] |
B
|
Relevant Documents:
Document 0:::
This is a list of articles that describe particular biomolecules or types of biomolecules.
A
For substances with an A- or α- prefix such as
α-amylase, please see the parent page (in this case Amylase).
A23187 (Calcimycin, Calcium Ionophore)
Abamectine
Abietic acid
Acetic acid
Acetylcholine
Actin
Actinomycin D
Adenine
Adenosine
Adenosine diphosphate (ADP)
Adenosine monophosphate (AMP)
Adenosine triphosphate (ATP)
Adenylate cyclase
Adiponectin
Adonitol
Adrenaline, epinephrine
Adrenocorticotropic hormone (ACTH)
Aequorin
Aflatoxin
Agar
Alamethicin
Alanine
Albumins
Aldosterone
Aleurone
Alpha-amanitin
Alpha-MSH (Melanocyte-stimulating hormone)
Allantoin
Allethrin
α-Amanitin, see Alpha-amanitin
Amino acid
Amylase (also see α-amylase)
Anabolic steroid
Anandamide (ANA)
Androgen
Anethole
Angiotensinogen
Anisomycin
Antidiuretic hormone (ADH)
Anti-Müllerian hormone (AMH)
Arabinose
Arginine
Argonaute
Ascomycin
Ascorbic acid (vitamin C)
Asparagine
Aspartic acid
Asymmetric dimethylarginine
ATP synthase
Atrial-natriuretic peptide (ANP)
Auxin
Avidin
Azadirachtin A – C35H44O16
B
Bacteriocin
Beauvericin
beta-Hydroxy beta-methylbutyric acid
beta-Hydroxybutyric acid
Bicuculline
Bilirubin
Biopolymer
Biotin (Vitamin H)
Brefeldin A
Brassinolide
Brucine
Butyric acid
C
Document 1:::
An enzyme inhibitor is a molecule that binds to an enzyme and blocks its activity. Enzymes are proteins that speed up chemical reactions necessary for life, in which substrate molecules are converted into products. An enzyme facilitates a specific chemical reaction by binding the substrate to its active site, a specialized area on the enzyme that accelerates the most difficult step of the reaction.
An enzyme inhibitor stops ("inhibits") this process, either by binding to the enzyme's active site (thus preventing the substrate itself from binding) or by binding to another site on the enzyme such that the enzyme's catalysis of the reaction is blocked. Enzyme inhibitors may bind reversibly or irreversibly. Irreversible inhibitors form a chemical bond with the enzyme such that the enzyme is inhibited until the chemical bond is broken. By contrast, reversible inhibitors bind non-covalently and may spontaneously leave the enzyme, allowing the enzyme to resume its function. Reversible inhibitors produce different types of inhibition depending on whether they bind to the enzyme, the enzyme-substrate complex, or both.
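To make the last sentence concrete, the standard Michaelis–Menten rate laws for the two simplest reversible cases (textbook forms, not quoted in the excerpt) are

```latex
v_{\text{competitive}} = \frac{V_{\max}\,[S]}{K_m\left(1 + \dfrac{[I]}{K_i}\right) + [S]},
\qquad
v_{\text{uncompetitive}} = \frac{V_{\max}\,[S]}{K_m + [S]\left(1 + \dfrac{[I]}{K_i}\right)} .
```

A competitive inhibitor (binding the free enzyme) raises the apparent Km but leaves Vmax unchanged, whereas an uncompetitive inhibitor (binding the enzyme-substrate complex) lowers both the apparent Km and the apparent Vmax.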
Enzyme inhibitors play an important role in all cells, since they are generally specific to one enzyme each and serve to control that enzyme's activity. For example, enzymes in a metabolic pathway may be inhibited by molecules produced later in the pathway, thus curtailing the production of molecules that are no longer needed. This type of negative feedback is an important way to maintain balance in a cell. Enzyme inhibitors also control essential enzymes such as proteases or nucleases that, if left unchecked, may damage a cell. Many poisons produced by animals or plants are enzyme inhibitors that block the activity of crucial enzymes in prey or predators.
Many drug molecules are enzyme inhibitors that inhibit an aberrant human enzyme or an enzyme critical for the survival of a pathogen such as a virus, bacterium or parasite. Examples include methotrexate (u
Document 2:::
Enzyme induction is a process in which a molecule (e.g. a drug) induces (i.e. initiates or enhances) the expression of an enzyme.
Enzyme inhibition can refer to
the inhibition of the expression of the enzyme by another molecule
interference at the enzyme level, i.e. with how the enzyme works. This can be competitive inhibition, uncompetitive inhibition, non-competitive inhibition, or partially competitive inhibition.
If the molecule induces enzymes that are responsible for its own metabolism, this is called auto-induction (or auto-inhibition if there is inhibition). These processes are particular forms of gene expression regulation.
These terms are of particular interest to pharmacology, and more specifically to drug metabolism and drug interactions. They also apply to molecular biology.
History
In the late 1950s and early 1960s, the French molecular biologists François Jacob and Jacques Monod became the first to explain enzyme induction, in the context of the lac operon of Escherichia coli. In the absence of lactose, the constitutively expressed lac repressor protein binds to the operator region of the DNA and prevents the transcription of the operon genes. When present, lactose binds to the lac repressor, causing it to separate from the DNA and thereby enabling transcription to occur. Monod and Jacob generated this theory following 15 years of work by them and others (including Joshua Lederberg), partially as an explanation for Monod's observation of diauxie. Previously, Monod had hypothesized that enzymes could physically adapt themselves to new substrates; a series of experiments by him, Jacob, and Arthur Pardee eventually demonstrated this to be incorrect and led them to the modern theory, for which he and Jacob shared the 1965 Nobel Prize in Physiology or Medicine (together with André Lwoff).
Aryl hydrocarbon receptor
Potency
Index inducers, or simply inducers, predictably induce metabolism via a given pathway and are commonly used in prospective clini
Document 3:::
In chemistry, a reagent ( ) or analytical reagent is a substance or compound added to a system to cause a chemical reaction, or test if one occurs. The terms reactant and reagent are often used interchangeably, but reactant specifies a substance consumed in the course of a chemical reaction. Solvents, though involved in the reaction mechanism, are usually not called reactants. Similarly, catalysts are not consumed by the reaction, so they are not reactants. In biochemistry, especially in connection with enzyme-catalyzed reactions, the reactants are commonly called substrates.
Definitions
Organic chemistry
In organic chemistry, the term "reagent" denotes a chemical ingredient (a compound or mixture, typically of inorganic or small organic molecules) introduced to cause the desired transformation of an organic substance. Examples include the Collins reagent, Fenton's reagent, and Grignard reagents.
Analytical chemistry
In analytical chemistry, a reagent is a compound or mixture used to detect the presence or absence of another substance, e.g. by a color change, or to measure the concentration of a substance, e.g. by colorimetry. Examples include Fehling's reagent, Millon's reagent, and Tollens' reagent.
Commercial or laboratory preparations
In commercial or laboratory preparations, reagent-grade designates chemical substances meeting standards of purity that ensure the scientific precision and reliability of chemical analysis, chemical reactions or physical testing. Purity standards for reagents are set by organizations such as ASTM International or the American Chemical Society. For instance, reagent-quality water must have very low levels of impurities such as sodium and chloride ions, silica, and bacteria, as well as a very high electrical resistivity. Laboratory products which are less pure, but still useful and economical for undemanding work, may be designated as technical, practical, or crude grade to distinguish them from reagent versions.
Biology
In t
Document 4:::
The metabolome refers to the complete set of small-molecule chemicals found within a biological sample. The biological sample can be a cell, a cellular organelle, an organ, a tissue, a tissue extract, a biofluid or an entire organism. The small molecule chemicals found in a given metabolome may include both endogenous metabolites that are naturally produced by an organism (such as amino acids, organic acids, nucleic acids, fatty acids, amines, sugars, vitamins, co-factors, pigments, antibiotics, etc.) as well as exogenous chemicals (such as drugs, environmental contaminants, food additives, toxins and other xenobiotics) that are not naturally produced by an organism.
In other words, there is both an endogenous metabolome and an exogenous metabolome. The endogenous metabolome can be further subdivided to include a "primary" and a "secondary" metabolome (particularly when referring to plant or microbial metabolomes). A primary metabolite is directly involved in the normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has important ecological function. Secondary metabolites may include pigments, antibiotics or waste products derived from partially metabolized xenobiotics. The study of the metabolome is called metabolomics.
Origins
The word metabolome appears to be a blending of the words "metabolite" and "chromosome". It was constructed to imply that metabolites are indirectly encoded by genes or act on genes and gene products. The term "metabolome" was first used in 1998 and was likely coined to match with existing biological terms referring to the complete set of genes (the genome), the complete set of proteins (the proteome) and the complete set of transcripts (the transcriptome). The first book on metabolomics was published in 2003. The first journal dedicated to metabolomics (titled simply "Metabolomics") was launched in 2005 and is currently edited by Prof. Roy Goodacre. Some of the m
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Enzymes are a type of what, and as such, they are not reactants in the reactions they control?
A. hormone
B. catalyst
C. metabolite
D. neurotransmitter
Answer:
|
|
sciq-8710
|
multiple_choice
|
What gland is located in the neck, where it wraps around the trachea?
|
[
"pituitary gland",
"thyroid gland",
"salivary gland",
"adrenal gland"
] |
B
|
Relevant Documents:
Document 0:::
The lymph glands of the thorax may be divided into parietal and visceral — the former being situated in the thoracic wall, the latter in relation to the viscera.
Document 1:::
The mediastinal branches are numerous small vessels which supply the lymph glands and loose areolar tissue in the posterior mediastinum.
Document 2:::
Anatomists use the term triangles of the neck to describe the divisions created by the major muscles in the region.
The side of the neck presents a somewhat quadrilateral outline, limited, above, by the lower border of the body of the mandible, and an imaginary line extending from the angle of the mandible to the mastoid process; below, by the upper border of the clavicle; in front, by the middle line of the neck; behind, by the anterior margin of the trapezius.
This space is subdivided into two large triangles by sternocleidomastoid, which passes obliquely across the neck, from the sternum and clavicle below, to the mastoid process and occipital bone above.
The triangular space in front of this muscle is called the anterior triangle of the neck; and that behind it, the posterior triangle of the neck.
The anterior triangle is further divided into muscular, carotid, submandibular and submental and the posterior into occipital and subclavian triangles.
Clinical relevance
The use of the divisions described as the triangles of the neck permits effective communication between healthcare professionals about the location of palpable masses in the neck.
The common swellings anterior to the midline are:
Enlarged submental lymph nodes and sublingual dermoid in the submental region.
Thyroglossal cyst and inflamed subhyoid bursa just below the hyoid bone.
Goitre, carcinoma of larynx and enlarged lymph nodes in the suprasternal region.
Additional images
Document 3:::
Zuckerkandl's tubercle is a pyramidal extension of the thyroid gland, present at the most posterior side of each lobe. Emil Zuckerkandl described it in 1902 as the processus posterior glandulae thyreoideae. Although the structure is named after Zuckerkandl, it was discovered first by Otto Madelung in 1867 as the posterior horn of the thyroid. The structure is important in thyroid surgery as it is closely related to the recurrent laryngeal nerve, the inferior thyroid artery, Berry's ligament and the parathyroid glands. The structure is subject to an important amount of anatomic variation, and therefore a size classification is proposed by Pelizzo et al.
Document 4:::
The tubarial salivary glands, also known as the tubarial glands, are a pair of salivary glands found in humans between the nasal cavity and throat.
Description
The tubarial glands are found in the lateral walls of the nasopharynx overlying the torus tubarius. The tubarial salivary glands bind to PSMA, which is how they were discovered.
History
The glands were discovered by a group of Dutch scientists at the Netherlands Cancer Institute in September 2020 using PET/CT scans.
Significance
Most of the significance of the tubarial glands stems from their significance in radiotherapy. It is believed that avoiding the irradiation of the glands will prevent many of the side effects of radiotherapy, such as xerostomia.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What gland is located in the neck, where it wraps around the trachea?
A. pituitary gland
B. thyroid gland
C. salivary gland
D. adrenal gland
Answer:
|
|
sciq-2921
|
multiple_choice
|
Seed plants that produce seeds in the ovaries of their flowers are known as what?
|
[
"conifers",
"spores",
"gymnosperms",
"angiosperms"
] |
D
|
Relevant Documents:
Document 0:::
The gymnosperms (lit. 'revealed seeds') are a group of seed-producing plants that includes conifers, cycads, Ginkgo, and gnetophytes, forming the clade Gymnospermae. The term gymnosperm comes from the Greek composite word gymnospermos (gymnos, 'naked', and sperma, 'seed'), literally meaning 'naked seeds'. The name is based on the unenclosed condition of their seeds (called ovules in their unfertilized state). The non-encased condition of their seeds contrasts with the seeds and ovules of flowering plants (angiosperms), which are enclosed within an ovary. Gymnosperm seeds develop either on the surface of scales or leaves, which are often modified to form cones, or on their own, as in yew, Torreya, and Ginkgo. Gymnosperm life cycles involve alternation of generations: they have a dominant diploid sporophyte phase and a reduced haploid gametophyte phase which is dependent on the sporophytic phase. The term "gymnosperm" is often used in paleobotany to refer to (the paraphyletic group of) all non-angiosperm seed plants. In that case, to specify the modern monophyletic group of gymnosperms, the term Acrogymnospermae is sometimes used.
The gymnosperms and angiosperms together comprise the spermatophytes or seed plants. The gymnosperms are subdivided into five divisions, four of which (the Cycadophyta, Ginkgophyta, Gnetophyta, and Pinophyta, also known as Coniferophyta) are still in existence, while the Pteridospermatophyta are now extinct. Newer classifications place the gnetophytes among the conifers.
By far the largest group of living gymnosperms are the conifers (pines, cypresses, and relatives), followed by cycads, gnetophytes (Gnetum, Ephedra and Welwitschia), and Ginkgo biloba (a single living species). About 65% of gymnosperms are dioecious, but conifers are almost all monoecious.
Document 1:::
In the flowering plants, an ovary is a part of the female reproductive organ of the flower or gynoecium. Specifically, it is the part of the pistil which holds the ovule(s) and is located above or below or at the point of connection with the base of the petals and sepals. The pistil may be made up of one carpel or of several fused carpels (e.g. dicarpel or tricarpel), and therefore the ovary can contain part of one carpel or parts of several fused carpels. Above the ovary is the style and the stigma, which is where the pollen lands and germinates to grow down through the style to the ovary, and, for each individual pollen grain, to fertilize one individual ovule. Some wind pollinated flowers have much reduced and modified ovaries.
Fruits
A fruit is the mature, ripened ovary of a flower following double fertilization in an angiosperm. Because gymnosperms do not have an ovary but reproduce through fertilization of unprotected ovules, they produce naked seeds that do not have a surrounding fruit, meaning that juniper and yew "berries" are not fruits but modified cones. Fruits are responsible for the dispersal and protection of seeds in angiosperms and cannot be easily characterized due to the differences in defining culinary and botanical fruits.
Development
After double fertilization and ripening, the ovary becomes the fruit, the ovules inside the ovary become the seeds of that fruit, and the egg within the ovule becomes the zygote. Double fertilization of the central cell in the ovule produces the nutritious endosperm tissue that surrounds the developing zygote within the seed. Angiosperm ovaries do not always produce a fruit after the ovary has been fertilized. Problems that can arise during the developmental process of the fruit include genetic issues, harsh environmental conditions, and insufficient energy which may be caused by competition for resources between ovaries; any of these situations may prevent maturation of the ovary.
Dispersal a
Document 2:::
The fossil history of flowering plants records the development of flowers and other distinctive structures of the angiosperms, now the dominant group of plants on land. The history is controversial as flowering plants appear in great diversity in the Cretaceous, with scanty and debatable records before that, creating a puzzle for evolutionary biologists that Charles Darwin named an "abominable mystery".
Paleozoic
Fossilised spores suggest that land plants (embryophytes) have existed for at least 475 million years. Early land plants reproduced sexually with flagellated, swimming sperm, like the green algae from which they evolved. An adaptation to terrestrial life was the development of upright sporangia for dispersal by spores to new habitats. This feature is lacking in the descendants of their nearest algal relatives, the Charophycean green algae. A later terrestrial adaptation took place with retention of the delicate, avascular sexual stage, the gametophyte, within the tissues of the vascular sporophyte. This occurred by spore germination within sporangia rather than spore release, as in non-seed plants. A current example of how this might have happened can be seen in the precocious spore germination in Selaginella, the spike-moss. The result for the ancestors of angiosperms and gymnosperms was enclosing the female gamete in a case, the seed.
The first seed-bearing plants were gymnosperms, like the ginkgo, and conifers (such as pines and firs). These did not produce flowers. The pollen grains (male gametophytes) of Ginkgo and cycads produce a pair of flagellated, mobile sperm cells that "swim" down the developing pollen tube to the female and her eggs.
Angiosperms appear suddenly and in great diversity in the fossil record in the Early Cretaceous. This poses such a problem for the theory of gradual evolution that Charles Darwin called it an "abominable mystery". Several groups of extinct gymnosperms, in particular seed ferns, have been proposed as the ancest
Document 3:::
A seedling is a young sporophyte developing out of a plant embryo from a seed. Seedling development starts with germination of the seed. A typical young seedling consists of three main parts: the radicle (embryonic root), the hypocotyl (embryonic shoot), and the cotyledons (seed leaves). The two classes of flowering plants (angiosperms) are distinguished by their numbers of seed leaves: monocotyledons (monocots) have one blade-shaped cotyledon, whereas dicotyledons (dicots) possess two round cotyledons. Gymnosperms are more varied. For example, pine seedlings have up to eight cotyledons. The seedlings of some flowering plants have no cotyledons at all. These are said to be acotyledons.
The plumule is the part of a seed embryo that develops into the shoot bearing the first true leaves of a plant. In most seeds, for example the sunflower, the plumule is a small conical structure without any leaf structure. Growth of the plumule does not occur until the cotyledons have grown above ground. This is epigeal germination. However, in seeds such as the broad bean, a leaf structure is visible on the plumule in the seed. These seeds develop by the plumule growing up through the soil with the cotyledons remaining below the surface. This is known as hypogeal germination.
Photomorphogenesis and etiolation
Dicot seedlings grown in the light develop short hypocotyls and open cotyledons exposing the epicotyl. This is also referred to as photomorphogenesis. In contrast, seedlings grown in the dark develop long hypocotyls and their cotyledons remain closed around the epicotyl in an apical hook. This is referred to as skotomorphogenesis or etiolation. Etiolated seedlings are yellowish in color as chlorophyll synthesis and chloroplast development depend on light. They will open their cotyledons and turn green when treated with light.
In a natural situation, seedling development starts with skotomorphogenesis while the seedling is growing through the soil and attempting to reach the
Document 4:::
Megaspores, also called macrospores, are a type of spore that is present in heterosporous plants. These plants have two spore types, megaspores and microspores. Generally speaking, the megaspore, or large spore, germinates into a female gametophyte, which produces egg cells. These are fertilized by sperm produced by the male gametophyte developing from the microspore. Heterosporous plants include seed plants (gymnosperms and flowering plants), water ferns (Salviniales), spikemosses (Selaginellaceae) and quillworts (Isoetaceae).
Megasporogenesis
In gymnosperms and flowering plants, the megaspore is produced inside the nucellus of the ovule. During megasporogenesis, a diploid precursor cell, the megasporocyte or megaspore mother cell, undergoes meiosis to produce initially four haploid cells (the megaspores). Angiosperms exhibit three patterns of megasporogenesis: monosporic, bisporic, and tetrasporic, also known as the Polygonum type, the Alisma type, and the Drusa type, respectively. The monosporic pattern occurs most frequently (>70% of angiosperms) and is found in many economically and biologically important groups such as Brassicaceae (e.g., Arabidopsis, Capsella, Brassica), Gramineae (e.g., maize, rice, wheat), Malvaceae (e.g., cotton), Leguminoseae (e.g., beans, soybean), and Solanaceae (e.g., pepper, tobacco, tomato, potato, petunia).
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Seed plants that produce seeds in the ovaries of their flowers are known as what?
A. conifers
B. spores
C. gymnosperms
D. angiosperms
Answer:
|
|
ai2_arc-96
|
multiple_choice
|
The following mathematical expressions represent four different concentrations of a chemical solution to be used in a science experiment. Which one is equal in magnitude to 1/1000?
|
[
"1.0 x 10^3",
"1.0 x 10^4",
"1.0 x 10^-3",
"1.0 x 10^-4"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In chemistry, the mass fraction of a substance within a mixture is the ratio of the mass of that substance to the total mass of the mixture; the defining formula is given below.
Because the individual masses of the ingredients of a mixture sum to the total mass, their mass fractions sum to unity, as also shown below.
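The formulas themselves appear to have been lost in extraction; writing w_i for the mass fraction of component i, m_i for its mass, and m_tot for the total mass of the mixture (symbols chosen here for illustration), the standard relations are:
\[
w_i = \frac{m_i}{m_\text{tot}}, \qquad \sum_i m_i = m_\text{tot} \quad\Longrightarrow\quad \sum_i w_i = 1 .
\]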
Mass fraction can also be expressed, with a denominator of 100, as percentage by mass (in commercial contexts often called percentage by weight, abbreviated wt.% or % w/w; see mass versus weight). It is one way of expressing the composition of a mixture in a dimensionless size; mole fraction (percentage by moles, mol%) and volume fraction (percentage by volume, vol%) are others.
When the prevalences of interest are those of individual chemical elements, rather than of compounds or other substances, the term mass fraction can also refer to the ratio of the mass of an element to the total mass of a sample. In these contexts an alternative term is mass percent composition. The mass fraction of an element in a compound can be calculated from the compound's empirical formula or its chemical formula.
Terminology
Percent concentration does not refer to this quantity. This improper name persists, especially in elementary textbooks. In biology, the unit "%" is sometimes (incorrectly) used to denote mass concentration, also called mass/volume percentage. A solution with 1g of solute dissolved in a final volume of 100mL of solution would be labeled as "1%" or "1% m/v" (mass/volume). This is incorrect because the unit "%" can only be used for dimensionless quantities. Instead, the concentration should simply be given in units of g/mL. Percent solution or percentage solution are thus terms best reserved for mass percent solutions (m/m, m%, or mass solute/mass total solution after mixing), or volume percent solutions (v/v, v%, or volume solute per volume of total solution after mixing). The very ambiguous terms percent solution and percentage solutions
Document 2:::
In chemistry and biology, the dilution ratio and dilution factor are two related (but slightly different) expressions of the change in concentration of a liquid substance when mixing it with another liquid substance. They are often used for simple dilutions, one in which a unit volume of a liquid material of interest is combined with an appropriate volume of a solvent liquid to achieve the desired concentration. The diluted material must be thoroughly mixed to achieve the true dilution.
For example, a 1:5 dilution ratio entails combining 1 unit volume of solute (the material to be diluted) with 5 unit volumes of solvent to give 6 units of total volume.
In photographic development, dilutions are normally given in a '1+x' format. For example '1+49' would typically mean 1 part concentrate and 49 parts water, meaning a 500ml solution would require 10ml concentrate and 490ml water.
Dilution factor
The "dilution factor" is an expression which describes the ratio of the aliquot volume to the final volume. Dilution factor is a notation often used in commercial assays. For example, in solution with a 1/5 dilution factor (which may be abbreviated as x5 dilution), entails combining 1 unit volume of solute (the material to be diluted) with (approximately) 4 unit volumes of the solvent to give 5 units of total volume. The following formulas can be used to calculate the volumes of solute () and solvent () to be used:
where is the desired total volume, and is the desired dilution factor number (the number in the position of if expressed as " dilution factor" or " dilution").
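The original formulas appear to have been dropped during extraction. Writing V_total for the desired total volume, DF for the dilution factor, and V_solute, V_solvent for the volumes to combine (symbols chosen here for illustration), a standard reconstruction is:
\[
V_\text{solute} = \frac{V_\text{total}}{DF}, \qquad
V_\text{solvent} = V_\text{total} - V_\text{solute} = V_\text{total}\left(1 - \frac{1}{DF}\right).
\]
For instance, a ×5 dilution to a final volume of 500 mL calls for 100 mL of solute and 400 mL of solvent, consistent with the 1-plus-4 description above.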
However, some solutions and mixtures take up slightly less volume than their components.
In other areas of science such as pharmacy, and in non-scientific usage, a dilution is normally given as a plain ratio of solvent to solute. For large factors, this confusion makes only a minor difference, but in precise work it can be important to make clear whether dilution ratio or
Document 3:::
In physical chemistry, there are numerous quantities associated with chemical compounds and reactions; notably in terms of amounts of substance, activity or concentration of a substance, and the rate of reaction. This article uses SI units.
Introduction
Theoretical chemistry requires quantities from core physics, such as time, volume, temperature, and pressure. But the highly quantitative nature of physical chemistry, in a more specialized way than core physics, uses molar amounts of substance rather than simply counting numbers; this leads to the specialized definitions in this article. Core physics itself rarely uses the mole, except in areas overlapping thermodynamics and chemistry.
Notes on nomenclature
Entity refers to the type of particle/s in question, such as atoms, molecules, complexes, radicals, ions, electrons etc.
Conventionally for concentrations and activities, square brackets [ ] are used around the chemical molecular formula. For an arbitrary atom, generic letters in upright non-bold typeface such as A, B, R, X or Y etc. are often used.
No standard symbols are used for the following quantities, as specifically applied to a substance:
the mass of a substance m,
the number of moles of the substance n,
partial pressure of a gas in a gaseous mixture p (or P),
some form of energy of a substance (for chemistry enthalpy H is common),
entropy of a substance S
the electronegativity of an atom or chemical bond χ.
Usually the symbol for the quantity with a subscript of some reference to the quantity is used, or the quantity is written with the reference to the chemical in round brackets. For example, the mass of water might be written in subscripts as mH2O, mwater, maq, mw (if clear from context) etc., or simply as m(H2O). Another example could be the electronegativity of the fluorine-fluorine covalent bond, which might be written with subscripts χF-F, χFF or χF-F etc., or brackets χ(F-F), χ(FF) etc.
Neither is standard. For the purpose of this a
Document 4:::
The Texas Math and Science Coaches Association or TMSCA is an organization for coaches of academic University Interscholastic League teams in Texas middle schools and high schools, specifically those that compete in mathematics and science-related tests.
Events
There are four events in the TMSCA at both the middle and high school level: Number Sense, General Mathematics, Calculator Applications, and General Science.
Number Sense is an 80-question exam that students are given only 10 minutes to solve. Additionally, no scratch work or paper calculations are allowed. These questions range from simple calculations such as 99+98 to more complicated operations such as 1001×1938. Each calculation is able to be done with a certain trick or shortcut that makes the calculations easier.
The high school exam includes calculus and other difficult topics in the questions also with the same rules applied as to the middle school version.
It is well known that the grading for this event is particularly stringent as errors such as writing over a line or crossing out potential answers are considered as incorrect answers.
General Mathematics is a 50-question exam that students are given only 40 minutes to solve. These problems are usually more challenging than questions on the Number Sense test, and the General Mathematics word problems take more thinking to figure out. Every problem correct is worth 5 points, and for every problem incorrect, 2 points are deducted. Tiebreakers are determined by the person that misses the first problem and by percent accuracy.
Calculator Applications is an 80-question exam that students are given only 30 minutes to solve. This test requires practice on the calculator, knowledge of a few crucial formulas, and much speed and intensity. Memorizing formulas, tips, and tricks will not be enough. In this event, plenty of practice is necessary in order to master the locations of the keys and develop the speed necessary. All correct questions are worth 5
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The following mathematical expressions represent four different concentrations of a chemical solution to be used in a science experiment. Which one is equal in magnitude to 1/1000?
A. 1.0 x 10^3
B. 1.0 x 10^4
C. 1.0 x 10^-3
D. 1.0 x 10^-4
Answer:
|
|
sciq-7825
|
multiple_choice
|
What kind of change causes evolution in population?
|
[
"allele frequency changes",
"transgenic frequency changes",
"paragenic frequency changes",
"morphogenesis frequency changes"
] |
A
|
Relevant Documents:
Document 0:::
Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology.
The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis.
Subfields
Evolution is the central unifying concept in biology. Biology can be divided into various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution
Document 1:::
Adaptive type – in evolutionary biology – is any population or taxon that has the potential to partly or fully occupy a free or underutilized habitat or position in the general economy of nature. In an evolutionary sense, the emergence of a new adaptive type is usually the result of adaptive radiation in certain groups of organisms, which gives rise to categories that can effectively exploit temporary or new environmental conditions.
Such evolutionary units, with their distinctive morphological, anatomical, physiological and other characteristics (that is, their genetic endowment and adjustments), have a predisposition to occupy certain habitats or positions in the general economy of nature.
Put simply, an adaptive type is a group of organisms whose general biological properties represent a key that opens the entrance to a particular adaptive zone in a particular natural ecological complex.
Adaptive types are spatially and temporally specific. Since the general biological properties of these types are largely genetically defined, the emergence of a new adaptive type in effect corresponds to a change in population genetic structure, reflecting the perpetual tension between being optimally adapted to current living conditions and maintaining the genetic variation needed for survival under possible new circumstances.
For example, the specific place in the economy of nature now occupied by humans existed millions of years before the human type appeared. Only when the evolution of primates (order Primates) reached a level able to occupy that position was it actually filled, after which the human adaptive type spread through the living world with unprecedented acceleration. Culture, in the broadest sense, is the key adaptation by which the adaptive type Homo sapiens occupies its adaptive zone through work, also in the broadest sense of the term.
Document 2:::
In biology, evolution is the process of change in all forms of life over generations, and evolutionary biology is the study of how evolution occurs. Biological populations evolve through genetic changes that correspond to changes in the organisms' observable traits. Genetic changes include mutations, which are caused by damage or replication errors in organisms' DNA. As the genetic variation of a population drifts randomly over generations, natural selection gradually leads traits to become more or less common based on the relative reproductive success of organisms with those traits.
The age of the Earth is about 4.5 billion years. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago. Evolution does not attempt to explain the origin of life (covered instead by abiogenesis), but it does explain how early lifeforms evolved into the complex ecosystem that we see today. Based on the similarities between all present-day organisms, all life on Earth is assumed to have originated through common descent from a last universal ancestor from which all known species have diverged through the process of evolution.
All individuals have hereditary material in the form of genes received from their parents, which they pass on to any offspring. Among offspring there are variations of genes due to the introduction of new genes via random changes called mutations or via reshuffling of existing genes during sexual reproduction. The offspring differs from the parent in minor random ways. If those differences are helpful, the offspring is more likely to survive and reproduce. This means that more offspring in the next generation will have that helpful difference and individuals will not have equal chances of reproductive success. In this way, traits that result in organisms being better adapted to their living conditions become more common in descendant populations. These differences accumulate resulting in changes within the population. This proce
Document 3:::
Evolution & Development is a peer-reviewed scientific journal publishing material at the interface of evolutionary and developmental biology. Within evolutionary developmental biology, it has the aim of aiding a broader synthesis of biological thought in these two areas. Its scope ranges from paleontology and population biology, to developmental and molecular biology, including mathematics and the history and philosophy of science.
It was established in 1999 by five biologists: Wallace Arthur, Sean B. Carroll, Michael Coates, Rudolf Raff, and Gregory Wray. It is published by Wiley-Blackwell on behalf of the Society for Integrative and Comparative Biology.
Document 4:::
Evolutionary invasion analysis, also known as adaptive dynamics, is a set of mathematical modeling techniques that use differential equations to study the long-term evolution of traits in asexually and sexually reproducing populations. It rests on the following three assumptions about mutation and population dynamics:
Mutations are infrequent. The population can be assumed to be at equilibrium when a new mutant arises.
The number of individuals with the mutant trait is initially negligible in the large, established resident population.
Mutant phenotypes are only slightly different from the resident phenotype.
Evolutionary invasion analysis makes it possible to identify conditions on model parameters for which the mutant population dies out, replaces the resident population, and/or coexists with the resident population. Long-term coexistence of the two phenotypes is known as evolutionary branching. When branching occurs, the mutant establishes itself as a second resident in the environment.
Central to evolutionary invasion analysis is the mutant's invasion fitness. This is a mathematical expression for the long-term exponential growth rate of the mutant subpopulation when it is introduced into the resident population in small numbers. If the invasion fitness is positive (in continuous time), the mutant population can grow in the environment set by the resident phenotype. If the invasion fitness is negative, the mutant population swiftly goes extinct.
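As a toy illustration of these ideas (not taken from the article above), consider a resident population with trait x growing logistically to its carrying capacity K(x); under the assumption that a rare mutant with trait y experiences the resident's equilibrium density, its invasion fitness reduces to s(y, x) = r (1 − K(x)/K(y)), which is positive exactly when K(y) > K(x). A minimal sketch, with a hypothetical carrying-capacity function:

```python
import numpy as np

def K(trait):
    # Hypothetical carrying capacity as a function of a 1-D trait (illustration only):
    # a smooth hump peaking at trait = 0.
    return 100.0 * np.exp(-trait**2)

def invasion_fitness(mutant, resident, r=1.0):
    # Long-term exponential growth rate of a rare mutant introduced into a
    # resident population sitting at its logistic equilibrium N* = K(resident).
    return r * (1.0 - K(resident) / K(mutant))

resident = 0.5
for mutant in (0.6, 0.4):
    s = invasion_fitness(mutant, resident)
    outcome = "can invade" if s > 0 else "dies out"
    print(f"mutant {mutant:+.1f} vs resident {resident:+.1f}: s = {s:+.3f} -> {outcome}")
```

In this toy model, mutants whose trait lies closer to the carrying-capacity peak have positive invasion fitness and can invade, while the others swiftly go extinct, mirroring the sign criterion described above.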
Introduction and background
The basic principle of evolution via natural selection was outlined by Charles Darwin in his 1859 book, On the Origin of Species. Though controversial at the time, the central ideas remain largely unchanged to this date, even though much more is now known about the biological basis of inheritance. Darwin expressed his arguments verbally, but many attempts have since then been made to formalise the theory of evolution. The best known are population genetics which models inheritance at
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What kind of change causes evolution in population?
A. allele frequency changes
B. transgenic frequency changes
C. paragenic frequency changes
D. morphogenesis frequency changes
Answer:
|
|
sciq-4668
|
multiple_choice
|
Photoautotrophs including (a) plants, (b) algae, and (c) cyanobacteria synthesize their organic compounds via photosynthesis using sunlight as this?
|
[
"heating source",
"fuel source",
"light source4",
"energy source"
] |
D
|
Relevant Documents:
Document 0:::
In ecology, primary production is the synthesis of organic compounds from atmospheric or aqueous carbon dioxide. It principally occurs through the process of photosynthesis, which uses light as its source of energy, but it also occurs through chemosynthesis, which uses the oxidation or reduction of inorganic chemical compounds as its source of energy. Almost all life on Earth relies directly or indirectly on primary production. The organisms responsible for primary production are known as primary producers or autotrophs, and form the base of the food chain. In terrestrial ecoregions, these are mainly plants, while in aquatic ecoregions algae predominate in this role. Ecologists distinguish primary production as either net or gross, the former accounting for losses to processes such as cellular respiration, the latter not.
Overview
Primary production is the production of chemical energy in organic compounds by living organisms. The main source of this energy is sunlight but a minute fraction of primary production is driven by lithotrophic organisms using the chemical energy of inorganic molecules. Regardless of its source, this energy is used to synthesize complex organic molecules from simpler inorganic compounds such as carbon dioxide (CO2) and water (H2O). The following two equations are simplified representations of photosynthesis (top) and (one form of) chemosynthesis (bottom):
CO2 + H2O + light → CH2O + O2
CO2 + O2 + 4 H2S → CH2O + 4 S + 3 H2O
In both cases, the end point is a polymer of reduced carbohydrate, (CH2O)n, typically molecules such as glucose or other sugars. These relatively simple molecules may be then used to further synthesise more complicated molecules, including proteins, complex carbohydrates, lipids, and nucleic acids, or be respired to perform work. Consumption of primary producers by heterotrophic organisms, such as animals, then transfers these organic molecules (and the energy stored within them) up the food web, fueling all of the Earth'
Document 1:::
The evolution of photosynthesis refers to the origin and subsequent evolution of photosynthesis, the process by which light energy is used to assemble sugars from carbon dioxide and a hydrogen and electron source such as water. The process of photosynthesis was discovered by Jan Ingenhousz, a Dutch-born British physician and scientist, first publishing about it in 1779.
The first photosynthetic organisms probably evolved early in the evolutionary history of life and most likely used reducing agents such as hydrogen rather than water. There are three major metabolic pathways by which photosynthesis is carried out: C3 photosynthesis, C4 photosynthesis, and CAM photosynthesis. C3 photosynthesis is the oldest and most common form. A C3 plant uses the Calvin cycle for the initial steps that incorporate CO2 into organic material. A C4 plant prefaces the Calvin cycle with reactions that incorporate CO2 into four-carbon compounds. A CAM plant uses crassulacean acid metabolism, an adaptation for photosynthesis in arid conditions. C4 and CAM plants have special adaptations that save water.
Origin
Available evidence from geobiological studies of Archean (>2500 Ma) sedimentary rocks indicates that life existed 3500 Ma. Fossils of what are thought to be filamentous photosynthetic organisms have been dated at 3.4 billion years old, consistent with recent studies of photosynthesis. Early photosynthetic systems, such as those from green and purple sulfur and green and purple nonsulfur bacteria, are thought to have been anoxygenic, using various molecules as electron donors. Green and purple sulfur bacteria are thought to have used hydrogen and hydrogen sulfide as electron and hydrogen donors. Green nonsulfur bacteria used various amino and other organic acids. Purple nonsulfur bacteria used a variety of nonspecific organic and inorganic molecules. It is suggested that photosynthesis likely originated at low-wavelength geothermal light from acidic hydrothermal vents, Zn-tetrapyrroles w
Document 2:::
The photosynthetic efficiency is the fraction of light energy converted into chemical energy during photosynthesis in green plants and algae. Photosynthesis can be described by the simplified chemical reaction
6 H2O + 6 CO2 + energy → C6H12O6 + 6 O2
where C6H12O6 is glucose (which is subsequently transformed into other sugars, starches, cellulose, lignin, and so forth). The value of the photosynthetic efficiency is dependent on how light energy is defined – it depends on whether we count only the light that is absorbed, and on what kind of light is used (see Photosynthetically active radiation). It takes eight (or perhaps ten or more) photons to use one molecule of CO2. The Gibbs free energy for converting a mole of CO2 to glucose is 114 kcal, whereas eight moles of photons of wavelength 600 nm contains 381 kcal, giving a nominal efficiency of 30%. However, photosynthesis can occur with light up to wavelength 720 nm so long as there is also light at wavelengths below 680 nm to keep Photosystem II operating (see Chlorophyll). Using longer wavelengths means less light energy is needed for the same number of photons and therefore for the same amount of photosynthesis. For actual sunlight, where only 45% of the light is in the photosynthetically active wavelength range, the theoretical maximum efficiency of solar energy conversion is approximately 11%. In actuality, however, plants do not absorb all incoming sunlight (due to reflection, respiration requirements of photosynthesis and the need for optimal solar radiation levels) and do not convert all harvested energy into biomass, which results in a maximum overall photosynthetic efficiency of 3 to 6% of total solar radiation. If photosynthesis is inefficient, excess light energy must be dissipated to avoid damaging the photosynthetic apparatus. Energy can be dissipated as heat (non-photochemical quenching), or emitted as chlorophyll fluorescence.
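The 30% figure quoted above follows from straightforward photon-energy arithmetic; a minimal sketch, assuming 8 photons per CO2 at 600 nm and 114 kcal per mole of CO2 fixed, as stated in the text:

```python
# Rough check of the nominal ~30% photosynthetic efficiency quoted above.
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
N_A = 6.022e23       # Avogadro constant, 1/mol
J_PER_KCAL = 4184.0

wavelength = 600e-9      # m, red light
photons_per_CO2 = 8      # photons needed per CO2 fixed (assumption from the text)
dG_kcal = 114.0          # kcal per mol CO2 converted to glucose (from the text)

energy_per_mol_photons_kcal = h * c / wavelength * N_A / J_PER_KCAL   # ~47.6 kcal
energy_absorbed_kcal = photons_per_CO2 * energy_per_mol_photons_kcal  # ~381 kcal

print(f"8 mol photons at 600 nm: {energy_absorbed_kcal:.0f} kcal")
print(f"nominal efficiency: {dG_kcal / energy_absorbed_kcal:.0%}")    # ~30%
```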
Typical efficiencies
Plants
Quoted values sunlight-to-biomass efficien
Document 3:::
The molecules that an organism uses as its carbon source for generating biomass are referred to as "carbon sources" in biology. Carbon sources may be organic or inorganic. Heterotrophs must use organic molecules as both their carbon source and their energy source, in contrast to autotrophs, which use inorganic carbon together with an abiotic energy source, such as light (photoautotrophs) or inorganic chemical energy (chemolithotrophs).
The biological use of carbon is one component of the carbon cycle, which begins with an inorganic carbon source such as carbon dioxide and proceeds through the process of carbon fixation.[1]
Types of organism by carbon source
Heterotrophs
Autotrophs
Document 4:::
Phototrophs (from Greek phōs, 'light', and trophē, 'nourishment') are organisms that carry out photon capture to produce complex organic compounds (e.g. carbohydrates) and acquire energy. They use the energy from light to carry out various cellular metabolic processes. It is a common misconception that phototrophs are obligatorily photosynthetic. Many, but not all, phototrophs often photosynthesize: they anabolically convert carbon dioxide into organic material to be utilized structurally, functionally, or as a source for later catabolic processes (e.g. in the form of starches, sugars and fats). All phototrophs either use electron transport chains or direct proton pumping to establish an electrochemical gradient which is utilized by ATP synthase to provide the molecular energy currency for the cell. Phototrophs can be either autotrophs or heterotrophs. If their electron and hydrogen donors are inorganic compounds (e.g., sulfur compounds such as hydrogen sulfide, as in some purple and green sulfur bacteria) they can also be called lithotrophs, and so, some photoautotrophs are also called photolithoautotrophs. Examples of phototroph organisms are Rhodobacter capsulatus, Chromatium, and Chlorobium.
History
Originally used with a different meaning, the term took its current definition after Lwoff and collaborators (1946).
Photoautotroph
Most of the well-recognized phototrophs are autotrophic, also known as photoautotrophs, and can fix carbon. They can be contrasted with chemotrophs that obtain their energy by the oxidation of electron donors in their environments. Photoautotrophs are capable of synthesizing their own food from inorganic substances using light as an energy source. Green plants and photosynthetic bacteria are photoautotrophs. Photoautotrophic organisms are sometimes referred to as holophytic.
Oxygenic photosynthetic organisms use chlorophyll for light-energy capture and oxidize water, "splitting" it into molecular oxygen.
Ecology
In an ecological context, phototrophs are often the food source for neighboring he
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Photoautotrophs including (a) plants, (b) algae, and (c) cyanobacteria synthesize their organic compounds via photosynthesis using sunlight as this?
A. heating source
B. fuel source
C. light source
D. energy source
Answer:
|
|
ai2_arc-1045
|
multiple_choice
|
Laundry detergents were once manufactured to contain high concentrations of phosphorus compounds. When waste water containing these compounds ran off into lakes, the phosphorous became a nutrient to algae. As algae populations increased in the lakes, succession accelerated. Over a long time, which would a lake become as a result of the phosphorous in the detergent?
|
[
"canyon",
"desert",
"swamp",
"river"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
Document 2:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 3:::
Lake 226 is one lake in Canada's Experimental Lakes Area (ELA) in Ontario. The ELA is a freshwater and fisheries research facility that operated these experiments alongside Fisheries and Oceans Canada and Environment Canada. In 1968 this area in northwest Ontario was set aside for limnological research, aiming to study the watershed of the 58 small lakes in this area. The ELA projects began as a response to the claim that carbon was the limiting agent causing eutrophication of lakes rather than phosphorus, and that monitoring phosphorus in the water would be a waste of money. This claim was made by soap and detergent companies, as these products do not biodegrade and can cause buildup of phosphates in water supplies that lead to eutrophication. The theory that carbon was the limiting agent was quickly debunked by the ELA Lake 227 experiment that began in 1969, which found that carbon could be drawn from the atmosphere to remain proportional to the input of phosphorus in the water. Experimental Lake 226 was then created to test phosphorus' impact on eutrophication by itself.
Lake ecosystem
Geography
The ELA lakes were far from human activities, therefore allowing the study of environmental conditions without human interaction. Lake 226 was specifically studied over a four-year period, from 1973–1977, to test eutrophication. Lake 226 itself is a 16.2 ha double basin lake located on highly metamorphosed Precambrian granite. The depth of the lake was measured in 1994 to be 14.7 m for the northeast basin and 11.6 m for the southeast basin. Lake 226 had a total lake volume of 9.6 × 10^5 m^3, prior to the lake being additionally studied for drawdown alongside other ELA lakes. Due to the relatively small fetch of Lake 226, wind action is minimized, preventing resuspension of epilimnetic sediments.
Eutrophication experiment
To test the effects of fertilization on water quality and algae blooms, Lake 226 was split in half with a curtain. This curtain divi
Document 4:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Laundry detergents were once manufactured to contain high concentrations of phosphorus compounds. When waste water containing these compounds ran off into lakes, the phosphorous became a nutrient to algae. As algae populations increased in the lakes, succession accelerated. Over a long time, which would a lake become as a result of the phosphorous in the detergent?
A. canyon
B. desert
C. swamp
D. river
Answer:
|
|
sciq-3774
|
multiple_choice
|
What does the moving piston in an engine turn?
|
[
"crankshaft",
"muffler",
"brake",
"hammer"
] |
A
|
Relevant Documents:
Document 0:::
The reciprocating motion of a non-offset piston connected to a rotating crank through a connecting rod (as would be found in internal combustion engines) can be expressed by equations of motion. This article shows how these equations of motion can be derived using calculus as functions of angle (angle domain) and of time (time domain).
Crankshaft geometry
The geometry of the system consisting of the piston, rod and crank is represented as shown in the following diagram:
Definitions
From the geometry shown in the diagram above, the following variables are defined:
rod length (distance between piston pin and crank pin)
crank radius (distance between crank center and crank pin, i.e. half stroke)
crank angle (from cylinder bore centerline at TDC)
piston pin position (distance upward from crank center along cylinder bore centerline)
The following variables are also defined:
piston pin velocity (upward from crank center along cylinder bore centerline)
piston pin acceleration (upward from crank center along cylinder bore centerline)
crank angular velocity (in the same direction/sense as crank angle )
Angular velocity
The frequency (Hz) of the crankshaft's rotation is related to the engine's speed (revolutions per minute) as follows:
So the angular velocity (radians/s) of the crankshaft is:
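The two formulas referred to here appear to have been lost in extraction; the standard relations, with ν the rotation frequency in Hz, RPM the engine speed, and ω the crank angular velocity, are:
\[
\nu = \frac{\text{RPM}}{60}, \qquad \omega = 2\pi\nu = \frac{2\pi\,\text{RPM}}{60}.
\]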
Triangle relation
As shown in the diagram, the crank pin, crank center and piston pin form triangle NOP.
By the cosine law it is seen that:
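The relation itself appears to have been dropped in extraction; writing l for the rod length, r for the crank radius, x for the piston pin position, and A for the crank angle (standard symbols for the quantities defined above), the law of cosines applied to triangle NOP gives:
\[
l^2 = r^2 + x^2 - 2\,r\,x\cos A ,
\]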
where the rod length l and crank radius r are constant, and the piston pin position x varies as the crank angle A changes.
Equations with respect to angular position (Angle Domain)
Angle domain equations are expressed as functions of angle.
Deriving angle domain equations
The angle domain equations of the piston's reciprocating motion are derived from the system's geometry equations as follows.
Position
Position with respect to crank angle (from the triangle relation, completing the square, utilizing the Pythagorean identity, and rearranging):
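The resulting expression appears to be missing; solving the quadratic in x above for the physically meaningful root gives the standard angle-domain position equation:
\[
x = r\cos A + \sqrt{\,l^2 - r^2\sin^2 A\,}.
\]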
Velocity
Velocity with respect to crank angle (take
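The velocity and acceleration expressions are cut off above; as a rough numerical sketch of the angle-domain kinematics, one can evaluate the reconstructed position equation and differentiate it numerically (the rod length, crank radius, and RPM values below are illustrative, not taken from the text):

```python
import numpy as np

# Illustrative geometry and speed (assumptions, not from the article)
l, r = 0.150, 0.045                 # rod length and crank radius, m
rpm = 3000.0
omega = 2.0 * np.pi * rpm / 60.0    # crank angular velocity, rad/s

A = np.linspace(0.0, 2.0 * np.pi, 3601)                    # crank angle, rad
x = r * np.cos(A) + np.sqrt(l**2 - r**2 * np.sin(A)**2)    # piston pin position, m

# Chain rule: d/dt = omega * d/dA, approximated here by finite differences
v = omega * np.gradient(x, A)       # piston pin velocity, m/s
a = omega * np.gradient(v, A)       # piston pin acceleration, m/s^2

print(f"stroke: {x.max() - x.min():.4f} m (expected 2*r = {2*r:.4f} m)")
print(f"peak piston speed: {np.abs(v).max():.1f} m/s")
print(f"peak piston acceleration: {np.abs(a).max():.0f} m/s^2")
```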
Document 1:::
In a reciprocating piston engine, the stroke ratio, defined by either bore/stroke ratio or stroke/bore ratio, is a term to describe the ratio between cylinder bore diameter and piston stroke length. This can be used for either an internal combustion engine, where the fuel is burned within the cylinders of the engine, or external combustion engine, such as a steam engine, where the combustion of the fuel takes place outside the working cylinders of the engine.
A fairly comprehensive yet understandable study of stroke/bore effects was published in Horseless Age, 1916.
Conventions
In a piston engine, there are two different ways of describing the stroke ratio of its cylinders, namely: bore/stroke ratio, and stroke/bore ratio.
Bore/stroke ratio
Bore/stroke is the more commonly used term, with usage in North America, Europe, United Kingdom, Asia, and Australia.
The diameter of the cylinder bore is divided by the length of the piston stroke to give the ratio.
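As a worked example with hypothetical dimensions (not taken from the list below), an engine with an 86 mm bore and an 86 mm stroke has
\[
\frac{\text{bore}}{\text{stroke}} = \frac{86\ \text{mm}}{86\ \text{mm}} = 1.00 ,
\]
i.e. it is exactly square.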
Square, oversquare and undersquare engines
The following terms describe the naming conventions for the configurations of the various bore/stroke ratio:
Square engine
A square engine has equal bore and stroke dimensions, giving a bore/stroke value of exactly 1:1.
Square engine examples
1953 – Ferrari 250 Europa had Lampredi V12 with bore and stroke.
1967 – FIAT 125, 124Sport engine 125A000-90 hp, 125B000-100 hp, 125BC000-110 hp, 1608 ccm, DOHC, bore and stroke.
1970 – Ford 400 had a bore and stroke.
1973 – Kawasaki Z1 and KZ(Z)900 had a bore and stroke.
1973 – British Leyland's Australian division created a 4.4-litre version of the Rover V8 engine, with bore and stroke both measuring 88.9 mm. This engine was exclusively used in the Leyland P76.
1982 - Honda Nighthawk 250 and Honda CMX250C Rebel have a bore and stroke, making it a square engine.
1983 – Mazda FE 2.0L inline four-cylinder engine with a perfectly squared bore and stroke. This engine also features the ideal 1.75:1 rod/stroke ratio.
1
Document 2:::
This is an alphabetical list of articles pertaining specifically to mechanical engineering. For a broad overview of engineering, please see List of engineering topics. For biographies please see List of engineers.
A
Acceleration –
Accuracy and precision –
Actual mechanical advantage –
Aerodynamics –
Agitator (device) –
Air handler –
Air conditioner –
Air preheater –
Allowance –
American Machinists' Handbook –
American Society of Mechanical Engineers –
Ampere –
Applied mechanics –
Antifriction –
Archimedes' screw –
Artificial intelligence –
Automaton clock –
Automobile –
Automotive engineering –
Axle –
Air Compressor
B
Backlash –
Balancing –
Beale Number –
Bearing –
Belt (mechanical) –
Bending –
Biomechatronics –
Bogie –
Brittle –
Buckling –
Bus--
Bushing –
Boilers & boiler systems
BIW--
C
CAD –
CAM –
CAID –
Calculator –
Calculus –
Car handling –
Carbon fiber –
Classical mechanics –
Clean room design –
Clock –
Clutch –
CNC –
Coefficient of thermal expansion –
Coil spring –
Combustion –
Composite material –
Compression ratio –
Compressive strength –
Computational fluid dynamics –
Computer –
Computer-aided design –
Computer-aided industrial design –
Computer-numerically controlled –
Conservation of mass –
Constant-velocity joint –
Constraint –
Continuum mechanics –
Control theory –
Corrosion –
Cotter pin –
Crankshaft –
Cybernetics –
D
Damping ratio –
Deformation (engineering) –
Delamination –
Design –
Diesel Engine –
Differential –
Dimensionless number –
Diode –
Diode laser –
Drafting –
Drifting –
Driveshaft –
Dynamics –
Design for Manufacturability for CNC machining –
E
Elasticity –
Elasticity tensor -
Electric motor –
Electrical engineering –
Electrical circuit –
Electrical network –
Electromagnetism –
Electronic circuit –
Electronics –
Energy –
Engine –
Engineering –
Engineering cybernetics –
Engineering drawing –
Engineering economics –
Engineering ethics –
Engineering management –
Engineering society –
Exploratory engineering –
F
( Fits and tolerances)---
Fa
Document 3:::
Mechanics
Document 4:::
A wax motor is a linear actuator device that converts thermal energy into mechanical energy by exploiting the phase-change behaviour of waxes. During melting, wax typically expands in volume by 5–20%.
A wide range of waxes can be used in wax motors, ranging from highly refined hydrocarbons to waxes extracted from vegetable matter. Specific examples include paraffin waxes in the straight-chain n-alkanes series. These melt and solidify over a well-defined and narrow temperature range.
Design
The principal components of a wax motor are:
An enclosed volume of wax
A plunger or stroke-rod to convert the thermo-hydraulic force from the wax into a useful mechanical output
A source of heat such as:
Electric current; typically a PTC thermistor, that heats the wax
Solar radiation; e.g. greenhouse vents
Combustion heat; e.g. excess heat from internal combustion engines
Ambient heat
A sink to reject heat energy such as:
Convection to cooler ambient air
Peltier effect device arranged to transfer heat energy away
When the heat source is energized, the wax block is heated and it expands, driving the plunger outwards by volume displacement. When the heat source is removed, the wax block contracts as it cools and the wax solidifies. For the plunger to withdraw, a biasing force is usually required to overcome the mechanical resistance of seals that contain the liquid wax. The biasing force is typically 20% to 30% of the operating force and often provided by a mechanical spring or gravity-fed dead weight applied externally into the wax motor .
Depending on the particular application, wax motors potentially have advantages over magnetic solenoids:
They provide a large hydraulic force from the expansion of the wax in the order of 4000 N (corresponding to roughly 400 kg or 900 lb at standard gravity) .
Both the application and the release of the wax motor is not instantaneous, but rather, smooth and gentle.
Because the wax motor is a resistive load rather than an inductive load, wax
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What does the moving piston in an engine turn?
A. crankshaft
B. muffler
C. brake
D. hammer
Answer:
|
|
sciq-664
|
multiple_choice
|
What is the main function of blood?
|
[
"respiration",
"digestion",
"to transport",
"excretion of waste"
] |
C
|
Relevant Documents:
Document 0:::
Blood is a body fluid in the circulatory system of humans and other vertebrates that delivers necessary substances such as nutrients and oxygen to the cells, and transports metabolic waste products away from those same cells. Blood in the circulatory system is also known as peripheral blood, and the blood cells it carries, peripheral blood cells.
Blood is composed of blood cells suspended in blood plasma. Plasma, which constitutes 55% of blood fluid, is mostly water (92% by volume), and contains proteins, glucose, mineral ions, hormones, carbon dioxide (plasma being the main medium for excretory product transportation), and blood cells themselves. Albumin is the main protein in plasma, and it functions to regulate the colloidal osmotic pressure of blood. The blood cells are mainly red blood cells (also called RBCs or erythrocytes), white blood cells (also called WBCs or leukocytes), and in mammals platelets (also called thrombocytes). The most abundant cells in vertebrate blood are red blood cells. These contain hemoglobin, an iron-containing protein, which facilitates oxygen transport by reversibly binding to this respiratory gas thereby increasing its solubility in blood. In contrast, carbon dioxide is mostly transported extracellularly as bicarbonate ion transported in plasma.
Vertebrate blood is bright red when its hemoglobin is oxygenated and dark red when it is deoxygenated.
Some animals, such as crustaceans and mollusks, use hemocyanin to carry oxygen, instead of hemoglobin. Insects and some mollusks use a fluid called hemolymph instead of blood, the difference being that hemolymph is not contained in a closed circulatory system. In most insects, this "blood" does not contain oxygen-carrying molecules such as hemoglobin because their bodies are small enough for their tracheal system to suffice for supplying oxygen.
Jawed vertebrates have an adaptive immune system, based largely on white blood cells. White blood cells help to resist infections and parasite
Document 1:::
Bloodstopping refers to an American folk practice once common in the Ozarks and the Appalachians, Canadian lumbercamps and the northern woods of the United States. It was believed (and still is) that certain persons, known as bloodstoppers, could halt bleeding in humans and animals by supernatural means. The most common method was to walk east and recite Ezekiel 16:6. This is referred to as the blood verse.
And when I passed by thee, and saw thee polluted in thine own blood, I said unto thee when thou wast in thy blood, Live; yea, I said unto thee when thou wast in thy blood, Live.
History
Bloodstopping was used in areas of North America where modern medicine was not readily available. Many of these communities had one or two bloodstoppers. Since they were able to help when doctors were unavailable, they became very popular and well respected in their communities. Bloodstopping was used mostly in the Ozarks, in the states of Illinois, Missouri, and Arkansas. Each bloodstopper used their own technique to treat wounds. The person performing the bloodstopping must have been given the power to do so. The gift was mostly passed down through the family (older to younger); it could only be passed to someone of the opposite sex, and it could only be told to three people, with the third person gaining the power. The person performing it does not need to believe in it fully or be sinless, since the blood verse is so powerful.
Throughout Europe, Christianity was becoming the main religion. However, those who lived in rural areas were not as quick to convert; they remained more attached to the polytheistic traditions they had followed for so many years. German settlers who ended up in the Appalachians brought many folk beliefs about magic. At first they used the stars to determine planting cycles and to predict the weather. When medical treatment became scarce, they turned to other forms of medicine; this is when bloodstopping became a practice.
Document 2:::
The blood circulatory system is a system of organs that includes the heart, blood vessels, and blood which is circulated throughout the entire body of a human or other vertebrate. It includes the cardiovascular system, or vascular system, that consists of the heart and blood vessels (from Greek kardia meaning heart, and from Latin vascula meaning vessels). The circulatory system has two divisions, a systemic circulation or circuit, and a pulmonary circulation or circuit. Some sources use the terms cardiovascular system and vascular system interchangeably with the circulatory system.
The network of blood vessels consists of the great vessels of the heart, including the large elastic arteries and large veins; other arteries; smaller arterioles; capillaries that join with venules (small veins); and other veins. The circulatory system is closed in vertebrates, which means that the blood never leaves the network of blood vessels. Some invertebrates such as arthropods have an open circulatory system. Diploblasts such as sponges and comb jellies lack a circulatory system.
Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets; it is circulated around the body carrying oxygen and nutrients to the tissues and collecting and disposing of waste materials. Circulated nutrients include proteins and minerals and other components include hemoglobin, hormones, and gases such as oxygen and carbon dioxide. These substances provide nourishment, help the immune system to fight diseases, and help maintain homeostasis by stabilizing temperature and natural pH.
In vertebrates, the lymphatic system is complementary to the circulatory system. The lymphatic system carries excess plasma (filtered from the circulatory system capillaries as interstitial fluid between cells) away from the body tissues via accessory routes that return excess fluid back to blood circulation as lymph. The lymphatic system is a subsystem that is essential for the functioning of the bloo
Document 3:::
Jehovah's Witnesses believe that the Bible prohibits Christians from accepting blood transfusions. Their literature states that, "'abstaining from ... blood' means not accepting blood transfusions and not donating or storing their own blood for transfusion." The belief is based on an interpretation of scripture that differs from other Christian denominations. It is one of the doctrines for which Jehovah's Witnesses are best known.
Jehovah's Witnesses' literature teaches that their refusal of transfusions of whole blood or its four primary components—red cells, white cells, platelets and plasma—is a non-negotiable religious stand and that those who respect life as a gift from God do not try to sustain life by taking in blood, even in an emergency. Witnesses are taught that the use of fractions such as albumin, immunoglobulins and hemophiliac preparations are not absolutely prohibited and are instead a matter of personal choice.
The doctrine was introduced in 1945, and has undergone some changes since then. Members of the group who voluntarily accept a transfusion and are not deemed repentant are regarded as having disassociated themselves from the group by abandoning its doctrines and are subsequently shunned by members of the organization. Although the majority of Jehovah's Witnesses accept the doctrine, a minority do not.
The Watch Tower Society has established Hospital Information Services to provide education and facilitate bloodless surgery. This service also maintains Hospital Liaison Committees.
Doctrine
On the basis of various biblical texts, Jehovah's Witnesses believe:
Blood represents life and is sacred to God. After it has been removed from a creature, the only use of blood that God has authorized is for the atonement of sins. When a Christian abstains from blood, they are in effect expressing faith that only the shed blood of Jesus Christ can truly redeem them and save their life.
Blood must not be eaten or transfused, even in
Document 4:::
– platelet factor 3
– platelet factor 4
– prothrombin
– thrombin
– thromboplastin
– von willebrand factor
– fibrin
– fibrin fibrinogen degradation products
– fibrin foam
– fibrin tissue adhesive
– fibrinopeptide a
– fibrinopeptide b
– glycophorin
– hemocyanin
– hemoglobins
– carboxyhemoglobin
– erythrocruorins
– fetal hemoglobi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the main function of blood?
A. respiration
B. digestion
C. to transport
D. excretion of waste
Answer:
|
|
sciq-8440
|
multiple_choice
|
What is the main component of milk?
|
[
"water",
"fat",
"lactose",
"proteins"
] |
A
|
Relevant Documents:
Document 0:::
Milk fat globule membrane (MFGM) is a complex and unique structure composed primarily of lipids and proteins that surrounds milk fat globule secreted from the milk producing cells of humans and other mammals. It is a source of multiple bioactive compounds, including phospholipids, glycolipids, glycoproteins, and carbohydrates that have important functional roles within the brain and gut.
Preclinical studies have demonstrated effects of MFGM-derived bioactive components on brain structure and function, intestinal development, and immune defense. Similarly, pediatric clinical trials have reported beneficial effects on cognitive and immune outcomes. In populations ranging from premature infants to preschool-age children, dietary supplementation with MFGM or its components has been associated with improvements in cognition and behavior, gut and oral bacterial composition, fever incidence, and infectious outcomes including diarrhea and otitis media.
MFGM may also play a role in supporting cardiovascular health by modulating cholesterol and fat uptake. Clinical trials in adult populations have shown that MFGM could positively affect markers associated with cardiovascular disease including lowering serum cholesterol and triacylglycerol levels as well as blood pressure.
Origin
MFGM secretion process in milk
Milk lipids are secreted in a unique manner by lactocytes, which are specialized epithelial cells within the alveoli of the lactating mammary gland.
The process takes place in multiple stages. First, fat synthesized within the endoplasmic reticulum accumulates in droplets between the inner and outer phospholipid monolayers of the endoplasmic reticulum membrane. As these droplets increase in size, the two monolayers separate further and eventually pinch off. This leads to the surrounding of the droplet in a phospholipid monolayer that allows it to disperse within the aqueous cytoplasm. In the next stage, lipid droplets then migrate to the apical surface of the cell,
Document 1:::
The anti-inflammatory components in breast milk are those bioactive substances that confer or increase the anti-inflammatory response in a breastfeeding infant.
Document 2:::
Milk immunity is the protection provided to immune system of an infant via the biologically active components in milk, typically provided by the infant's mother.
Mammalian milk
All mammalian milk contains water, sugar, fat, vitamins, and protein, with variation within and between species and individuals mainly in the amounts of these components. Beyond this variation in quantity, little is known about the bioactive or immune-modulating factors in many mammalian species. In comparison to other mammalian milk, however, human milk has the most oligosaccharide diversity.
Bovine milk
Ruminant mothers do not transfer immunity to their infants during pregnancy, which makes milk the first introduction to maternal immunity that calves receive. Bovine milk contains both immunoglobulins A and G, but in contrast to human milk, where IgA is the most abundant, IgG predominates. Secretory component, IgM, both anti-inflammatory and inflammatory cytokines, and other proteins with antimicrobial functions are also present in bovine milk.
Human milk
Avian crop milk
Crop milk is a secretion from the crop of a bird that is regurgitated to feed their offspring. Birds that produce this secretion include pigeons, flamingos, emperor penguins, and doves. Pigeon milk contains some immune-modulating factors such as microbes and IgA, as well as other components with similar biological activities to mammalian milk including pigeon growth factor, and transferrin.
Document 3:::
The dairy industry in the United Kingdom is the industry of dairy farming that takes place in the UK.
Production
In Europe, UK milk production is third after France & Germany and is around the tenth highest in the world. There are around 12,000 dairy farms in the UK.
Around 14 billion litres of milk are commercially produced in the UK each year.
Britain eats around 2000 tonnes of cheese a day.
Production sites
Buckinghamshire
Arla Aylesbury, which produces 10% of the UK's milk and is the world's largest milk production site
Cornwall
Davidstow Creamery, Britain's largest cheese factory, producing Cathedral City cheddar cheese
Delivery
Only 3% of milk in the UK is delivered to the door. There was an 80% drop in deliveries when supermarkets began to sell their own milk en masse. The largest commercial deliverer of milk in the UK has around 500,000 customers; there has been a recent upswing in demand for door deliveries.
Regulation
Production was regulated by the Milk Marketing Board until 1994; its processing division is now Dairy Crest. AHDB Dairy is a central resource for the UK dairy industry.
Document 4:::
The American Dairy Science Association (ADSA) is a non-profit professional organization for the advancement of dairy science. ADSA is headquartered in Champaign, Illinois.
Consisting of 4500 members, ADSA is involved in research, education, and industry relations. Areas of ADSA focus include:
care and nutrition of dairy animals;
management, economics and marketing of dairy farms and product manufacturing;
sanitation throughout the dairy industry; and,
processing of dairy-based products, including processing and foods manufacturing (milk, cheese, yogurt, and ice cream).
ADSA's top priorities are the Journal of Dairy Science, annual meetings, scientific liaisons with other organizations and agencies, and international development. ADSA is attempting to add value to potential new members through an emphasis on "integration of dairy disciplines from the farm to the table."
History
In the summer of 1905, the Graduate School of Agriculture was held at Ohio State University. Professor Wilber J. Fraser of the University of Illinois at Urbana-Champaign suggested a permanent "Dairy Instructors and Investigators Association". Attendees decided that Professor Fraser should discuss the matter further with university leaders and, if enough interest was indicated, call an organizational meeting at the 1906 Graduate School of Agriculture to be held at the University of Illinois, Urbana. Apparently, sufficient interest was raised, because Professor Fraser called interested parties to attend an inaugural meeting on July 17, 1906. Although 19 persons appear on the photograph of that first meeting, records indicate only 17 or 18 charter members joined what was then called "National Association of Dairy Instructors and Investigators". At this time, dairy schools existed at Cornell, Iowa State, Wisconsin, Purdue, Penn State, Ohio State, Missouri, Minnesota, Guelph (Ontario), and Illinois.
The second meeting was at the National Dairy Show in Chicago on 11 Oct 1907. Only 11 members
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the main component of milk?
A. water
B. fat
C. lactose
D. proteins
Answer:
|
|
sciq-4485
|
multiple_choice
|
The number of valence electrons determines variation of what property in nonmetals?
|
[
"turbidity",
"permeability",
"reactivity",
"density"
] |
C
|
Relevant Documents:
Document 0:::
Nonmetals show more variability in their properties than do metals. Metalloids are included here since they behave predominately as chemically weak nonmetals.
Physically, they nearly all exist as diatomic or monatomic gases, or polyatomic solids having more substantial (open-packed) forms and relatively small atomic radii, unlike metals, which are nearly all solid and close-packed, and mostly have larger atomic radii. If solid, they have a submetallic appearance (with the exception of sulfur) and are brittle, as opposed to metals, which are lustrous, and generally ductile or malleable; they usually have lower densities than metals; are mostly poorer conductors of heat and electricity; and tend to have significantly lower melting points and boiling points than those of most metals.
Chemically, the nonmetals mostly have higher ionisation energies, higher electron affinities (nitrogen and the noble gases have negative electron affinities) and higher electronegativity values than metals; in general, the higher an element's ionisation energy, electron affinity, and electronegativity, the more nonmetallic that element is. Nonmetals, including (to a limited extent) xenon and probably radon, usually exist as anions or oxyanions in aqueous solution; they generally form ionic or covalent compounds when combined with metals (unlike metals, which mostly form alloys with other metals); and have acidic oxides whereas the common oxides of nearly all metals are basic.
Properties
Abbreviations used in this section are: AR Allred-Rochow; CN coordination number; and MH Mohs hardness
Group 1
Hydrogen is a colourless, odourless, and comparatively unreactive diatomic gas with a density of 8.988 × 10⁻⁵ g/cm³ and is about 14 times lighter than air. It condenses to a colourless liquid at −252.879 °C and freezes into an ice- or snow-like solid at −259.16 °C. The solid form has a hexagonal crystalline structure and is soft and easily crushed. Hydrogen is an insulator in all of
Document 1:::
Steudel R 2020, Chemistry of the Non-metals: Syntheses - Structures - Bonding - Applications, in collaboration with D Scheschkewitz, Berlin, Walter de Gruyter, . ▲
An updated translation of the 5th German edition of 2013, incorporating the literature up to Spring 2019. Twenty-three nonmetals, including B, Si, Ge, As, Se, Te, and At but not Sb (nor Po). The nonmetals are identified on the basis of their electrical conductivity at absolute zero putatively being close to zero, rather than finite as in the case of metals. That does not work for As however, which has the electronic structure of a semimetal (like Sb).
Halka M & Nordstrom B 2010, "Nonmetals", Facts on File, New York,
A reading level 9+ book covering H, C, N, O, P, S, Se. Complementary books by the same authors examine (a) the post-transition metals (Al, Ga, In, Tl, Sn, Pb and Bi) and metalloids (B, Si, Ge, As, Sb, Te and Po); and (b) the halogens and noble gases.
Woolins JD 1988, Non-Metal Rings, Cages and Clusters, John Wiley & Sons, Chichester, .
A more advanced text that covers H; B; C, Si, Ge; N, P, As, Sb; O, S, Se and Te.
Steudel R 1977, Chemistry of the Non-metals: With an Introduction to Atomic Structure and Chemical Bonding, English edition by FC Nachod & JJ Zuckerman, Berlin, Walter de Gruyter, . ▲
Twenty-four nonmetals, including B, Si, Ge, As, Se, Te, Po and At.
Powell P & Timms PL 1974, The Chemistry of the Non-metals, Chapman & Hall, London, . ▲
Twenty-two nonmetals including B, Si, Ge, As and Te. Tin and antimony are shown as being intermediate between metals and nonmetals; they are later shown as either metals or nonmetals. Astatine is counted as a metal.
Document 2:::
Quantum capacitance, also called chemical capacitance and electrochemical capacitance, is a quantity first introduced by Serge Luryi (1988), and is defined as the variation of electrical charge Q with respect to the variation of electrochemical potential μ, i.e., C_Q = ∂Q/∂μ.
In the simplest example, if you make a parallel-plate capacitor where one or both of the plates has a low density of states, then the capacitance is not given by the normal formula for parallel-plate capacitors, C_geom = εA/d. Instead, the capacitance is lower, as if there were another capacitance in series, so that 1/C_eq = 1/C_geom + 1/C_Q. This second capacitance, related to the density of states of the plates, is the quantum capacitance, represented by C_Q. The equivalent capacitance C_eq is called the electrochemical capacitance.
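A minimal numerical sketch of the series combination described above; the capacitance values are hypothetical and chosen only to show that the equivalent (electrochemical) capacitance is smaller than either contribution.

```python
# Series combination of a geometric capacitance and a quantum capacitance,
# as described above: 1/C_eq = 1/C_geom + 1/C_Q.

def electrochemical_capacitance(c_geom: float, c_q: float) -> float:
    """Equivalent (electrochemical) capacitance of C_geom in series with C_Q."""
    return 1.0 / (1.0 / c_geom + 1.0 / c_q)


# Hypothetical values in farads, chosen only to illustrate that the
# equivalent capacitance is always smaller than either contribution.
c_geom = 1.0e-15  # geometric (parallel-plate) capacitance
c_q = 0.5e-15     # quantum capacitance of a low-density-of-states plate

print(electrochemical_capacitance(c_geom, c_q))  # ~3.3e-16 F, below both inputs
```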
Quantum capacitance is especially important for low-density-of-states systems, such as a 2-dimensional electronic system in a semiconductor surface or interface or graphene, and can be used to construct an experimental energy functional of electron density.
Overview
When a voltmeter is used to measure an electronic device, it does not quite measure the pure electric potential (also called the Galvani potential). Instead, it measures the electrochemical potential, also called the "Fermi level difference", which is the total free energy difference per electron, including not only its electric potential energy but also all other forces and influences on the electron (such as the kinetic energy in its wavefunction). For example, in a p-n junction in equilibrium, there is a galvani potential (built-in potential) across the junction, but the "voltage" across it is zero (in the sense that a voltmeter would measure zero voltage).
In a capacitor, there is a relation between charge and voltage, Q = CV. As explained above, we can divide the voltage into two pieces: the galvani potential, and everything else.
In a traditional metal-insulator-metal capacitor, the galvani potential is the only relevant contribution. Therefore, the capacitance can be c
Document 3:::
A nonmetal is a chemical element that mostly lacks metallic properties. Seventeen elements are generally considered nonmetals, though some authors recognize more or fewer depending on the properties considered most representative of metallic or nonmetallic character. Some borderline elements further complicate the situation.
Nonmetals tend to have low density and high electronegativity (the ability of an atom in a molecule to attract electrons to itself). They range from colorless gases like hydrogen to shiny solids like the graphite form of carbon. Nonmetals are often poor conductors of heat and electricity, and when solid tend to be brittle or crumbly. In contrast, metals are good conductors and most are pliable. While compounds of metals tend to be basic, those of nonmetals tend to be acidic.
The two lightest nonmetals, hydrogen and helium, together make up about 98% of the observable ordinary matter in the universe by mass. Five nonmetallic elements—hydrogen, carbon, nitrogen, oxygen, and silicon—make up the overwhelming majority of the Earth's crust, atmosphere, oceans and biosphere.
The distinct properties of nonmetallic elements allow for specific uses that metals often cannot achieve. Elements like hydrogen, oxygen, carbon, and nitrogen are essential building blocks for life itself. Moreover, nonmetallic elements are integral to industries such as electronics, energy storage, agriculture, and chemical production.
Most nonmetallic elements were not identified until the 18th and 19th centuries. While a distinction between metals and other minerals had existed since antiquity, a basic classification of chemical elements as metallic or nonmetallic emerged only in the late 18th century. Since then nigh on two dozen properties have been suggested as single criteria for distinguishing nonmetals from metals.
Definition and applicable elements
Properties mentioned hereafter refer to the elements in their most stable forms in ambient conditions unless otherwise
Document 4:::
In chemistry and physics, valence electrons are electrons in the outermost shell of an atom, and that can participate in the formation of a chemical bond if the outermost shell is not closed. In a single covalent bond, a shared pair forms with both atoms in the bond each contributing one valence electron.
The presence of valence electrons can determine the element's chemical properties, such as its valence—whether it may bond with other elements and, if so, how readily and with how many. In this way, a given element's reactivity is highly dependent upon its electronic configuration. For a main-group element, a valence electron can exist only in the outermost electron shell; for a transition metal, a valence electron can also be in an inner shell.
An atom with a closed shell of valence electrons (corresponding to a noble gas configuration) tends to be chemically inert. Atoms with one or two valence electrons more than a closed shell are highly reactive due to the relatively low energy to remove the extra valence electrons to form a positive ion. An atom with one or two electrons fewer than a closed shell is reactive due to its tendency either to gain the missing valence electrons and form a negative ion, or else to share valence electrons and form a covalent bond.
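As a rough illustration of the closed-shell heuristic described above, a small sketch for main-group elements; the group-to-valence-electron mapping is the usual textbook convention, and the classification is a deliberate simplification rather than a predictive model of reactivity.

```python
# Simplified heuristic for main-group elements, following the paragraph
# above: a full valence shell (8 electrons, or 2 for helium) suggests an
# inert element; being within one or two electrons of a full shell
# suggests high reactivity. This is a caricature, not a predictive model.

def valence_electrons(main_group: int) -> int:
    """Valence electrons for a main-group element (groups 1, 2, 13-18)."""
    if main_group in (1, 2):
        return main_group
    if 13 <= main_group <= 18:
        return main_group - 10
    raise ValueError("transition metals are not covered by this heuristic")


def reactivity_hint(n_valence: int, shell_capacity: int = 8) -> str:
    missing = shell_capacity - n_valence
    if missing == 0:
        return "closed shell: tends to be inert"
    if n_valence <= 2 or missing <= 2:
        return "near a closed shell: tends to be highly reactive"
    return "intermediate: often shares electrons covalently"


for group in (1, 2, 14, 16, 17, 18):
    n = valence_electrons(group)
    print(f"group {group}: {n} valence electron(s) -> {reactivity_hint(n)}")
```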
Similar to a core electron, a valence electron has the ability to absorb or release energy in the form of a photon. An energy gain can trigger the electron to move (jump) to an outer shell; this is known as atomic excitation. Or the electron can even break free from its associated atom's shell; this is ionization to form a positive ion. When an electron loses energy (thereby causing a photon to be emitted), then it can move to an inner shell which is not fully occupied.
Overview
Electron configuration
The electrons that determine valence – how an atom reacts chemically – are those with the highest energy.
For a main-group element, the valence electrons are defined as those electrons residing in the e
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The number of valence electrons determines variation of what property in nonmetals?
A. turbidity
B. permeability
C. reactivity
D. density
Answer:
|
|
sciq-9834
|
multiple_choice
|
What happens if a molecule forms strong bonds to the catalyst?
|
[
"molecules gets poisoned",
"membrane gets poisoned",
"catalyst gets poisoned",
"surface gets poisoned"
] |
C
|
Relevant Documents:
Document 0:::
Chemisorption is a kind of adsorption which involves a chemical reaction between the surface and the adsorbate. New chemical bonds are generated at the adsorbent surface. Examples include macroscopic phenomena that can be very obvious, like corrosion, and subtler effects associated with heterogeneous catalysis, where the catalyst and reactants are in different phases. The strong interaction between the adsorbate and the substrate surface creates new types of electronic bonds.
In contrast with chemisorption is physisorption, which leaves the chemical species of the adsorbate and surface intact. It is conventionally accepted that the energetic threshold separating the binding energy of "physisorption" from that of "chemisorption" is about 0.5 eV per adsorbed species.
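A minimal sketch of that conventional energy threshold, classifying an adsorption event by its binding energy per adsorbed species; the ~0.5 eV cutoff is the rule of thumb quoted above, and real systems are judged by more than a single number.

```python
# Conventional rule of thumb quoted above: binding energies below roughly
# 0.5 eV per adsorbed species are treated as physisorption, above as
# chemisorption. Real classification also considers the bonding character.

CHEMISORPTION_THRESHOLD_EV = 0.5


def classify_adsorption(binding_energy_ev: float) -> str:
    if binding_energy_ev >= CHEMISORPTION_THRESHOLD_EV:
        return "chemisorption (new chemical bond to the surface)"
    return "physisorption (weak interaction, e.g. van der Waals)"


for energy in (0.05, 0.3, 0.8, 2.5):  # illustrative values in eV
    print(f"{energy:.2f} eV -> {classify_adsorption(energy)}")
```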
Due to specificity, the nature of chemisorption can greatly differ, depending on the chemical identity and the surface structural properties.
The bond between the adsorbate and adsorbent in chemisorption is either ionic or covalent.
Uses
An important example of chemisorption is in heterogeneous catalysis which involves molecules reacting with each other via the formation of chemisorbed intermediates. After the chemisorbed species combine (by forming bonds with each other) the product desorbs from the surface.
Self-assembled monolayers
Self-assembled monolayers (SAMs) are formed by chemisorbing reactive reagents with metal surfaces. A famous example involves thiols (RS-H) adsorbing onto the surface of gold. This process forms strong Au-SR bonds and releases H2. The densely packed SR groups protect the surface.
Gas-surface chemisorption
Adsorption kinetics
As an instance of adsorption, chemisorption follows the adsorption process. The first stage is for the adsorbate particle to come into contact with the surface. The particle needs to be trapped onto the surface by not possessing enough energy to leave the gas-surface potential well. If it elastically collides with the surface, then it would
Document 1:::
In chemistry, bond cleavage, or bond fission, is the splitting of chemical bonds. This can be generally referred to as dissociation when a molecule is cleaved into two or more fragments.
In general, there are two classifications for bond cleavage: homolytic and heterolytic, depending on the nature of the process. The triplet and singlet excitation energies of a sigma bond can be used to determine if a bond will follow the homolytic or heterolytic pathway. A metal−metal sigma bond is an exception because the bond's excitation energy is extremely high, thus cannot be used for observation purposes.
In some cases, bond cleavage requires catalysts. Due to the high bond-dissociation energy of C−H bonds, a large amount of energy is required to cleave the hydrogen atom from the carbon and bond a different atom to the carbon.
Homolytic cleavage
In homolytic cleavage, or homolysis, the two electrons in a cleaved covalent bond are divided equally between the products. This process is also known as homolytic fission or radical fission. The bond-dissociation energy of a bond is the amount of energy required to cleave the bond homolytically. This enthalpy change is one measure of bond strength.
The triplet excitation energy of a sigma bond is the energy required for homolytic dissociation, but the actual excitation energy may be higher than the bond-dissociation energy due to the repulsion between electrons in the triplet state.
Heterolytic cleavage
In heterolytic cleavage, or heterolysis, the bond breaks in such a fashion that the originally-shared pair of electrons remain with one of the fragments. Thus, a fragment gains an electron, having both bonding electrons, while the other fragment loses an electron. This process is also known as ionic fission.
The singlet excitation energy of a sigma bond is the energy required for heterolytic dissociation, but the actual singlet excitation energy may be lower than the bond-dissociation energy of heterolysis as a resu
Document 2:::
In molecular biology, a scissile bond is a covalent chemical bond that can be broken by an enzyme. Examples would be the cleaved bond in the self-cleaving hammerhead ribozyme or the peptide bond of a substrate cleaved by a peptidase.
Document 3:::
In chemistry, a radical, also known as a free radical, is an atom, molecule, or ion that has at least one unpaired valence electron.
With some exceptions, these unpaired electrons make radicals highly chemically reactive. Many radicals spontaneously dimerize. Most organic radicals have short lifetimes.
A notable example of a radical is the hydroxyl radical (HO·), a molecule that has one unpaired electron on the oxygen atom. Two other examples are triplet oxygen and triplet carbene, which have two unpaired electrons.
Radicals may be generated in a number of ways, but typical methods involve redox reactions. Ionizing radiation, heat, electrical discharges, and electrolysis are known to produce radicals. Radicals are intermediates in many chemical reactions, more so than is apparent from the balanced equations.
Radicals are important in combustion, atmospheric chemistry, polymerization, plasma chemistry, biochemistry, and many other chemical processes. A majority of natural products are generated by radical-generating enzymes. In living organisms, the radicals superoxide and nitric oxide and their reaction products regulate many processes, such as control of vascular tone and thus blood pressure. They also play a key role in the intermediary metabolism of various biological compounds. Such radicals can even be messengers in a process dubbed redox signaling. A radical may be trapped within a solvent cage or be otherwise bound.
Formation
Radicals are either (1) formed from spin-paired molecules or (2) from other radicals. Radicals are formed from spin-paired molecules through homolysis of weak bonds or electron transfer, also known as reduction. Radicals are formed from other radicals through substitution, addition, and elimination reactions.
Radical formation from spin-paired molecules
Homolysis
Homolysis makes two new radicals from a spin-paired molecule by breaking a covalent bond, leaving each of the fragments with one of the electrons in the bond. Bec
Document 4:::
Molecular binding is an attractive interaction between two molecules that results in a stable association in which the molecules are in close proximity to each other. It is formed when atoms or molecules bind together by sharing of electrons. It often, but not always, involves some chemical bonding.
In some cases, the associations can be quite strong—for example, the protein streptavidin and the vitamin biotin have a dissociation constant (reflecting the ratio between bound and free biotin) on the order of 10⁻¹⁴—and so the reactions are effectively irreversible. The result of molecular binding is sometimes the formation of a molecular complex in which the attractive forces holding the components together are generally non-covalent, and thus are normally energetically weaker than covalent bonds.
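To make the dissociation-constant remark concrete, a small sketch of the standard single-site binding isotherm (fraction bound = [P] / (Kd + [P]), assuming a simple 1:1 equilibrium); the streptavidin–biotin Kd of about 1e-14 M is the order of magnitude cited above, and the concentrations are hypothetical.

```python
# Single-site binding isotherm: with the binding partner present at free
# concentration p (in mol/L), the fraction of ligand bound is
#   fraction_bound = p / (Kd + p).
# Kd ~ 1e-14 M is the order of magnitude cited above for streptavidin-biotin;
# the protein concentrations below are hypothetical illustrations.

def fraction_bound(free_partner_molar: float, kd_molar: float) -> float:
    return free_partner_molar / (kd_molar + free_partner_molar)


kd_streptavidin_biotin = 1e-14  # mol/L
for protein_conc in (1e-12, 1e-9, 1e-6):  # hypothetical free concentrations
    f = fraction_bound(protein_conc, kd_streptavidin_biotin)
    print(f"[P] = {protein_conc:.0e} M -> fraction of biotin bound ~ {f:.6f}")
# Even at picomolar partner concentrations the bound fraction approaches 1,
# which is why such binding is described as effectively irreversible.
```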
Molecular binding occurs in biological complexes (e.g., between pairs or sets of proteins, or between a protein and a small molecule ligand it binds) and also in abiologic chemical systems, e.g. as in cases of coordination polymers and coordination networks such as metal-organic frameworks.
Types
Molecular binding can be classified into the following types:
Non-covalent – no chemical bonds are formed between the two interacting molecules hence the association is fully reversible
Reversible covalent – a chemical bond is formed, however the free energy difference separating the noncovalently-bonded reactants from bonded product is near equilibrium and the activation barrier is relatively low such that the reverse reaction which cleaves the chemical bond easily occurs
Irreversible covalent – a chemical bond is formed in which the product is thermodynamically much more stable than the reactants such that the reverse reaction does not take place.
Bound molecules are sometimes called a "molecular complex"—the term generally refers to non-covalent associations. Non-covalent interactions can effectively become irreversible; for example, tight binding inhibitors of enzymes
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What happens if a molecule forms strong bonds to the catalyst?
A. molecules gets poisoned
B. membrane gets poisoned
C. catalyst gets poisoned
D. surface gets poisoned
Answer:
|
|
sciq-2727
|
multiple_choice
|
Chloroplasts are present only in cells of eukaryotes capable of what process?
|
[
"digestion",
"sexual reproduction",
"hydrolysis",
"photosynthesis"
] |
D
|
Relevant Documents:
Document 0:::
In contrast to the Cladophorales where nuclei are organized in regularly spaced cytoplasmic domains, the cytoplasm of Bryopsidales exhibits streaming, enabling transportation of organelles, transcripts and nutrients across the plant.
The Sphaeropleales also contain several common freshwat
Document 1:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than those of the later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In the cytoplasm, the endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances taken into the cell by phagocytosis, a form of endocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
Document 2:::
Organelle biogenesis is the biogenesis, or creation, of cellular organelles in cells. Organelle biogenesis includes the process by which cellular organelles are split between daughter cells during mitosis; this process is called organelle inheritance.
Discovery
Following the discovery of cellular organelles in the nineteenth century, little was known about their function and synthesis until the development of electron microscopy and subcellular fractionation in the twentieth century. This allowed experiments on the function, structure, and biogenesis of these organelles to commence.
Mechanisms of protein sorting and retrieval have been found to give organelles their characteristic composition. It is known that cellular organelles can come from preexisting organelles; however, it is a subject of controversy whether organelles can be created without a preexisting one.
Process
Several processes are known to have developed for organelle biogenesis. These can range from de novo synthesis to the copying of a template organelle; the formation of an organelle 'from scratch' and using a preexisting organelle as a template to manufacture an organelle, respectively. The distinct structures of each organelle are thought to be caused by the different mechanisms of the processes which create them and the proteins that they are made up of. Organelles may also be 'split' between two cells during the process of cellular division (known as organelle inheritance), where the organelle of the parent cell doubles in size and then splits with each half being delivered to their respective daughter cells.
The process of organelle biogenesis is known to be regulated by specialized transcription networks that modulate the expression of the genes that code for specific organellar proteins. In order for organelle biogenesis to be carried out properly, the specific genes coding for the organellar proteins must be transcribed properly and the translation of the resulting mRNA must be succes
Document 3:::
Transfer cells are specialized parenchyma cells that have an increased surface area, due to infoldings of the plasma membrane. They facilitate the transport of sugars from a sugar source, mainly mature leaves, to a sugar sink, often developing leaves or fruits. They are found in nectaries of flowers and some carnivorous plants.
Transfer cells are especially found in plants in regions of nutrient absorption or secretion.
The term transfer cell was coined by Brian Gunning and John Stewart Pate. Their presence is generally correlated with the existence of extensive solute influxes across the plasma membrane.
Document 4:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, a double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Chloroplasts are present only in cells of eukaryotes capable of what process?
A. digestion
B. sexual reproduction
C. hydrolysis
D. photosynthesis
Answer:
|
|
sciq-1187
|
multiple_choice
|
What would you need to see most cells?
|
[
"microscope",
"ultraviolet",
"mirror",
"infrared"
] |
A
|
Relevant Documents:
Document 0:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i
Document 1:::
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States.
Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is suspended with the message:
"Please check back with us in 2017".
External links
MicrobeLibrary
Microbiology
Document 2:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind – neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases, which has aided the overall understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of the combination of living organisms and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
Document 3:::
A microbiologist (from Greek ) is a scientist who studies microscopic life forms and processes. This includes study of the growth, interactions and characteristics of microscopic organisms such as bacteria, algae, fungi, and some types of parasites and their vectors. Most microbiologists work in offices and/or research facilities, both in private biotechnology companies and in academia. Most microbiologists specialize in a given topic within microbiology such as bacteriology, parasitology, virology, or immunology.
Duties
Microbiologists generally work in some way to increase scientific knowledge or to utilise that knowledge in a way that improves outcomes in medicine or some industry. For many microbiologists, this work includes planning and conducting experimental research projects in some kind of laboratory setting. Others may have a more administrative role, supervising scientists and evaluating their results. Microbiologists working in the medical field, such as clinical microbiologists, may see patients or patient samples and do various tests to detect disease-causing organisms.
For microbiologists working in academia, duties include performing research in an academic laboratory, writing grant proposals to fund research, as well as some amount of teaching and designing courses. Microbiologists in industry roles may have similar duties except research is performed in industrial labs in order to develop or improve commercial products and processes. Industry jobs may also include some degree of sales and marketing work, as well as regulatory compliance duties. Microbiologists working in government may have a variety of duties, including laboratory research, writing and advising, developing and reviewing regulatory processes, and overseeing grants offered to outside institutions. Some microbiologists work in the field of patent law, either with national patent offices or private law practices. Their duties include research and navigation of intellectual proper
Document 4:::
Automated tissue image analysis or histopathology image analysis (HIMA) is a process by which computer-controlled automatic test equipment is used to evaluate tissue samples, using computations to derive quantitative measurements from an image to avoid subjective errors.
In a typical application, automated tissue image analysis could be used to measure the aggregate activity of cancer cells in a biopsy of a cancerous tumor taken from a patient. In breast cancer patients, for example, automated tissue image analysis may be used to test for high levels of proteins known to be present in more aggressive forms of breast cancers.
Applications
Automated tissue imaging analysis can significantly reduce uncertainty in characterizing tumors compared to evaluations done by histologists, or improve the prediction rate of recurrence of some cancers. As it is a digital system, suitable for networking, it also facilitates cooperative efforts between distant sites. Systems for automatically analyzing tissue samples also reduce costs and save time.
High-performance CCD cameras are used for acquiring the digital images. Coupled with advanced widefield microscopes and various algorithms for image restoration, this approach can provide better results than confocal techniques at comparable speeds and lower costs.
Processes
The United States Food and Drug Administration classifies these systems as medical devices, under the general instrumentation category of automatic test equipment.
ATIS have seven basic processes (sample preparation, image acquisition, image analysis, results reporting, data storage, network communication, and self-system diagnostics), and realization of these functions requires highly accurate hardware and well-integrated, complex, and expensive software.
Preparation
Specimen preparation is critical for evaluating the tumor in the automated system. In the first part of the preparation process the biopsied tissue is cut to an appropriate size (typically 4 mm), fixed in b
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What would you need to see most cells?
A. microscope
B. ultraviolet
C. mirror
D. infrared
Answer:
|
|
sciq-10351
|
multiple_choice
|
What element enters the body when an organism breathes?
|
[
"water",
"helium",
"oxygen",
"methane"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major, and alumni have gone on to postgraduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certificate (CSEC), which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on the structure and functioning (anatomy, physiology, biochemistry) of the human body and their relevance to human health, with a Caribbean-specific emphasis. The syllabus is organized under five main sections: living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, and the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 2:::
A trace element is a chemical element of a minute quantity, a trace amount, especially used in referring to a micronutrient, but is also used to refer to minor elements in the composition of a rock, or other chemical substance.
In nutrition, trace elements are classified into two groups: essential trace elements and non-essential trace elements. Essential trace elements are needed for many physiological and biochemical processes in both plants and animals. Not only do trace elements play a role in biological processes, but they also serve as catalysts in redox (oxidation and reduction) mechanisms. Trace amounts of some heavy metals have a biological role as essential micronutrients.
Types
The two types of trace element in biochemistry are classed as essential or non-essential.
Essential trace elements
An essential trace element is a dietary element, a mineral that is only needed in minute quantities for the proper growth, development, and physiology of the organism. The essential trace elements are those that are required to perform vital metabolic activities in organisms. Essential trace elements in the nutrition of humans and other animals include iron (Fe) (hemoglobin), copper (Cu) (respiratory pigments), cobalt (Co) (vitamin B12), iodine, manganese (Mn) and zinc (Zn) (enzymes). Although they are essential, they become toxic at high concentrations.
Non-essential trace elements
Non-essential trace elements include silver (Ag), arsenic (As), cadmium (Cd), chromium (Cr), mercury (Hg), lead (Pb), and tin (Sn), and have no known biological function, with toxic effects even at low concentration.
The structural components of cells and tissues that are required in the diet in gram quantities daily are known as bulk elements.
See also
Antinutrient
Bowen's Kale
Geotraces
List of micronutrients
Document 3:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99th percentile) and 320 (1st percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 4:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon the college entrance requirements of the schools to which the student was planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT Subject Tests, the Biology E/M test was the only one that allowed the test taker a choice between the ecological and molecular tests. A set of 60 questions was taken by all test takers for Biology, and a choice of 20 questions was allowed between either the E or M test. This test was graded on a scale between 200 and 800. The average for Molecular was 630, while for Ecological it was 591.
On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
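The scoring rule above translates directly into a raw-score formula (the conversion from raw score to the 200–800 scale was table-based per administration and is not reproduced here). A minimal Python sketch:

def raw_score(correct, incorrect, blank):
    # +1 per correct answer, -1/4 per incorrect answer, 0 for blanks.
    assert correct + incorrect + blank == 80, "the Biology E/M test had 80 questions"
    return correct - 0.25 * incorrect

# Example: 55 correct, 15 incorrect, 10 blank  ->  raw score 51.25
print(raw_score(55, 15, 10))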
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What element enters the body when an organism breathes?
A. water
B. helium
C. oxygen
D. methane
Answer:
|
|
sciq-692
|
multiple_choice
|
What is used to recrystallize excess dissolved solute in a supersaturated solution?
|
[
"seed crystal",
"energy crystal",
"starter crystal",
"fertilizer crystal"
] |
A
|
Relavent Documents:
Document 0:::
In physical chemistry, supersaturation occurs with a solution when the concentration of a solute exceeds the concentration specified by the value of solubility at equilibrium. Most commonly the term is applied to a solution of a solid in a liquid, but it can also be applied to liquids and gases dissolved in a liquid. A supersaturated solution is in a metastable state; it may return to equilibrium by separation of the excess of solute from the solution, by dilution of the solution by adding solvent, or by increasing the solubility of the solute in the solvent.
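Supersaturation is commonly quantified as the ratio of the actual solute concentration to the equilibrium solubility; a ratio above 1 marks the metastable state described above. A minimal Python sketch with illustrative (not measured) numbers:

def supersaturation_ratio(concentration, solubility):
    # S = C / C_eq: S > 1 supersaturated, S = 1 saturated, S < 1 undersaturated.
    return concentration / solubility

S = supersaturation_ratio(500.0, 400.0)  # e.g. 500 g/L dissolved vs. 400 g/L equilibrium solubility
print(f"S = {S:.2f} -> {'supersaturated' if S > 1 else 'not supersaturated'}")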
History
Early studies of the phenomenon were conducted with sodium sulfate, also known as Glauber's salt; unusually, the solubility of this salt in water may decrease with increasing temperature. The early studies have been summarised by Tomlinson. It was shown that the crystallization of a supersaturated solution does not simply come from its agitation (the previous belief), but from solid matter entering the solution and acting as a "starting" site for crystal formation, now called a "seed". Expanding upon this, Gay-Lussac brought attention to the kinematics of the salt ions and to the characteristics of the container as having an impact on the supersaturated state. He was also able to expand the number of salts with which a supersaturated solution can be obtained. Later, Henri Löwel came to the conclusion that both nuclei in the solution and the walls of the container have a catalyzing effect on the solution that causes crystallization. Explaining and providing a model for this phenomenon has been a task taken on by more recent research. Désiré Gernez contributed to this research by discovering that nuclei must be of the same salt that is being crystallized in order to promote crystallization.
Occurrence and examples
Solid precipitate, liquid solvent
A solution of a chemical compound in a liquid will become supersaturated when the temperature of the saturated solution is changed. In most cases solubility decreases wit
Document 1:::
In chemistry, recrystallization is a technique used to purify chemicals. By dissolving a mixture of a compound and impurities in an appropriate solvent, either the desired compound or impurities can be removed from the solution, leaving the other behind. It is named for the crystals often formed when the compound precipitates out. Alternatively, recrystallization can refer to the natural growth of larger ice crystals at the expense of smaller ones.
Chemistry
In chemistry, recrystallization is a procedure for purifying compounds. The most typical situation is that a desired "compound A" is contaminated by a small amount of "impurity B". There are various methods of purification that may be attempted (see Separation process), recrystallization being one of them. There are also different recrystallization techniques that can be used such as:
Single-solvent recrystallization
Typically, the mixture of "compound A" and "impurity B" is dissolved in the smallest amount of hot solvent needed to fully dissolve the mixture, thus making a saturated solution. The solution is then allowed to cool. As the solution cools, the solubility of the compounds in it drops. This results in the desired compound dropping (recrystallizing) out of the solution. The slower the rate of cooling, the larger the crystals that form.
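Before considering impurities, the expected crystal yield from such a cooling step can be estimated from the cold-solvent solubility alone. A minimal Python sketch with assumed, illustrative solubility figures:

def crystal_yield(dissolved_g, cold_solubility_g_per_100ml, solvent_ml):
    # Whatever exceeds the cold-solvent solubility is expected to crystallize;
    # losses, co-precipitation and slow kinetics are ignored in this estimate.
    still_dissolved = cold_solubility_g_per_100ml * solvent_ml / 100.0
    return max(0.0, dissolved_g - still_dissolved)

# Illustrative: 10 g dissolved in 50 mL of hot solvent, cold solubility 4 g per 100 mL
print(crystal_yield(10.0, 4.0, 50.0))  # -> 8.0 g of crystals expected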
In an ideal situation the solubility product of the impurity, B, is not exceeded at any temperature. In that case, the solid crystals will consist of pure A and all the impurities will remain in the solution. The solid crystals are collected by filtration and the filtrate is discarded. If the solubility product of the impurity is exceeded, some of the impurities will co-precipitate. However, because of the relatively low concentration of the impurity, its concentration in the precipitated crystals will be less than its concentration in the original solid. Repeated recrystallization will result in an even purer crystalline precipitate. The purity is checked after each recrysta
Document 2:::
Semper rehydration solution is a mixture used for the management of dehydration. Each liter of Semper rehydration solution contains 189 mmol glucose, 40 mmol Na+, 35 mmol Cl−, 20 mmol K+ and 25 mmol HCO3−.
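For reference, those per-litre amounts can be converted from millimoles to grams using rounded molar masses; a minimal Python sketch (an illustration of the arithmetic, not a preparation recipe):

MOLAR_MASS = {"glucose": 180.16, "Na+": 22.99, "Cl-": 35.45, "K+": 39.10, "HCO3-": 61.02}  # g/mol, rounded
COMPOSITION_MMOL = {"glucose": 189, "Na+": 40, "Cl-": 35, "K+": 20, "HCO3-": 25}            # mmol per litre

for species, mmol in COMPOSITION_MMOL.items():
    print(f"{species}: {mmol * MOLAR_MASS[species] / 1000.0:.2f} g/L")  # e.g. glucose ~34 g/L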
Document 3:::
In chemistry, fractional crystallization is a method of refining substances based on differences in their solubility. It fractionates via differences in crystallization (forming of crystals). If a mixture of two or more substances in solution are allowed to crystallize, for example by allowing the temperature of the solution to decrease or increase, the precipitate will contain more of the least soluble substance. The proportion of components in the precipitate will depend on their solubility products. If the solubility products are very similar, a cascade process will be needed to effectuate a complete separation.
This technique is often used in chemical engineering to obtain pure substances, or to recover saleable products from waste solutions.
Fractional crystallization can be used to separate solid-solid mixtures. An example of this is separating KNO3 and KClO3.
See also
Cold Water Extraction
Fractional crystallization (geology)
Fractional freezing
Laser-heated pedestal growth
Pumpable ice technology
Recrystallization (chemistry)
Seed crystal
Single crystal
Document 4:::
Sucrose octapropionate is a chemical compound, an eight-fold ester of sucrose and propionic acid. Its molecule can be described as that of sucrose with its eight hydroxyl groups (–OH) replaced by propionate ester groups (–O–CO–C2H5). It is a colorless crystalline solid. It is also called sucrose octapropanoate or octapropionyl sucrose.
History
The preparation of sucrose octapropionate was first described in 1933 by Gerald J. Cox and others.
Preparation
The compound can be prepared by the reaction of sucrose with propionic anhydride in the melt state or at room temperature, over several days, in anhydrous pyridine.
Properties
Sucrose octapropionate is only slightly soluble in water (less than 0.1 g/L) but is soluble in many common organic solvents such as isopropanol and ethanol, from which it can be crystallized by evaporation of the solvent.
The crystalline form melts at 45.4–45.5 °C into a viscous liquid (47.8 poises at 48.9 °C), that becomes a clear glassy solid on cooling, but easily recrystallizes.
The density of the glassy form is 1.185 kg/L (at 20 °C). It is an optically active compound with [α]20D +53°.
The compound can be vacuum distilled at 280–290 °C and 0.05 to 0.07 torr.
Applications
Distillation of fully esterified propionates has been proposed as a method for the separation and identification of sugars.
While the crystallinity of the pure compound prevents its use as a plasticizer it was found that incompletely esterified variants (with 1 to 2 remaining hydroxyls per molecule) will not crystallize, and therefore can be considered for that application.
See also
Sucrose octaacetate
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is used to recrystallize excess dissolved solute in a supersaturated solution?
A. seed crystal
B. energy crystal
C. starter crystal
D. fertilizer crystal
Answer:
|
|
sciq-7495
|
multiple_choice
|
During what in the small intestine do rings of smooth muscle repeatedly contract and then relax?
|
[
"mitosis",
"contraction",
"segmentation",
"compression"
] |
C
|
Relavent Documents:
Document 0:::
The gastrointestinal wall of the gastrointestinal tract is made up of four layers of specialised tissue. From the inner cavity of the gut (the lumen) outwards, these are:
Mucosa
Submucosa
Muscular layer
Serosa or adventitia
The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the lumen of the tract and comes into direct contact with digested food (chyme). The mucosa itself is made up of three layers: the epithelium, where most digestive, absorptive and secretory processes occur; the lamina propria, a layer of connective tissue, and the muscularis mucosae, a thin layer of smooth muscle.
The submucosa contains nerves, including the submucous plexus (also called Meissner's plexus), blood vessels, and elastic fibres with collagen that stretch to accommodate increased capacity but maintain the shape of the intestine.
The muscular layer surrounds the submucosa. It comprises layers of smooth muscle in longitudinal and circular orientation that also helps with continued bowel movements (peristalsis) and the movement of digested material out of and along the gut. In between the two layers of muscle lies the myenteric plexus (also called Auerbach's plexus).
The serosa/adventitia are the final layers. These are made up of loose connective tissue and coated in mucus so as to prevent any friction damage from the intestine rubbing against other tissue. The serosa is present if the tissue is within the peritoneum, and the adventitia if the tissue is retroperitoneal.
Structure
When viewed under the microscope, the gastrointestinal wall has a consistent general form, but with certain parts differing along its course.
Mucosa
The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the cavity (lumen) of the tract and comes into direct contact with digested food (chyme). The mucosa is made up of three layers:
The epithelium is the innermost layer. It is where most digestive, absorptive and secretory processes occur.
The lamina propr
Document 1:::
The muscular layer (muscular coat, muscular fibers, muscularis propria, muscularis externa) is a region of muscle in many organs in the vertebrate body, adjacent to the submucosa. It is responsible for gut movement such as peristalsis. The Latin, tunica muscularis, may also be used.
Structure
It usually has two layers of smooth muscle:
inner and "circular"
outer and "longitudinal"
However, there are some exceptions to this pattern.
In the stomach there are three layers to the muscular layer. Stomach contains an additional oblique muscle layer just interior to circular muscle layer.
In the upper esophagus, part of the externa is skeletal muscle, rather than smooth muscle.
In the vas deferens of the spermatic cord, there are three layers: inner longitudinal, middle circular, and outer longitudinal.
In the ureter the smooth muscle orientation is opposite that of the GI tract. There is an inner longitudinal and an outer circular layer.
The inner layer of the muscularis externa forms a sphincter at two locations of the gastrointestinal tract:
in the pylorus of the stomach, it forms the pyloric sphincter.
in the anal canal, it forms the internal anal sphincter.
In the colon, the fibres of the external longitudinal smooth muscle layer are collected into three longitudinal bands, the teniae coli.
The thickest muscularis layer is found in the stomach (triple layered) and thus maximum peristalsis occurs in the stomach. The thinnest muscularis layer in the alimentary canal is found in the rectum, where minimum peristalsis occurs.
Function
The muscularis layer is responsible for the peristaltic movements and segmental contractions in and the alimentary canal. The Auerbach's nerve plexus (myenteric nerve plexus) is found between longitudinal and circular muscle layers, it starts muscle contractions to initiate peristalsis.
Document 2:::
The basal or basic electrical rhythm (BER) or electrical control activity (ECA) is the spontaneous depolarization and repolarization of pacemaker cells known as interstitial cells of Cajal (ICCs) in the smooth muscle of the stomach, small intestine, and large intestine. This electrical rhythm is spread through gap junctions in the smooth muscle of the GI tract. These pacemaker cells, also called the ICCs, control the frequency of contractions in the gastrointestinal tract. The cells can be located in either the circular or longitudinal layer of the smooth muscle in the GI tract; circular for the small and large intestine, longitudinal for the stomach. The frequency of contraction differs at each location in the GI tract beginning with 3 per minute in the stomach, then 12 per minute in the duodenum, 9 per minute in the ileum, and a normally low one contraction per 30 minutes in the large intestines that increases 3 to 4 times a day due to a phenomenon called mass movement. The basal electrical rhythm controls the frequency of contraction but additional neuronal and hormonal controls regulate the strength of each contraction.
Physiology
Smooth muscle within the GI tract causes the involuntary peristaltic motion that moves consumed food down the esophagus and towards the rectum. The smooth muscle throughout most of the GI tract is divided into two layers: an outer longitudinal layer and an inner circular layer. Both layers of muscle are located within the muscularis externa. The stomach has a third layer: an innermost oblique layer.
The physical contractions of the smooth muscle cells can be caused by action potentials in efferent motor neurons of the enteric nervous system, or by receptor mediated calcium influx. These efferent motor neurons of the enteric nervous system are cholinergic and adrenergic neurons. The inner circular layer is innervated by both excitatory and inhibitory motor neurons, while the outer longitudinal layer is innervated by mainly excitato
Document 3:::
The internal anal sphincter (IAS, or sphincter ani internus) is a ring of smooth muscle that surrounds about 2.5–4.0 cm of the anal canal. It is about 5 mm thick and is formed by an aggregation of the smooth (involuntary) circular muscle fibers of the rectum. It terminates distally about 6 mm from the anal orifice.
The internal anal sphincter aids the sphincter ani externus to occlude the anal aperture and aids in the expulsion of the feces. Its action is entirely involuntary. It is normally in a state of continuous maximal contraction to prevent leakage of faeces or gases. Sympathetic stimulation stimulates and maintains the sphincter's contraction, and parasympathetic stimulation inhibits it. It becomes relaxed in response to distention of the rectal ampulla, requiring voluntary contraction of the puborectalis and external anal sphincter to maintain continence.
Anatomy
The internal anal sphincter is the specialised thickened terminal portion of the inner circular layer of smooth muscle of the large intestine. It extends from the pectinate line (anorectal junction) proximally to just proximal to the anal orifice distally (the distal termination is palpable). Its muscle fibres are arranged in a spiral (rather than a circular) manner.
At its distal extremity, it is in contact with but separate from the external anal sphincter.
Innervation
The sphincter receives extrinsic autonomic innervation via the inferior hypogastric plexus, with sympathetic innervation derived from spinal levels L1-L2, and parasympathetic innervation derived from S2-S4.
The internal anal sphincter is not innervated by the pudendal nerve (which provides motor and sensory innervation to the external anal sphincter).
Function
The sphincter is contracted in its resting state, but reflexively relaxes in certain contexts (most notably during defecation).
Transient relaxation of its proximal portion occurs with rectal distension and post-prandial rectal contraction (the recto-anal inhibitory
Document 4:::
Gastrointestinal physiology is the branch of human physiology that addresses the physical function of the gastrointestinal (GI) tract. The function of the GI tract is to process ingested food by mechanical and chemical means, extract nutrients and excrete waste products. The GI tract is composed of the alimentary canal, that runs from the mouth to the anus, as well as the associated glands, chemicals, hormones, and enzymes that assist in digestion. The major processes that occur in the GI tract are: motility, secretion, regulation, digestion and circulation. The proper function and coordination of these processes are vital for maintaining good health by providing for the effective digestion and uptake of nutrients.
Motility
The gastrointestinal tract generates motility using smooth muscle subunits linked by gap junctions. These subunits fire spontaneously in either a tonic or a phasic fashion. Tonic contractions are those contractions that are maintained from several minutes up to hours at a time. These occur in the sphincters of the tract, as well as in the anterior stomach. The other type of contractions, called phasic contractions, consist of brief periods of both relaxation and contraction, occurring in the posterior stomach and the small intestine, and are carried out by the muscularis externa.
Motility may be overactive (hypermotility), leading to diarrhea or vomiting, or underactive (hypomotility), leading to constipation or vomiting; either may cause abdominal pain.
Stimulation
The stimulation for these contractions likely originates in modified smooth muscle cells called interstitial cells of Cajal. These cells cause spontaneous cycles of slow wave potentials that can cause action potentials in smooth muscle cells. They are associated with the contractile smooth muscle via gap junctions. These slow wave potentials must reach a threshold level for the action potential to occur, whereupon Ca2+ channels on the smooth muscle open and an action potential
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
During what in the small intestine do rings of smooth muscle repeatedly contract and then relax?
A. mitosis
B. contraction
C. segmentation
D. compression
Answer:
|
|
sciq-1188
|
multiple_choice
|
The intermolecular structure of what has spaces that are not present in liquid water?
|
[
"ice",
"vapor",
"distillate",
"condensation"
] |
A
|
Relavent Documents:
Document 0:::
A micellar cubic phase is a lyotropic liquid crystal phase formed when the concentration of micelles dispersed in a solvent (usually water) is sufficiently high that they are forced to pack into a structure having a long-ranged positional (translational) order. For example, spherical micelles may adopt a cubic packing such as a body-centered cubic lattice. Normal topology micellar cubic phases, denoted by the symbol I1, are the first lyotropic liquid crystalline phases that are formed by type I amphiphiles. The amphiphiles' hydrocarbon tails are contained on the inside of the micelle and hence the polar-apolar interface of the aggregates has a positive mean curvature, by definition (it curves away from the polar phase). The first pure surfactant system found to exhibit three different type I (oil-in-water) micellar cubic phases was observed in the dodecaoxyethylene mono-n-dodecyl ether (C12EO12)/water system.
Inverse topology micellar cubic phases (such as the Fd3m phase) are observed for some type II amphiphiles at very high amphiphile concentrations. These aggregates, in which water is the minority phase, have a polar-apolar interface with a negative mean curvature. The structures of the normal topology micellar cubic phases that are formed by some types of amphiphiles (e.g. the oligoethyleneoxide monoalkyl ether series of non-ionic surfactants) are the subject of debate. Micellar cubic phases are isotropic phases but are distinguished from micellar solutions by their very high viscosity. When thin film samples of micellar cubic phases are viewed under a polarising microscope they appear dark and featureless. Small air bubbles trapped in these preparations tend to appear highly distorted and occasionally have faceted surfaces. A reversed micellar cubic phase has been observed, although it is much less common. It was observed that a reverse micellar cubic phase with Fd3m (Q227) symmetry formed in a ternary system of an amphiphilic diblock copolymer (EO17BO10, where EO represents
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Polymorphism in biophysics is the ability of lipids to aggregate in a variety of ways, giving rise to structures of different shapes, known as "phases". This can be in the form of spheres of lipid molecules (micelles), pairs of layers that face one another (lamellar phase, observed in biological systems as a lipid bilayer), a tubular arrangement (hexagonal), or various cubic phases (Fdm, Imm, Iam, Pnm, and Pmm being those discovered so far). More complicated aggregations have also been observed, such as rhombohedral, tetragonal and orthorhombic phases.
It forms an important part of current academic research in the fields of membrane biophysics (polymorphism), biochemistry (biological impact) and organic chemistry (synthesis).
Determination of the topology of a lipid system is possible by a number of methods, the most reliable of which is x-ray diffraction. This uses a beam of x-rays that are scattered by the sample, giving a diffraction pattern as a set of rings. The ratio of the distances of these rings from the central point indicates which phase(s) are present.
The structural phase of the aggregation is influenced by the ratio of lipids present, temperature, hydration, pressure and ionic strength (and type).
Hexagonal phases
In lipid polymorphism, if the packing ratio of lipids is greater or less than one, lipid membranes can form two separate hexagonal phases, or nonlamellar phases, in which long, tubular aggregates form according to the environment in which the lipid is introduced.
Hexagonal I phase (HI)
This phase is favored in detergent-in-water solutions and has a packing ratio of less than one. The micellar population in a detergent/water mixture cannot increase without limit as the detergent to water ratio increases. In the presence of low amounts of water, lipids that would normally form micelles will form larger aggregates in the form of micellar tubules in order to satisfy the requirements of the hydrophobic effect. These aggregates can be t
Document 3:::
Interface and colloid science is an interdisciplinary intersection of branches of chemistry, physics, nanoscience and other fields dealing with colloids, heterogeneous systems consisting of a mechanical mixture of particles between 1 nm and 1000 nm dispersed in a continuous medium. A colloidal solution is a heterogeneous mixture in which the particle size of the substance is intermediate between a true solution and a suspension, i.e. between 1–1000 nm. Smoke from a fire is an example of a colloidal system in which tiny particles of solid float in air. Just like true solutions, colloidal particles are small and cannot be seen by the naked eye. They easily pass through filter paper. But colloidal particles are big enough to be blocked by parchment paper or animal membrane.
Interface and colloid science has applications and ramifications in the chemical industry, pharmaceuticals, biotechnology, ceramics, minerals, nanotechnology, and microfluidics, among others.
There are many books dedicated to this scientific discipline, and there is a glossary of terms, Nomenclature in Dispersion Science and Technology, published by the US National Institute of Standards and Technology.
See also
Interface (matter)
Electrokinetic phenomena
Surface science
Document 4:::
A two-dimensional liquid (2D liquid) is a collection of objects constrained to move in a planar or other two-dimensional space in a liquid state.
Relations with 3D liquids
The movement of the particles in a 2D liquid is similar to that in 3D, but with limited degrees of freedom. For example, rotational motion can be limited to rotation about only one axis, in contrast to a 3D liquid, where rotation of molecules about two or three axes would be possible.
The same is true for translational motion. The particles in a 2D liquid can move in a 2D plane, whereas the particles in a 3D liquid can move in three directions inside the 3D volume.
Vibrational motion is in most cases not constrained in comparison to 3D.
The relations with other states of aggregation (see below) are also analogous in 2D and 3D.
Relation to other states of aggregation
2D liquids are related to 2D gases. If the density of a 2D liquid is decreased, a 2D gas is formed. This was observed by scanning tunnelling microscopy under ultra-high vacuum (UHV) conditions for molecular adsorbates.
2D liquids are related to 2D solids. If the density of a 2D liquid is increased, the rotational degree of freedom is frozen and a 2D solid is created.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The intermolecular structure of what has spaces that are not present in liquid water?
A. ice
B. vapor
C. distillate
D. condensation
Answer:
|
|
sciq-9127
|
multiple_choice
|
What is the most useful quantity for counting particles?
|
[
"the vector",
"the mole",
"the periodic table",
"the coefficient"
] |
B
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
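Concretely, a knowledge space can be represented as a family of feasible subsets of the skill domain; one of its defining properties is closure under union. A minimal Python sketch with a made-up three-skill domain in which skill b presupposes skill a:

from itertools import combinations

def is_union_closed(states):
    # A knowledge space must contain the union of any two feasible states.
    return all((a | b) in states for a, b in combinations(states, 2))

states = {frozenset(), frozenset("a"), frozenset("c"),
          frozenset("ac"), frozenset("ab"), frozenset("abc")}
print(is_union_closed(states))  # True -> a valid knowledge space for this toy domain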
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
Document 2:::
In physical chemistry, there are numerous quantities associated with chemical compounds and reactions; notably in terms of amounts of substance, activity or concentration of a substance, and the rate of reaction. This article uses SI units.
Introduction
Theoretical chemistry requires quantities from core physics, such as time, volume, temperature, and pressure. But the highly quantitative nature of physical chemistry, in a more specialized way than core physics, uses molar amounts of substance rather than simply counting numbers; this leads to the specialized definitions in this article. Core physics itself rarely uses the mole, except in areas overlapping thermodynamics and chemistry.
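Since molar amounts are central here, a quick illustration of how the mole links mass to particle counts (water as the worked example; molar mass rounded):

AVOGADRO = 6.02214076e23  # particles per mole (exact by the 2019 SI definition)

def particle_count(mass_g, molar_mass_g_per_mol):
    # n = m / M, then N = n * N_A
    return (mass_g / molar_mass_g_per_mol) * AVOGADRO

print(f"{particle_count(36.0, 18.015):.2e} molecules")  # ~2 mol of water, about 1.2e24 molecules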
Notes on nomenclature
Entity refers to the type of particle/s in question, such as atoms, molecules, complexes, radicals, ions, electrons etc.
Conventionally for concentrations and activities, square brackets [ ] are used around the chemical molecular formula. For an arbitrary atom, generic letters in upright non-bold typeface such as A, B, R, X or Y etc. are often used.
No standard symbols are used for the following quantities, as specifically applied to a substance:
the mass of a substance m,
the number of moles of the substance n,
partial pressure of a gas in a gaseous mixture p (or P),
some form of energy of a substance (for chemistry enthalpy H is common),
entropy of a substance S
the electronegativity of an atom or chemical bond χ.
Usually the symbol for the quantity with a subscript of some reference to the quantity is used, or the quantity is written with the reference to the chemical in round brackets. For example, the mass of water might be written in subscripts as mH2O, mwater, maq, mw (if clear from context) etc., or simply as m(H2O). Another example could be the electronegativity of the fluorine-fluorine covalent bond, which might be written with subscripts χF-F, χFF or χF-F etc., or brackets χ(F-F), χ(FF) etc.
Neither is standard. For the purpose of this a
Document 3:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 4:::
The SAT Subject Test in Biology was the name of a one-hour multiple-choice test on biology given by the College Board. A student chose whether to take the test depending upon the college entrance requirements of the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; from 1995 until January 2005, they were known as SAT IIs. Of all SAT Subject Tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between an ecological and a molecular emphasis. A set of 60 questions was taken by all Biology test takers, and a further 20 questions were drawn from either the E or the M section. The test was graded on a scale between 200 and 800. The average score for the Molecular test was 630, while the average for the Ecological test was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions caused by the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the most useful quantity for counting particles?
A. the vector
B. the mole
C. the periodic table
D. the coefficient
Answer:
|
|
sciq-1961
|
multiple_choice
|
What law states that energy cannot be created or destroyed?
|
[
"conservation of energy",
"conservation of force",
"difference of energy",
"deposit of energy"
] |
A
|
Relavent Documents:
Document 0:::
In physics and chemistry, the law of conservation of energy states that the total energy of an isolated system remains constant; it is said to be conserved over time. Energy can neither be created nor destroyed; rather, it can only be transformed or transferred from one form to another. For instance, chemical energy is converted to kinetic energy when a stick of dynamite explodes. If one adds up all forms of energy that were released in the explosion, such as the kinetic energy and potential energy of the pieces, as well as heat and sound, one will get the exact decrease of chemical energy in the combustion of the dynamite.
Classically, conservation of energy was distinct from conservation of mass. However, special relativity shows that mass is related to energy and vice versa by E = mc², the equation representing mass–energy equivalence, and science now takes the view that mass–energy as a whole is conserved. Theoretically, this implies that any object with mass can itself be converted to pure energy, and vice versa. However, this is believed to be possible only under the most extreme of physical conditions, such as likely existed in the universe very shortly after the Big Bang or when black holes emit Hawking radiation.
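As a numerical illustration of that equivalence (a back-of-the-envelope figure, not a claim about any practical conversion process):

C = 2.99792458e8  # speed of light in m/s (exact)

def rest_energy(mass_kg):
    # E = m * c**2
    return mass_kg * C**2

# One gram of mass corresponds to ~9e13 J, roughly a gigawatt plant's output over a day.
print(f"{rest_energy(1e-3):.2e} J")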
Given the stationary-action principle, conservation of energy can be rigorously proven by Noether's theorem as a consequence of continuous time translation symmetry; that is, from the fact that the laws of physics do not change over time.
A consequence of the law of conservation of energy is that a perpetual motion machine of the first kind cannot exist; that is to say, no system without an external energy supply can deliver an unlimited amount of energy to its surroundings. Depending on the definition of energy, conservation of energy can arguably be violated by general relativity on the cosmological scale.
History
Ancient philosophers as far back as Thales of Miletus 550 BCE had inklings of the conservation of some underlying substance of which ev
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
In physics and chemistry, the law of conservation of mass or principle of mass conservation states that for any system closed to all transfers of matter and energy, the mass of the system must remain constant over time, as the system's mass cannot change, so the quantity can neither be added nor be removed. Therefore, the quantity of mass is conserved over time.
The law implies that mass can neither be created nor destroyed, although it may be rearranged in space, or the entities associated with it may be changed in form. For example, in chemical reactions, the mass of the chemical components before the reaction is equal to the mass of the components after the reaction. Thus, during any chemical reaction and low-energy thermodynamic processes in an isolated system, the total mass of the reactants, or starting materials, must be equal to the mass of the products.
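The bookkeeping behind that statement can be illustrated with a molar-mass check on methane combustion (standard atomic masses, rounded, so the totals agree to within rounding):

M = {"CH4": 16.04, "O2": 32.00, "CO2": 44.01, "H2O": 18.02}  # g/mol, rounded

# CH4 + 2 O2 -> CO2 + 2 H2O
reactants = M["CH4"] + 2 * M["O2"]   # 80.04 g per mole of methane burned
products  = M["CO2"] + 2 * M["H2O"]  # 80.05 g; the 0.01 g gap is rounding, not lost mass
print(reactants, products)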
The concept of mass conservation is widely used in many fields such as chemistry, mechanics, and fluid dynamics. Historically, mass conservation in chemical reactions was primarily demonstrated in the 17th century and finally confirmed by Antoine Lavoisier in the late 18th century. The formulation of this law was of crucial importance in the progress from alchemy to the modern natural science of chemistry.
In reality, the conservation of mass only holds approximately and is considered part of a series of assumptions in classical mechanics. The law has to be modified to comply with the laws of quantum mechanics and special relativity under the principle of mass–energy equivalence, which states that energy and mass form one conserved quantity. For very energetic systems the conservation of mass only is shown not to hold, as is the case in nuclear reactions and particle-antiparticle annihilation in particle physics.
Mass is also not generally conserved in open systems. Such is the case when various forms of energy and matter are allowed into, or out of, the system. However, unless radioactivity or nuclear r
Document 3:::
Energy transformation, also known as energy conversion, is the process of changing energy from one form to another. In physics, energy is a quantity that provides the capacity to perform work or moving (e.g. lifting an object) or provides heat. In addition to being converted, according to the law of conservation of energy, energy is transferable to a different location or object, but it cannot be created or destroyed.
The energy in many of its forms may be used in natural processes, or to provide some service to society such as heating, refrigeration, lighting or performing mechanical work to operate machines. For example, to heat a home, the furnace burns fuel, whose chemical potential energy is converted into thermal energy, which is then transferred to the home's air to raise its temperature.
Limitations in the conversion of thermal energy
Conversions to thermal energy from other forms of energy may occur with 100% efficiency. Conversion among non-thermal forms of energy may occur with fairly high efficiency, though there is always some energy dissipated thermally due to friction and similar processes. Sometimes the efficiency is close to 100%, such as when potential energy is converted to kinetic energy as an object falls in a vacuum. This also applies to the opposite case; for example, an object in an elliptical orbit around another body converts its kinetic energy (speed) into gravitational potential energy (distance from the other object) as it moves away from its parent body. When it reaches the furthest point, it will reverse the process, accelerating and converting potential energy into kinetic. Since space is a near-vacuum, this process has close to 100% efficiency.
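For the frictionless fall mentioned above, the near-total conversion follows from equating the potential and kinetic energies, m·g·h = ½·m·v². A short numerical check:

import math

G = 9.81  # m/s^2, standard gravity

def impact_speed(height_m):
    # m*g*h = 0.5*m*v**2  =>  v = sqrt(2*g*h); the mass cancels out.
    return math.sqrt(2 * G * height_m)

print(f"{impact_speed(10.0):.1f} m/s")  # a 10 m drop in vacuum gives ~14 m/s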
Thermal energy is unique because, in most cases, it cannot be converted to other forms of energy. Only a difference in the density of thermal/heat energy (temperature) can be used to perform work, and the efficiency of this conversion will be (much) less than 100%. This is because t
Document 4:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What law states that energy cannot be created or destroyed?
A. conservation of energy
B. conservation of force
C. difference of energy
D. deposit of energy
Answer:
|
|
sciq-7879
|
multiple_choice
|
What is the term for the small particles that rocks are worn down to by water and wind?
|
[
"organisms",
"pebbles",
"sediments",
"fragments"
] |
C
|
Relavent Documents:
Document 0:::
In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects.
Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting.
Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete.
Study
Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the
Document 1:::
Sediment transport is the movement of solid particles (sediment), typically due to a combination of gravity acting on the sediment, and the movement of the fluid in which the sediment is entrained. Sediment transport occurs in natural systems where the particles are clastic rocks (sand, gravel, boulders, etc.), mud, or clay; the fluid is air, water, or ice; and the force of gravity acts to move the particles along the sloping surface on which they are resting. Sediment transport due to fluid motion occurs in rivers, oceans, lakes, seas, and other bodies of water due to currents and tides. Transport is also caused by glaciers as they flow, and on terrestrial surfaces under the influence of wind. Sediment transport due only to gravity can occur on sloping surfaces in general, including hillslopes, scarps, cliffs, and the continental shelf—continental slope boundary.
Sediment transport is important in the fields of sedimentary geology, geomorphology, civil engineering, hydraulic engineering and environmental engineering (see applications, below). Knowledge of sediment transport is most often used to determine whether erosion or deposition will occur, the magnitude of this erosion or deposition, and the time and distance over which it will occur.
Mechanisms
Aeolian
Aeolian or eolian (depending on the parsing of æ) is the term for sediment transport by wind. This process results in the formation of ripples and sand dunes. Typically, the size of the transported sediment is fine sand (<1 mm) and smaller, because air is a fluid with low density and viscosity, and can therefore not exert very much shear on its bed.
Bedforms are generated by aeolian sediment transport in the terrestrial near-surface environment. Ripples and dunes form as a natural self-organizing response to sediment transport.
Aeolian sediment transport is common on beaches and in the arid regions of the world, because it is in these environments that vegetation does not prevent the presence and motion
Document 2:::
Debris (, ) is rubble, wreckage, ruins, litter and discarded garbage/refuse/trash, scattered remains of something destroyed, or, as in geology, large rock fragments left by a melting glacier, etc. Depending on context, debris can refer to a number of different things. The first apparent use of the French word in English is in a 1701 description of the army of Prince Rupert upon its retreat from a battle with the army of Oliver Cromwell, in England.
Disaster
In disaster scenarios, tornadoes leave behind large pieces of houses and widespread destruction. This debris also flies around the tornado itself while it is in progress: the tornado's winds capture the debris they kick up and spin it inside the vortex, and the tornado's wind radius is larger than the funnel itself. Tsunamis and hurricanes also bring large amounts of debris, as Hurricane Katrina in 2005 and Hurricane Sandy in 2012 did. Earthquakes reduce cities to rubble.
Geological
In geology, debris usually applies to the remains of geological activity including landslides, volcanic explosions, avalanches, mudflows or Glacial lake outburst floods (Jökulhlaups) and moraine, lahars, and lava eruptions. Geological debris sometimes moves in a stream called a debris flow. When it accumulates at the base of hillsides, it can be called "talus" or "scree".
In mining, debris called attle usually consists of rock fragments which contain little or no ore.
Marine
Marine debris applies to floating garbage such as bottles, cans, styrofoam, cruise ship waste, offshore oil and gas exploration and production facilities pollution, and fishing paraphernalia from professional and recreational boaters. Marine debris is also called litter or flotsam and jetsam. Objects that can constitute marine debris include used automobile tires, detergent bottles, medical wastes, discarded fishing line and nets, soda cans, and bilge waste solids.
In addition to being unsightly, it can pose a serious threat to marine lif
Document 3:::
The Physics of Blown Sand and Desert Dunes is a scientific book written by Ralph A. Bagnold. The book laid the foundations of the scientific investigation of the transport of sand by wind. It also discusses the formation and movement of sand dunes in the Libyan Desert. During his expeditions into the Libyan Desert, Bagnold had been fascinated by the shapes of the sand dunes, and after returning to England he built a wind tunnel and conducted the experiments which are the basis of the book.
Bagnold finished writing the book in 1939, and it was first published on 26 June 1941. A reprinted version, with minor revisions by Bagnold, was published by Chapman and Hall in 1953, and reprinted again in 1971. The book was reissued by Dover Publications in 2005.
The book explores the movement of sand in desert environments, with a particular emphasis on how wind affects the formation and movement of dunes and ripples. Bagnold's interest in this subject was spurred by his extensive desert expeditions, during which he observed various sand storms. One pivotal observation was that the movement of sand, unlike that of dust, predominantly occurs near the ground, within a height of one metre, and was less influenced by large-scale eddy currents in the air.
The book emphasises the feasibility of replicating these natural phenomena under controlled conditions in a laboratory. By using a wind tunnel, Bagnold sought to gain a deeper understanding of the physics governing the interaction between airstreams and sand grains, and vice versa. His aim was to ensure that findings from controlled experiments mirrored real-world conditions, with verifications of these laboratory results conducted through field observations in the Libyan Desert in the late 1930s.
Bagnold delineates his research into two distinct stages. The first, which constitutes the primary focus of the book, investigates the dynamics of sand movement across mostly flat terrains. This includes understanding how sand is l
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for the small particles that rocks are worn down to by water and wind?
A. organisms
B. pebbles
C. sediments
D. fragments
Answer:
|
|
sciq-9592
|
multiple_choice
|
What are antacids comprised of?
|
[
"solids",
"gases",
"acids",
"bases"
] |
D
|
Relavent Documents:
Document 0:::
Phytochemistry is the study of phytochemicals, which are chemicals derived from plants. Phytochemists strive to describe the structures of the large number of secondary metabolites found in plants, the functions of these compounds in human and plant biology, and the biosynthesis of these compounds. Plants synthesize phytochemicals for many reasons, including to protect themselves against insect attacks and plant diseases. The compounds found in plants are of many kinds, but most can be grouped into four major biosynthetic classes: alkaloids, phenylpropanoids, polyketides, and terpenoids.
Phytochemistry can be considered a subfield of botany or chemistry. Activities can be led in botanical gardens or in the wild with the aid of ethnobotany. Phytochemical studies directed toward human (i.e. drug discovery) use may fall under the discipline of pharmacognosy, whereas phytochemical studies focused on the ecological functions and evolution of phytochemicals likely fall under the discipline of chemical ecology. Phytochemistry also has relevance to the field of plant physiology.
Techniques
Techniques commonly used in the field of phytochemistry are extraction, isolation, and structural elucidation (MS, 1D and 2D NMR) of natural products, as well as various chromatography techniques (MPLC, HPLC, and LC-MS).
Phytochemicals
Many plants produce chemical compounds for defence against herbivores. The major classes of pharmacologically active phytochemicals are described below, with examples of medicinal plants that contain them. Human settlements are often surrounded by weeds containing phytochemicals, such as nettle, dandelion and chickweed.
Many phytochemicals, including curcumin, epigallocatechin gallate, genistein, and resveratrol are pan-assay interference compounds and are not useful in drug discovery.
Alkaloids
Alkaloids are bitter-tasting chemicals, widespread in nature, and often toxic. There are several classes with different modes of action as drugs, both recre
Document 1:::
An active ingredient is any ingredient that provides biologically active or other direct effect in the diagnosis, cure, mitigation, treatment, or prevention of disease or to affect the structure or any function of the body of humans or animals.
The similar terms active pharmaceutical ingredient (abbreviated as API) and bulk active are also used in medicine. The term active substance may be used for natural products.
Some medication products can contain more than one active ingredient. The traditional word for the active pharmaceutical agent is pharmacon or pharmakon (from , adapted from pharmacos) which originally denoted a magical substance or drug.
The terms active constituent or active principle are often chosen when referring to the active substance of interest in a plant (such as salicylic acid in willow bark or arecoline in areca nuts), since the word "ingredient" can be taken to connote a sense of human agency (that is, something that a person combines with other substances), whereas the natural products present in plants were not added by any human agency but rather occurred naturally ("a plant doesn't have ingredients").
In contrast with the active ingredients, the inactive ingredients are usually called excipients in pharmaceutical contexts. The main excipient that serves as a medium for conveying the active ingredient is usually called the vehicle. For example, petrolatum and mineral oil are common vehicles. The term 'inactive' should not, however, be misconstrued as meaning inert.
Pharmaceuticals
The dosage form for a pharmaceutical contains the active pharmaceutical ingredient, which is the drug substance itself, and excipients, which are the ingredients of the tablet, or the liquid in which the active agent is suspended, or other material that is pharmaceutically inert. Drugs are chosen primarily for their active ingredients. During formulation development, the excipients are chosen carefully so that the active ingredient can reach the target si
Document 2:::
C
Cadaverine
Caffeine
Calciferol (Vitamin D)
Calcitonin
Calmodulin
Calreticulin
Camphor - (C10H16O)
Cannabinol - (C21H26O2)
Capsaicin
Carbohydrase
Carbohydrate
Carnitine
Carrageenan
Carotinoid
Casein
Caspase
Catecholamine
Cellulase
Cellulose - (C6H10O5)x
Cerulenin
Cetrimonium bromide (Cetrimide) - C19H42BrN
Chelerythrine
Chromomycin A3
Chaperonin
Chitin
α-Chloralose
Chlorophyll
Cholecystokinin (CCK)
Cholesterol
Choline
Chondroitin sulfate
Cinnamaldehyde
Citral
Citric acid
Citrinin
Citronellal
Citronellol
Citrulline
Cobalamin (vitamin B12)
Coenzyme
Coenzyme Q
Colchicine
Collagen
Coniine
Corticosteroid
Corti
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 4:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. An interest sparked at primary school can lead secondary school pupils to choose science A levels, which in turn can lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are antacids comprised of?
A. solids
B. gases
C. acids
D. bases
Answer:
|
|
sciq-7895
|
multiple_choice
|
The fetal stage begins about how long after fertilization?
|
[
"three months",
"four months",
"two months",
"one month"
] |
C
|
Relavent Documents:
Document 0:::
A fetus or foetus (; : fetuses, feti, foetuses, or foeti) is the unborn offspring that develops from an animal embryo. Following embryonic development the fetal stage of development takes place. In human prenatal development, fetal development begins from the ninth week after fertilization (or eleventh week gestational age) and continues until birth. Prenatal development is a continuum, with no clear defining feature distinguishing an embryo from a fetus. However, a fetus is characterized by the presence of all the major body organs, though they will not yet be fully developed and functional and some not yet situated in their final anatomical location.
Etymology
The word fetus (plural fetuses or feti) is related to the Latin fētus ("offspring", "bringing forth", "hatching of young") and the Greek "φυτώ" ("to plant"). The word "fetus" was used by Ovid in Metamorphoses, book 1, line 104.
The predominant British, Irish, and Commonwealth spelling is foetus, which has been in use since at least 1594. The spelling with -oe- arose in Late Latin, in which the distinction between the vowel sounds -oe- and -e- had been lost. This spelling is the most common in most Commonwealth nations, except in the medical literature, where the fetus is used. The more classical spelling fetus is used in Canada and the United States. In addition, fetus is now the standard English spelling throughout the world in medical journals. The spelling faetus was also used historically.
Development in humans
Weeks 9 to 16 (2 to 3.6 months)
In humans, the fetal stage starts nine weeks after fertilization. At the start of the fetal stage, the fetus is typically about in length from crown-rump, and weighs about 8 grams. The head makes up nearly half of the size of the fetus. Breathing-like movements of the fetus are necessary for the stimulation of lung development, rather than for obtaining oxygen. The heart, hands, feet, brain, and other organs are present, but are only at the beginning of developme
Document 1:::
In obstetrics, gestational age is a measure of the age of a pregnancy taken from the beginning of the woman's last menstrual period (LMP), or the corresponding age of the gestation as estimated by a more accurate method, if available. Such methods include adding 14 days to a known duration since fertilization (as is possible in in vitro fertilization), or by obstetric ultrasonography. The popularity of using this measure of pregnancy is largely due to convenience: menstruation is usually noticed, while there is generally no convenient way to discern when fertilization or implantation occurred.
Gestational age is contrasted with fertilization age, which takes the date of fertilization as the start date of gestation; there are different approaches to defining the start of a pregnancy. The gestational-age convention is unusual in that it counts a woman as "pregnant" roughly two weeks before fertilization can have occurred. The definition of pregnancy and the calculation of gestational age are also relevant in the context of the abortion debate and the beginning of human personhood.
Methods
According to American College of Obstetricians and Gynecologists, the main methods to calculate gestational age are:
Directly calculating the days since the beginning of the last menstrual period
Early obstetric ultrasound, comparing the size of an embryo or fetus to that of a reference group of pregnancies of known gestational age (such as calculated from last menstrual periods) and using the mean gestational age of other embryos or fetuses of the same size. If the gestational age as calculated from an early ultrasound is contradictory to the one calculated directly from the last menstrual period, it is still the one from the early ultrasound that is used for the rest of the pregnancy.
In case of in vitro fertilization, calculating days since oocyte retrieval or co-incubation and adding 14 days.
Gestational age can also be estimated by calculating days from ovulation if it was estimated from related signs or ovulati
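The date arithmetic behind the methods listed above is straightforward; the following is a minimal Python sketch (the function names, example dates, and weeks-plus-days formatting are illustrative assumptions, not part of any clinical standard quoted here):

from datetime import date

def gestational_age_from_lmp(lmp: date, today: date) -> int:
    # Days of gestation counted from the first day of the last menstrual period.
    return (today - lmp).days

def gestational_age_from_ivf(oocyte_retrieval: date, today: date) -> int:
    # IVF convention described above: days since oocyte retrieval plus 14.
    return (today - oocyte_retrieval).days + 14

# Example: report in completed weeks plus days, the usual clinical format.
days = gestational_age_from_lmp(date(2023, 1, 1), date(2023, 3, 15))
print(f"{days // 7} weeks, {days % 7} days")  # -> 10 weeks, 3 days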
Document 2:::
Prenatal development () includes the development of the embryo and of the fetus during a viviparous animal's gestation. Prenatal development starts with fertilization, in the germinal stage of embryonic development, and continues in fetal development until birth.
In human pregnancy, prenatal development is also called antenatal development. The development of the human embryo follows fertilization, and continues as fetal development. By the end of the tenth week of gestational age the embryo has acquired its basic form and is referred to as a fetus. The next period is that of fetal development where many organs become fully developed. This fetal period is described both topically (by organ) and chronologically (by time) with major occurrences being listed by gestational age.
The very early stages of embryonic development are the same in all mammals, but later stages of development, and the length of gestation varies.
Terminology
In the human:
Different terms are used to describe prenatal development, meaning development before birth. A term with the same meaning is the "antepartum" (from Latin ante "before" and parere "to give birth") Sometimes "antepartum" is however used to denote the period between the 24th/26th week of gestational age until birth, for example in antepartum hemorrhage.
The perinatal period (from Greek peri, "about, around" and Latin nasci "to be born") is "around the time of birth". In developed countries and at facilities where expert neonatal care is available, it is considered from 22 completed weeks (usually about 154 days) of gestation (the time when birth weight is normally 500 g) to 7 completed days after birth. In many of the developing countries the starting point of this period is considered 28 completed weeks of gestation (or weight more than 1000 g).
Fertilization
Fertilization marks the first germinal stage of embryonic development. When semen is released into the vagina, the spermatozoa travel through the cervix, along the bo
Document 3:::
The fetal pole is a thickening on the margin of the yolk sac of a fetus during pregnancy.
It is usually identified at six weeks with vaginal ultrasound and at six and a half weeks with abdominal ultrasound. However, it is not unheard of for the fetal pole to not be visible until about 9 weeks. The fetal pole may be seen at 2–4 mm crown-rump length (CRL).
Document 4:::
Prenatal perception is the study of the extent of somatosensory and other types of perception during pregnancy. In practical terms, this means the study of fetuses; none of the accepted indicators of perception are present in embryos. Studies in the field inform the abortion debate, along with certain related pieces of legislation in countries affected by that debate. As of 2022, there is no scientific consensus on whether a fetus can feel pain.
Prenatal hearing
Numerous studies have found evidence indicating a fetus's ability to respond to auditory stimuli. The earliest fetal response to a sound stimulus has been observed at 16 weeks' gestational age, while the auditory system is fully functional at 25–29 weeks' gestation. At 33–41 weeks' gestation, the fetus is able to distinguish its mother's voice from others.
Prenatal pain
The hypothesis that human fetuses are capable of perceiving pain in the first trimester has little support, although fetuses at 14 weeks may respond to touch. A multidisciplinary systematic review from 2005 found limited evidence that thalamocortical pathways begin to function "around 29 to 30 weeks' gestational age", only after which a fetus is capable of feeling pain.
In March 2010, the Royal College of Obstetricians and Gynecologists submitted a report, concluding that "Current research shows that the sensory structures are not developed or specialized enough to respond to pain in a fetus of less than 24 weeks".
The report specifically identified the anterior cingulate as the area of the cerebral cortex responsible for pain processing. The anterior cingulate is part of the cerebral cortex, which begins to develop in the fetus at week 26. A co-author of that report revisited the evidence in 2020, specifically the functionality of the thalamic projections into the cortical subplate, and posited "an immediate and unreflective pain experience...from as early as 12 weeks."
There is a consensus among developmental neurobiologists that the
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The fetal stage begins about how long after fertilization?
A. three months
B. four months
C. two months
D. one month
Answer:
|
|
sciq-6235
|
multiple_choice
|
Which type of double bond has a sigma bond and a pi bond?
|
[
"Covalent Bonds",
"carbon-oxygen bond",
"sodium - oxygen bond",
"dioxide - oxygen bond"
] |
B
|
Relavent Documents:
Document 0:::
In chemistry, pi bonds (π bonds) are covalent chemical bonds, in each of which two lobes of an orbital on one atom overlap with two lobes of an orbital on another atom, and in which this overlap occurs laterally. Each of these atomic orbitals has an electron density of zero at a shared nodal plane that passes through the two bonded nuclei. This plane also is a nodal plane for the molecular orbital of the pi bond. Pi bonds can form in double and triple bonds but do not form in single bonds in most cases.
The Greek letter π in their name refers to p orbitals, since the orbital symmetry of the pi bond is the same as that of the p orbital when seen down the bond axis. One common form of this sort of bonding involves p orbitals themselves, though d orbitals also engage in pi bonding. This latter mode forms part of the basis for metal-metal multiple bonding.
Properties
Pi bonds are usually weaker than sigma bonds. The C-C double bond, composed of one sigma and one pi bond, has a bond energy less than twice that of a C-C single bond, indicating that the stability added by the pi bond is less than the stability of a sigma bond. From the perspective of quantum mechanics, this bond's weakness is explained by significantly less overlap between the component p-orbitals due to their parallel orientation. This is contrasted by sigma bonds which form bonding orbitals directly between the nuclei of the bonding atoms, resulting in greater overlap and a strong sigma bond.
Pi bonds result from overlap of atomic orbitals that are in contact through two areas of overlap. Pi bonds are more diffuse bonds than the sigma bonds. Electrons in pi bonds are sometimes referred to as pi electrons. Molecular fragments joined by a pi bond cannot rotate about that bond without breaking the pi bond, because rotation involves destroying the parallel orientation of the constituent p orbitals.
For homonuclear diatomic molecules, bonding π molecular orbitals have only the one nodal plane passing th
Document 1:::
In chemistry, sigma bonds (σ bonds) are the strongest type of covalent chemical bond. They are formed by head-on overlapping between atomic orbitals. Sigma bonding is most simply defined for diatomic molecules using the language and tools of symmetry groups. In this formal approach, a σ-bond is symmetrical with respect to rotation about the bond axis. By this definition, common forms of sigma bonds are s+s, pz+pz, s+pz and dz2+dz2 (where z is defined as the axis of the bond or the internuclear axis).
Quantum theory also indicates that molecular orbitals (MO) of identical symmetry actually mix or hybridize. As a practical consequence of this mixing of diatomic molecules, the wavefunctions s+s and pz+pz molecular orbitals become blended. The extent of this mixing (or hybridization or blending) depends on the relative energies of the MOs of like symmetry.
For homodiatomics (homonuclear diatomic molecules), bonding σ orbitals have no nodal planes at which the wavefunction is zero, either between the bonded atoms or passing through the bonded atoms. The corresponding antibonding, or σ* orbital, is defined by the presence of one nodal plane between the two bonded atoms.
Sigma bonds are the strongest type of covalent bonds due to the direct overlap of orbitals, and the electrons in these bonds are sometimes referred to as sigma electrons.
The symbol σ is the Greek letter sigma. When viewed down the bond axis, a σ MO has a circular symmetry, hence resembling a similarly sounding "s" atomic orbital.
Typically, a single bond is a sigma bond while a multiple bond is composed of one sigma bond together with pi or other bonds. A double bond has one sigma plus one pi bond, and a triple bond has one sigma plus two pi bonds.
Polyatomic molecules
Sigma bonds are obtained by head-on overlapping of atomic orbitals. The concept of sigma bonding is extended to describe bonding interactions involving overlap of a single lobe of one orbital with a single lobe of another. For example,
Document 2:::
In chemistry, a single bond is a chemical bond between two atoms involving two valence electrons. That is, the atoms share one pair of electrons where the bond forms. Therefore, a single bond is a type of covalent bond. When shared, each of the two electrons involved is no longer in the sole possession of the orbital in which it originated. Rather, both of the two electrons spend time in either of the orbitals which overlap in the bonding process. As a Lewis structure, a single bond is denoted as AːA or A-A, for which A represents an element. In the first rendition, each dot represents a shared electron, and in the second rendition, the bar represents both of the electrons shared in the single bond.
A covalent bond can also be a double bond or a triple bond. A single bond is weaker than either a double bond or a triple bond. This difference in strength can be explained by examining the component bonds of which each of these types of covalent bonds consists (Moore, Stanitski, and Jurs 393).
Usually, a single bond is a sigma bond. An exception is the bond in diboron, which is a pi bond. In contrast, the double bond consists of one sigma bond and one pi bond, and a triple bond consists of one sigma bond and two pi bonds (Moore, Stanitski, and Jurs 396). The number of component bonds is what determines the strength disparity. It stands to reason that the single bond is the weakest of the three because it consists of only a sigma bond, and the double bond or triple bond consist not only of this type of component bond but also at least one additional bond.
The single bond has the capacity for rotation, a property not possessed by the double bond or the triple bond. The structure of pi bonds does not allow for rotation (at least not at 298 K), so the double bond and the triple bond which contain pi bonds are held due to this property. The sigma bond is not so restrictive, and the single bond is able to rotate using the sigma bond as the axis of rotation (Moore, Stanits
Document 3:::
A triple bond in chemistry is a chemical bond between two atoms involving six bonding electrons instead of the usual two in a covalent single bond. Triple bonds are stronger than the equivalent single bonds or double bonds, with a bond order of three. The most common triple bond is in a nitrogen N2 molecule; the second most common is that between two carbon atoms, which can be found in alkynes. Other functional groups containing a triple bond are cyanides and isocyanides. Some diatomic molecules, such as dinitrogen and carbon monoxide, are also triple bonded. In skeletal formulae the triple bond is drawn as three parallel lines (≡) between the two connected atoms.
Bonding
The types of bonding can be explained in terms of orbital hybridization. In the case of acetylene each carbon atom has two sp-orbitals and two p-orbitals. The two sp-orbitals are linear with 180° angles and occupy the x-axis (cartesian coordinate system). The p-orbitals are perpendicular on the y-axis and the z-axis. When the carbon atoms approach each other, the sp orbitals overlap to form an sp-sp sigma bond. At the same time the pz-orbitals approach and together they form a pz-pz pi-bond. Likewise, the other pair of py-orbitals form a py-py pi-bond. The result is formation of one sigma bond and two pi bonds.
In the bent bond model, the triple bond can also be formed by the overlapping of three sp3 lobes without the need to invoke a pi-bond.
Triple bonds between elements heavier than oxygen
Many elements beyond oxygen can form triple bonds. They are common for transition metals. Hexa(tert-butoxy)ditungsten(III) and Hexa(tert-butoxy)dimolybdenum(III) are well known examples. The M-M distance is about 233 pm. The W2 compound has attracted particular attention for its reactions with alkynes, leading to metal-carbon triple bonded compounds of the formula RC≡W(OBut)3
Document 4:::
In chemistry, a double bond is a covalent bond between two atoms involving four bonding electrons as opposed to two in a single bond. Double bonds occur most commonly between two carbon atoms, for example in alkenes. Many double bonds exist between two different elements: for example, in a carbonyl group between a carbon atom and an oxygen atom. Other common double bonds are found in azo compounds (N=N), imines (C=N), and sulfoxides (S=O). In a skeletal formula, a double bond is drawn as two parallel lines (=) between the two connected atoms; typographically, the equals sign is used for this. Double bonds were introduced in chemical notation by Russian chemist Alexander Butlerov.
Double bonds involving carbon are stronger and shorter than single bonds. The bond order is two. Double bonds are also electron-rich, which makes them potentially more reactive in the presence of a strong electron acceptor (as in addition reactions of the halogens).
Double bonds in alkenes
The type of bonding can be explained in terms of orbital hybridisation. In ethylene each carbon atom has three sp2 orbitals and one p-orbital. The three sp2 orbitals lie in a plane with ~120° angles. The p-orbital is perpendicular to this plane. When the carbon atoms approach each other, two of the sp2 orbitals overlap to form a sigma bond. At the same time, the two p-orbitals approach (again in the same plane) and together they form a pi bond. For maximum overlap, the p-orbitals have to remain parallel, and, therefore, rotation around the central bond is not possible. This property gives rise to cis-trans isomerism. Double bonds are shorter than single bonds because p-orbital overlap is maximized.
With 133 pm, the ethylene C=C bond length is shorter than the C−C length in ethane with 154 pm. The double bond is also stronger, 636 kJ mol−1 versus 368 kJ mol−1 but not twice as much as the pi-bond is weaker than the sigma bond due to less effective pi-overlap.
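A quick check of the "not twice as much" claim, using the bond energies quoted above (a minimal Python sketch; treating the difference as the pi-bond contribution is the usual textbook approximation, not an exact partition):

sigma_cc = 368.0   # C-C single (sigma) bond energy in kJ/mol, as quoted above
double_cc = 636.0  # C=C double bond energy in kJ/mol, as quoted above

pi_contribution = double_cc - sigma_cc  # approximate energy added by the pi bond
print(pi_contribution)                  # 268.0 kJ/mol, noticeably less than 368
print(double_cc < 2 * sigma_cc)         # True: the double bond is less than twice as strong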
In an alternative representation, the doubl
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which type of double bond has a sigma bond and a pi bond?
A. Covalent Bonds
B. carbon-oxygen bond
C. sodium - oxygen bond
D. dioxide - oxygen bond
Answer:
|
|
sciq-144
|
multiple_choice
|
What are made of long chains consisting almost solely of carbon and hydrogen?
|
[
"proteins",
"lipids",
"nucleic acids",
"enzymes"
] |
B
|
Relavent Documents:
Document 0:::
This is a list of topics in molecular biology. See also index of biochemistry articles.
Document 1:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS administers this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99th percentile) and 320 (1st percentile) respectively. The mean score for all test takers from July 2009 to July 2012 was 526 with a standard deviation of 95.
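For orientation only, the quoted percentiles are roughly consistent with a normal approximation built from the reported mean and standard deviation (a hedged sketch; real score distributions are discrete and not exactly normal):

from statistics import NormalDist

scores = NormalDist(mu=526, sigma=95)   # mean and standard deviation reported above
print(round(scores.cdf(760) * 100, 1))  # ~99.3, close to the stated 99th percentile
print(round(scores.cdf(320) * 100, 1))  # ~1.5, close to the stated 1st percentile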
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 2:::
This is a list of articles that describe particular biomolecules or types of biomolecules.
A
For substances with an A- or α- prefix such as
α-amylase, please see the parent page (in this case Amylase).
A23187 (Calcimycin, Calcium Ionophore)
Abamectine
Abietic acid
Acetic acid
Acetylcholine
Actin
Actinomycin D
Adenine
Adenosine
Adenosine diphosphate (ADP)
Adenosine monophosphate (AMP)
Adenosine triphosphate (ATP)
Adenylate cyclase
Adiponectin
Adonitol
Adrenaline, epinephrine
Adrenocorticotropic hormone (ACTH)
Aequorin
Aflatoxin
Agar
Alamethicin
Alanine
Albumins
Aldosterone
Aleurone
Alpha-amanitin
Alpha-MSH (Melaninocyte stimulating hormone)
Allantoin
Allethrin
α-Amanatin, see Alpha-amanitin
Amino acid
Amylase (also see α-amylase)
Anabolic steroid
Anandamide (ANA)
Androgen
Anethole
Angiotensinogen
Anisomycin
Antidiuretic hormone (ADH)
Anti-Müllerian hormone (AMH)
Arabinose
Arginine
Argonaute
Ascomycin
Ascorbic acid (vitamin C)
Asparagine
Aspartic acid
Asymmetric dimethylarginine
ATP synthase
Atrial-natriuretic peptide (ANP)
Auxin
Avidin
Azadirachtin A – C35H44O16
B
Bacteriocin
Beauvericin
beta-Hydroxy beta-methylbutyric acid
beta-Hydroxybutyric acid
Bicuculline
Bilirubin
Biopolymer
Biotin (Vitamin H)
Brefeldin A
Brassinolide
Brucine
Butyric acid
C
Document 3:::
Biochemistry is the study of the chemical processes in living organisms. It deals with the structure and function of cellular components such as proteins, carbohydrates, lipids, nucleic acids and other biomolecules.
Articles related to biochemistry include:
0–9
2-amino-5-phosphonovalerate - 3' end - 5' end
Document 4:::
Biomolecular structure is the intricate folded, three-dimensional shape that is formed by a molecule of protein, DNA, or RNA, and that is important to its function. The structure of these molecules may be considered at any of several length scales ranging from the level of individual atoms to the relationships among entire protein subunits. This useful distinction among scales is often expressed as a decomposition of molecular structure into four levels: primary, secondary, tertiary, and quaternary. The scaffold for this multiscale organization of the molecule arises at the secondary level, where the fundamental structural elements are the molecule's various hydrogen bonds. This leads to several recognizable domains of protein structure and nucleic acid structure, including such secondary-structure features as alpha helixes and beta sheets for proteins, and hairpin loops, bulges, and internal loops for nucleic acids.
The terms primary, secondary, tertiary, and quaternary structure were introduced by Kaj Ulrik Linderstrøm-Lang in his 1951 Lane Medical Lectures at Stanford University.
Primary structure
The primary structure of a biopolymer is the exact specification of its atomic composition and the chemical bonds connecting those atoms (including stereochemistry). For a typical unbranched, un-crosslinked biopolymer (such as a molecule of a typical intracellular protein, or of DNA or RNA), the primary structure is equivalent to specifying the sequence of its monomeric subunits, such as amino acids or nucleotides.
The primary structure of a protein is reported starting from the amino N-terminus to the carboxyl C-terminus, while the primary structure of DNA or RNA molecule is known as the nucleic acid sequence reported from the 5' end to the 3' end.
The nucleic acid sequence refers to the exact sequence of nucleotides that comprise the whole molecule. Often, the primary structure encodes sequence motifs that are of functional importance. Some examples of such motif
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are made of long chains consisting almost solely of carbon and hydrogen?
A. proteins
B. lipids
C. nucleic acids
D. enzymes
Answer:
|
|
scienceQA-2867
|
multiple_choice
|
What do these two changes have in common?
melting wax
picking up a paper clip with a magnet
|
[
"Both are caused by heating.",
"Both are chemical changes.",
"Both are only physical changes.",
"Both are caused by cooling."
] |
C
|
Step 1: Think about each change.
Melting wax is a change of state. So, it is a physical change. The wax changes from solid to liquid. But it is still made of the same type of matter.
Picking up a paper clip with a magnet is a physical change. The paper clip sticks to the magnet, but it is still made of the same type of matter.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are physical changes. No new matter is created.
Both are chemical changes.
Both changes are physical changes. They are not chemical changes.
Both are caused by heating.
Wax melting is caused by heating. But picking up a paper clip with a magnet is not.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relavent Documents:
Document 0:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include :
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering : Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 4:::
Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria.
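The source does not name the exact fitting procedure, but the idea of turning pairwise judgements into a scale can be sketched with a simple Bradley-Terry style fit (a minimal Python sketch under the assumption of a basic minorisation-maximisation update on toy data; real ACJ systems additionally choose the next pair adaptively):

import numpy as np

# wins[i][j] = number of times script i was judged better than script j (toy data)
wins = np.array([[0, 3, 4],
                 [1, 0, 3],
                 [0, 1, 0]], dtype=float)

n = wins + wins.T        # total comparisons between each pair of scripts
p = np.ones(len(wins))   # initial quality parameters, one per script

for _ in range(100):     # simple MM iteration for the Bradley-Terry model
    denom = np.array([sum(n[i, j] / (p[i] + p[j])
                          for j in range(len(p)) if j != i)
                      for i in range(len(p))])
    p = wins.sum(axis=1) / denom
    p /= p.sum()         # normalise so the scale stays fixed

scale = np.log(p)            # log-quality gives an interval-like scale
print(scale - scale.mean())  # centred scores: higher means judged better overall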
Introduction
Traditional exam script marking began in Cambridge in 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.)
Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental.
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
melting wax
picking up a paper clip with a magnet
A. Both are caused by heating.
B. Both are chemical changes.
C. Both are only physical changes.
D. Both are caused by cooling.
Answer:
|
sciq-2040
|
multiple_choice
|
Which feature in maps allows users to make corrections between magnetic north and true north?
|
[
"key",
"scale",
"inset",
"double compass rose"
] |
D
|
Relavent Documents:
Document 0:::
Burt's solar compass or astronomical compass/sun compass is a surveying instrument that makes use of the Sun's direction instead of magnetism. William Austin Burt invented his solar compass in 1835. The solar compass works on the principle that the direction to the Sun at a specified time can be calculated if the position of the observer on the surface of the Earth is known, to a similar precision. The direction can be described in terms of the angle of the Sun relative to the axis of rotation of the planet.
This angle is made up of the angle due to latitude, combined with the angle due to the season, and the angle due to the time of day. These angles are set on the compass for a chosen time of day, the compass base is set up level using the spirit levels provided, and then the sights are aligned with the Sun at the specified time, so the image of the Sun is projected onto the cross grating target. At this point the compass base will be aligned true north–south. It is then locked in place in this alignment, after which the sighting arms can be rotated to align with any landmark or beacon, and the direction can be read off the verniers as an angle relative to true north.
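As a rough illustration of how the three angles combine, here is a hedged Python sketch of the standard spherical-astronomy calculation of the Sun's azimuth from latitude, solar declination (the seasonal angle) and hour angle (the time-of-day angle); it shows the underlying geometry only, not Burt's actual mechanism or its calibration:

import math

def solar_azimuth(latitude_deg, declination_deg, hour_angle_deg):
    # Azimuth of the Sun measured clockwise from true north, in degrees.
    # The hour angle is negative before local solar noon and positive after.
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    ha = math.radians(hour_angle_deg)

    # Solar altitude from the standard relation.
    alt = math.asin(math.sin(lat) * math.sin(dec)
                    + math.cos(lat) * math.cos(dec) * math.cos(ha))

    # Azimuth from north; the sign of the hour angle resolves east versus west.
    cos_az = (math.sin(dec) - math.sin(alt) * math.sin(lat)) / (math.cos(alt) * math.cos(lat))
    az = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    return az if hour_angle_deg <= 0 else 360.0 - az

# Example: 45 degrees north latitude, near an equinox, two hours before solar noon.
print(round(solar_azimuth(45.0, 0.0, -30.0), 1))  # -> about 140.8 (Sun in the south-east)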
This device avoided the problems of the normal magnetic compass used by surveyors, which displayed erratic readings when in a locality of high iron ore content and inconsistent and unknown local magnetic variation. The instrument was found to be so accurate that it was the choice of the United States government when surveying public lands, state boundaries, and railroad routes. It won awards from various organizations and was used by surveyors from the nineteenth into the twentieth century.
History
Burt became a United States deputy surveyor in 1833 and began surveying government land for a territory northwest of the Ohio River. By 1834, he and his surveying crew were surveying territory in the lower peninsula of Michigan. He was surveying land in the upper peninsula of Michigan by 1835 to be use
Document 1:::
Engels Maps is a map company in the Ohio Valley with particular concentration on the Cincinnati-Dayton region. It also produces chamber of commerce maps.
Publications
It has three semi-annual publications that form its foundation:
Cincinnati Engels Guide
Dayton Engels Guide
Indianapolis Engels Guide
Their maps are also found in the Cincinnati Bell Yellow Pages and the Dayton WorkBook.
Corporate history
Engels Maps was founded by Judson Engels in 1994.
Document 2:::
The compass is a magnetometer used for navigation and orientation that shows direction with respect to the geographic cardinal points. The structure of a compass consists of the compass rose, which displays the four main directions on it: East (E), South (S), West (W) and North (N). The angle increases in the clockwise direction. North corresponds to 0°, so east is 90°, south is 180° and west is 270°.
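To make the link between compass bearings and the magnetic-versus-true-north correction concrete, here is a minimal Python sketch of that correction; the declination value in the example is hypothetical, since real declination varies with location and time:

def magnetic_to_true(magnetic_bearing_deg, declination_deg):
    # Convert a magnetic bearing to a true bearing.
    # declination_deg is positive when magnetic north lies east of true north.
    return (magnetic_bearing_deg + declination_deg) % 360.0

# Example: a magnetic bearing of 70 degrees with 10 degrees of easterly declination.
print(magnetic_to_true(70.0, 10.0))  # -> 80.0 degrees relative to true north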
The history of the compass started more than 2000 years ago during the Han dynasty (202 BC – 220 AD). The first compasses were made of lodestone, a naturally magnetized stone of iron, in Han dynasty China. It was called the "South Pointing Fish" and was used for land navigation by the mid-11th century during the Song dynasty (960–1279 AD). Shen Kuo provided the first explicit description of a magnetized needle in 1088 and Zhu Yu mentioned its use in maritime navigation in the text Pingzhou Table Talks, dated 1111–1117. Later compasses were made of iron needles, magnetized by striking them with a lodestone. Magnetized needles and compasses were first described in medieval Europe by the English theologian Alexander Neckam (1157–1217 AD). The first usage of a compass in Western Europe was recorded in around 1190 and in the Islamic world in 1232. Dry compasses began appearing around 1269 in Medieval Europe and 1300 in the Medieval Islamic world. This was replaced in the early 20th century by the liquid-filled magnetic compass.
Navigation prior to the compass
Before the introduction of the compass, geographical position and direction at sea were primarily determined by the sighting of landmarks, supplemented with the observation of the position of celestial bodies. Other techniques included sampling mud from the seafloor (China), analyzing the flight path of birds, and observing wind, sea debris, and sea state (Polynesia and elsewhere). Objects that have been understood as having been used for navigation by measuring the angles between celestial objects, were discovered in th
Document 3:::
Traverse is a method in the field of surveying to establish control networks. It is also used in geodesy. Traverse networks involve placing survey stations along a line or path of travel, and then using the previously surveyed points as a base for observing the next point. Traverse networks have many advantages, including:
Less reconnaissance and organization needed;
Unlike other systems, which may require the survey to be performed along a rigid polygon shape, the traverse can change to any shape and thus can accommodate a great variety of terrain;
Only a few observations need to be taken at each station, whereas in other survey networks many angular and linear observations need to be made and considered;
Traverse networks are free of the strength of figure considerations that happen in triangular systems;
Scale error does not add up as the traverse is performed. Azimuth swing errors can also be reduced by increasing the distance between stations.
The traverse is more accurate than triangulateration (a combined function of the triangulation and trilateration practice).
Types
Frequently in surveying engineering and geodetic science, control points (CPs) are established by setting/observing distance and direction (bearings, angles, azimuths, and elevation). The CPs throughout the control network may consist of monuments, benchmarks, vertical control, etc. There are mainly two types of traverse:
Closed traverse: either originates from a station and returns to the same station completing a circuit, or runs between two known stations
Open traverse: neither returns to its starting station, nor closes on any other known station.
Compound traverse: an open traverse linked at its ends to an existing traverse to form a closed traverse. The closing line may be defined by coordinates at the end points wh
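A minimal Python sketch of the coordinate bookkeeping behind a closed traverse follows (the bearings, distances, and simple misclosure check are illustrative toy values; a real adjustment, for example by the compass/Bowditch rule, would then distribute the error):

import math

# Each leg of a toy closed traverse: (bearing in degrees from north, distance in metres)
legs = [(0.0, 100.0), (90.0, 100.0), (180.0, 100.0), (270.0, 100.0)]

northing, easting = 0.0, 0.0
for bearing, distance in legs:
    northing += distance * math.cos(math.radians(bearing))  # latitude component
    easting += distance * math.sin(math.radians(bearing))   # departure component

# Linear misclosure: how far the computed end point falls from the starting station.
misclosure = math.hypot(northing, easting)
print(round(misclosure, 6))  # ~0.0 for this perfect square; field data would show a small error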
Document 4:::
Land navigation is the discipline of following a route through unfamiliar terrain on foot or by vehicle, using maps with reference to terrain, a compass, and other navigational tools. It is distinguished from travel by traditional groups, such as the Tuareg across the Sahara and the Inuit across the Arctic, who use subtle cues to travel across familiar, yet minimally differentiated terrain.
Land navigation is a core military discipline, which uses courses or routes that are an essential part of military training. Often, these courses are several miles long in rough terrain and are performed under adverse conditions, such as at night or in the rain.
In the late 19th century, land navigation developed into the sport of orienteering. The earliest use of the term 'orienteering' appears to be in 1886. Nordic military garrisons began orienteering competitions in 1895.
United States
In the United States military, land navigation courses are required for the Marine Corps and the Army. Air Force escape and evasion training includes aspects of land navigation. Army Training Circular 3-25.26 is devoted to land navigation.
See also
History of orienteering
Navigation
Piloting
Wayfinding
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which feature in maps allows users to make corrections between magnetic north and true north?
A. key
B. scale
C. inset
D. double compass rose
Answer:
|
|
sciq-2465
|
multiple_choice
|
The bones of the newborn skull are not fully ossified and are separated by large areas called what?
|
[
"fissures",
"pores",
"sutures",
"fontanelles"
] |
D
|
Relavent Documents:
Document 0:::
Flat bones are bones whose principal function is either extensive protection or the provision of broad surfaces for muscular attachment. These bones are expanded into broad, flat plates, as in the cranium (skull), the ilium (pelvis), sternum and the rib cage.
The flat bones are: the occipital, parietal, frontal, nasal, lacrimal, vomer, sternum, ribs, and scapulae.
These bones are composed of two thin layers of compact bone enclosing between them a variable quantity of cancellous bone, which is the location of red bone marrow. In an adult, most red blood cells are formed in flat bones. In the cranial bones, the layers of compact tissue are familiarly known as the tables of the skull; the outer one is thick and tough; the inner is thin, dense, and brittle, and hence is termed the vitreous (glass-like) table. The intervening cancellous tissue is called the diploë, and this, in the nasal region of the skull, becomes absorbed so as to leave spaces filled with air–the paranasal sinuses between the two tables.
Ossification in flat bones
Ossification begins with the formation of layers of undifferentiated connective tissue that hold the area where the flat bone will form. On a baby, those spots are known as fontanelles. The fontanelles contain connective tissue stem cells, which differentiate into osteoblasts, which secrete calcium phosphate into a matrix of canals. They form a ring in between the membranes and begin to expand outwards. As they expand they make a bony matrix.
This hardened matrix forms the body of the bone. Since flat bones are usually thinner than the long bones, they only have red bone marrow, rather than both red and yellow bone marrow (yellow bone marrow being made up of mostly fat). The bone marrow fills the space in the ring of osteoblasts, and eventually fills the bony matrix.
After the bone is completely ossified, the osteoblasts retract their calcium phosphate secreting tendrils, leaving tiny canals in the bony matrix, known as canaliculi. These
Document 1:::
The skull is a bony protective cavity for the brain. The skull is composed of four types of bone, i.e. cranial bones, facial bones, ear ossicles and the hyoid bone. However, two parts are more prominent: the cranium (plural: craniums or crania) and the mandible. In humans, these two parts are the neurocranium (braincase) and the viscerocranium (facial skeleton), which includes the mandible as its largest bone. The skull forms the anterior-most portion of the skeleton and is a product of cephalisation—housing the brain, and several sensory structures such as the eyes, ears, nose, and mouth. In humans these sensory structures are part of the facial skeleton.
Functions of the skull include protection of the brain, fixing the distance between the eyes to allow stereoscopic vision, and fixing the position of the ears to enable sound localisation of the direction and distance of sounds. In some animals, such as horned ungulates (mammals with hooves), the skull also has a defensive function by providing the mount (on the frontal bone) for the horns.
The English word skull is probably derived from Old Norse, while the Latin word comes from a Greek root. The human skull fully develops two years after birth. The skull bones are joined at junctions called sutures.
The skull is made up of a number of fused flat bones, and contains many foramina, fossae, processes, and several cavities or sinuses. In zoology there are openings in the skull called fenestrae.
Structure
Humans
The human skull is the bone structure that forms the head in the human skeleton. It supports the structures of the face and forms a cavity for the brain. Like the skulls of other vertebrates, it protects the brain from injury.
The skull consists of three parts, of different embryological origin—the neurocranium, the sutures, and the facial skeleton (also called the membraneous viscerocranium). The neurocranium (or braincase) forms the protective cranial cavity that surrounds and houses the
Document 2:::
A fontanelle (or fontanel) (colloquially, soft spot) is an anatomical feature of the infant human skull comprising soft membranous gaps (sutures) between the cranial bones that make up the calvaria of a fetus or an infant. Fontanelles allow for stretching and deformation of the neurocranium both during birth and later as the brain expands faster than the surrounding bone can grow. Premature complete ossification of the sutures is called craniosynostosis.
After infancy, the anterior fontanelle is known as the bregma.
Structure
An infant's skull consists of five main bones: two frontal bones, two parietal bones, and one occipital bone. These are joined by fibrous sutures, which allow movement that facilitates childbirth and brain growth.
Posterior fontanelle is triangle-shaped. It lies at the junction between the sagittal suture and lambdoid suture. At birth, the skull features a small posterior fontanelle with an open area covered by a tough membrane, where the two parietal bones adjoin the occipital bone (at the lambda). The posterior fontanelles ossify within 6–8 weeks after birth. This is called intramembranous ossification. The mesenchymal connective tissue turns into bone tissue.
Anterior fontanelle is a diamond-shaped membrane-filled space located between the two frontal and two parietal bones of the developing fetal skull. It persists until approximately 18 months after birth. It is at the junction of the coronal suture and sagittal suture. The fetal anterior fontanelle may be palpated until 18 months. In cleidocranial dysostosis, however, it is often late in closing at 8–24 months or may never close. Examination of an infant includes palpating the anterior fontanelle.
Two smaller fontanelles are located on each side of the head, more anteriorly the sphenoidal or anterolateral fontanelle (between the sphenoid, parietal, temporal, and frontal bones) and more posteriorly the mastoid or posterolateral fontanelle (between the temporal, occipital, and parie
Document 3:::
In anatomy, a process () is a projection or outgrowth of tissue from a larger body. For instance, in a vertebra, a process may serve for muscle attachment and leverage (as in the case of the transverse and spinous processes), or to fit (forming a synovial joint), with another vertebra (as in the case of the articular processes). The word is also used at the microanatomic level, where cells can have processes such as cilia or pedicels. Depending on the tissue, processes may also be called by other terms, such as apophysis, tubercle, or protuberance.
Examples
Examples of processes include:
The many processes of the human skull:
The mastoid and styloid processes of the temporal bone
The zygomatic process of the temporal bone
The zygomatic process of the frontal bone
The orbital, temporal, lateral, frontal, and maxillary processes of the zygomatic bone
The anterior, middle, and posterior clinoid processes and the petrosal process of the sphenoid bone
The uncinate process of the ethmoid bone
The jugular process of the occipital bone
The alveolar, frontal, zygomatic, and palatine processes of the maxilla
The ethmoidal and maxillary processes of the inferior nasal concha
The pyramidal, orbital, and sphenoidal processes of the palatine bone
The coronoid and condyloid processes of the mandible
The xiphoid process at the end of the sternum
The acromion and coracoid processes of the scapula
The coronoid process of the ulna
The radial and ulnar styloid processes
The uncinate processes of ribs found in birds and reptiles
The uncinate process of the pancreas
The spinous, articular, transverse, accessory, uncinate, and mammillary processes of the vertebrae
The trochlear process of the heel
The appendix, which is sometimes called the "vermiform process", notably in Gray's Anatomy
The olecranon process of the ulna
See also
Eminence
Tubercle
Appendage
Pedicle of vertebral arch
Notes
Document 4:::
The ethmoidal notch separates the two orbital plates; it is quadrilateral, and filled, in the articulated skull, by the cribriform plate of the ethmoid.
The margins of the notch present several half-cells which, when united with corresponding half-cells on the upper surface of the ethmoid, complete the ethmoidal sinuses.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The bones of the newborn skull are not fully ossified and are separated by large areas called what?
A. fissures
B. pores
C. sutures
D. fontanelles
Answer:
|
|
sciq-159
|
multiple_choice
|
What do you call the horizontal stems of a strawberry plant that run over the ground surface?
|
[
"stolons",
"sprouts",
"root volunteers",
"climbing vines"
] |
A
|
Relavent Documents:
Document 0:::
Edible plant stems are one part of plants that are eaten by humans. Most plants are made up of stems, roots, leaves, and flowers, and produce fruits containing seeds. Humans most commonly eat the seeds (e.g. maize, wheat), fruit (e.g. tomato, avocado, banana), flowers (e.g. broccoli), leaves (e.g. lettuce, spinach, and cabbage), roots (e.g. carrots, beets), and stems (e.g. asparagus) of many plants. There are also a few edible petioles (also known as leaf stems) such as celery or rhubarb.
Plant stems have a variety of functions. Stems support the entire plant and have buds, leaves, flowers, and fruits. Stems are also a vital connection between leaves and roots. They conduct water and mineral nutrients through xylem tissue from roots upward, and organic compounds and some mineral nutrients through phloem tissue in any direction within the plant. Apical meristems, located at the shoot tip and axillary buds on the stem, allow plants to increase in length, surface, and mass. In some plants, such as cactus, stems are specialized for photosynthesis and water storage.
Modified stems
Typical stems are located above ground, but there are modified stems that can be found either above or below ground. Modified stems located above ground are phylloids, stolons, runners, or spurs. Modified stems located below ground are corms, rhizomes, and tubers.
Detailed description of edible plant stems
Asparagus The edible portion is the rapidly emerging stems that arise from the crowns in the spring.
Bamboo The edible portion is the young shoot (culm).
Birch Trunk sap is drunk as a tonic or rendered into birch syrup, vinegar, beer, soft drinks, and other foods.
Broccoli The edible portion is the peduncle stem tissue, flower buds, and some small leaves.
Cauliflower The edible portion is proliferated peduncle and flower tissue.
Cinnamon Many favor the unique sweet flavor of the inner bark of cinnamon, and it is commonly used as a spice.
Fig The edible portion is stem tissue. The
Document 1:::
In botany, the receptacle refers to vegetative tissues near the end of reproductive stems that are situated below or encase the reproductive organs.
Angiosperms
In angiosperms, the receptacle or torus (an older term is thalamus, as in Thalamiflorae) is the thickened part of a stem (pedicel) from which the flower organs grow. In some accessory fruits, for example the pome and strawberry, the receptacle gives rise to the edible part of the fruit. The fruit of Rubus species is a cluster of drupelets on top of a conical receptacle. When a raspberry is picked, the receptacle separates from the fruit, but in blackberries, it remains attached to the fruit.
In the Daisy family (Compositae or Asteraceae), small individual flowers are arranged on a round or dome-like structure that is also called receptacle.
Algae and bryophyta
In phycology, receptacles occur at the ends of branches of algae mainly in the brown algae or Heterokontophyta in the Order Fucales. They are specialised structures which contain the reproductive organs called conceptacles. Receptacles also function as a structure that captures food.
Document 2:::
A whip is a slender, unbranched shoot or plant. This term is used typically in forestry to refer to unbranched young tree seedlings of approximately 0.5-1.0 m (1 ft 7 in-3 ft 3 in) in height and 2–3 years old, that have been grown for planting out.
Document 3:::
Tubers are a type of enlarged structure used as storage organs for nutrients in some plants. They are used for the plant's perennation (survival of the winter or dry months), to provide energy and nutrients for regrowth during the next growing season, and as a means of asexual reproduction. Stem tubers form thickened rhizomes (underground stems) or stolons (horizontal connections between organisms); well known species with stem tubers include the potato and yam. Some writers also treat modified lateral roots (root tubers) under the definition; these are found in sweet potatoes, cassava, and dahlias.
Terminology
The term originates from the Latin tuber, meaning "lump, bump, swelling".
Some writers define the term "tuber" to mean only structures derived from stems; others use the term for structures derived from stems or roots.
Stem tubers
A stem tuber forms from thickened rhizomes or stolons. The top sides of the tuber produce shoots that grow into typical stems and leaves and the undersides produce roots. They tend to form at the sides of the parent plant and are most often located near the soil surface. The underground tuber is normally a short-lived storage and regenerative organ developing from a shoot that branches off a mature plant. The offspring or new tubers are attached to a parent tuber or form at the end of a hypogeogenous (initiated below ground) rhizome. In the autumn the plant dies, except for the new offspring tubers, which have one dominant bud that in spring regrows a new shoot producing stems and leaves; in summer the tubers decay and new tubers begin to grow. Some plants also form smaller tubers or tubercules that act like seeds, producing small plants that resemble (in morphology and size) seedlings. Some stem tubers are long-lived, such as those of tuberous begonias, but many plants have tubers that survive only until the plants have fully leafed out, at which point the tuber is reduced to a shriveled-up husk.
Stem tubers generally start off as
Document 4:::
Offshoots are lateral shoots that are produced on the main stem of a plant. They may be known colloquially as "suckers", "pups" or "sister plants".
See also
Stolon or runners
Plant anatomy
Plant morphology
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do you call the horizontal stems of a strawberry plant that run over the ground surface?
A. stolons
B. sprouts
C. root volunteers
D. climbing vines
Answer:
|
|
sciq-2739
|
multiple_choice
|
What is the name of anything that has mass and takes up space?
|
[
"carbon",
"depth",
"matter",
"solid"
] |
C
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
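A brief worked check of this example (added for clarity, not part of the original excerpt, and assuming a quasi-static expansion in which the gas does work on its surroundings): for a reversible adiabatic process in an ideal gas,

T\,V^{\gamma-1} = \text{const} \quad\Rightarrow\quad \frac{T_2}{T_1} = \left(\frac{V_1}{V_2}\right)^{\gamma-1},

and since V_2 > V_1 and \gamma > 1, it follows that T_2 < T_1: the temperature decreases. (In a free, unresisted expansion no work is done and the temperature of an ideal gas stays the same, which is exactly the kind of distinction such conceptual questions probe.)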
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
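As a concrete illustration (a minimal sketch with invented skills, not drawn from the original text), a knowledge space can be represented as a family of feasible subsets of the domain, and one requirement usually placed on such a family, closure under union, can be checked directly:

# Minimal sketch: a tiny knowledge space as a family of feasible skill sets.
# A knowledge space must contain the empty set and the full domain and be closed under union.
from itertools import combinations

domain = frozenset({"a", "b", "c"})                      # hypothetical skills
states = [frozenset(), frozenset({"a"}), frozenset({"b"}),
          frozenset({"a", "b"}), domain]                 # feasible knowledge states

def is_knowledge_space(states, domain):
    family = set(states)
    closed = all((x | y) in family for x, y in combinations(family, 2))
    return frozenset() in family and domain in family and closed

print(is_knowledge_space(states, domain))  # True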
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about this domain is then a subset of that set; the set of
Document 2:::
A proof mass or test mass is a known quantity of mass used in a measuring instrument as a reference for the measurement of an unknown quantity.
A mass used to calibrate a weighing scale is sometimes called a calibration mass or calibration weight.
A proof mass that deforms a spring in an accelerometer is sometimes called the seismic mass. In a convective accelerometer, a fluid proof mass may be employed.
See also
Calibration, checking or adjustment by comparison with a standard
Control variable, the experimental element that is constant and unchanged throughout the course of a scientific investigation
Test particle, an idealized model of an object in which all physical properties are assumed to be negligible, except for the property being studied
Document 3:::
This is an index of lists of molecules (i.e. by year, number of atoms, etc.). Millions of molecules have existed in the universe since before the formation of Earth. Three of them, carbon dioxide, water and oxygen were necessary for the growth of life. Although humanity had always been surrounded by these substances, it has not always known what they were composed of.
By century
The following is an index of list of molecules organized by time of discovery of their molecular formula or their specific molecule in case of isomers:
List of compounds
By number of carbon atoms in the molecule
List of compounds with carbon number 1
List of compounds with carbon number 2
List of compounds with carbon number 3
List of compounds with carbon number 4
List of compounds with carbon number 5
List of compounds with carbon number 6
List of compounds with carbon number 7
List of compounds with carbon number 8
List of compounds with carbon number 9
List of compounds with carbon number 10
List of compounds with carbon number 11
List of compounds with carbon number 12
List of compounds with carbon number 13
List of compounds with carbon number 14
List of compounds with carbon number 15
List of compounds with carbon number 16
List of compounds with carbon number 17
List of compounds with carbon number 18
List of compounds with carbon number 19
List of compounds with carbon number 20
List of compounds with carbon number 21
List of compounds with carbon number 22
List of compounds with carbon number 23
List of compounds with carbon number 24
List of compounds with carbon numbers 25-29
List of compounds with carbon numbers 30-39
List of compounds with carbon numbers 40-49
List of compounds with carbon numbers 50+
Other lists
List of interstellar and circumstellar molecules
List of gases
List of molecules with unusual names
See also
Molecule
Empirical formula
Chemical formula
Chemical structure
Chemical compound
Chemical bond
Coordination complex
Document 4:::
In classical physics and general chemistry, matter is any substance that has mass and takes up space by having volume. All everyday objects that can be touched are ultimately composed of atoms, which are made up of interacting subatomic particles, and in everyday as well as scientific usage, matter generally includes atoms and anything made up of them, and any particles (or combination of particles) that act as if they have both rest mass and volume. However it does not include massless particles such as photons, or other energy phenomena or waves such as light or heat. Matter exists in various states (also known as phases). These include classical everyday phases such as solid, liquid, and gas – for example water exists as ice, liquid water, and gaseous steam – but other states are possible, including plasma, Bose–Einstein condensates, fermionic condensates, and quark–gluon plasma.
Usually atoms can be imagined as a nucleus of protons and neutrons, and a surrounding "cloud" of orbiting electrons which "take up space". However this is only somewhat correct, because subatomic particles and their properties are governed by their quantum nature, which means they do not act as everyday objects appear to act – they can act like waves as well as particles, and they do not have well-defined sizes or positions. In the Standard Model of particle physics, matter is not a fundamental concept because the elementary constituents of atoms are quantum entities which do not have an inherent "size" or "volume" in any everyday sense of the word. Due to the exclusion principle and other fundamental interactions, some "point particles" known as fermions (quarks, leptons), and many composites and atoms, are effectively forced to keep a distance from other particles under everyday conditions; this creates the property of matter which appears to us as matter taking up space.
For much of the history of the natural sciences people have contemplated the exact nature of matter. The idea tha
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the name of anything that has mass and takes up space?
A. carbon
B. depth
C. matter
D. solid
Answer:
|
|
scienceQA-8540
|
multiple_choice
|
What do these two changes have in common?
tearing a piece of paper
pouring milk on oatmeal
|
[
"Both are caused by heating.",
"Both are only physical changes.",
"Both are chemical changes.",
"Both are caused by cooling."
] |
B
|
Step 1: Think about each change.
Tearing a piece of paper is a physical change. The paper tears into pieces. But each piece is still made of paper.
Pouring milk on oatmeal is a physical change. The oatmeal and milk form a creamy mixture. But making this mixture does not form a different type of matter.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are physical changes. No new matter is created.
Both are chemical changes.
Both changes are physical changes. They are not chemical changes.
Both are caused by heating.
Neither change is caused by heating.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 2:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
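As a quick illustrative calculation (all values assumed, not taken from the original text): steady one-dimensional conduction through a plane wall follows Fourier's law,

q = \frac{k\,A\,\Delta T}{L} = \frac{0.8 \times 10 \times 20}{0.2}\ \mathrm{W} = 800\ \mathrm{W},

for an assumed thermal conductivity k = 0.8 W/(m·K), wall area A = 10 m², temperature difference ΔT = 20 K and wall thickness L = 0.2 m.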
Sections include:
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering : Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 3:::
Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria.
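To make the idea of deriving a scale from pairwise judgements concrete, the sketch below fits a simple Bradley-Terry-style score to a fixed set of win/loss judgements by gradient ascent. It is only an illustration on assumed data; the actual ACJ engine also chooses which pairs to present adaptively, which is not shown here.

# Minimal sketch (hypothetical judgements): Bradley-Terry-style scaling of pairwise comparisons.
import math

judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]   # (winner, loser) pairs, invented
scripts = sorted({s for pair in judgements for s in pair})
theta = {s: 0.0 for s in scripts}                               # one latent quality score per script

for _ in range(200):                                            # plain gradient ascent on the log-likelihood
    grad = {s: 0.0 for s in scripts}
    for winner, loser in judgements:
        p_win = 1.0 / (1.0 + math.exp(theta[loser] - theta[winner]))
        grad[winner] += 1.0 - p_win
        grad[loser] -= 1.0 - p_win
    for s in scripts:
        theta[s] += 0.1 * grad[s]

print({s: round(v, 2) for s, v in theta.items()})               # higher score = judged better more often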
Introduction
Traditional exam script marking began in Cambridge 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.)
Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental.
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel
Document 4:::
In physics, a dynamical system is said to be mixing if the phase space of the system becomes strongly intertwined, according to at least one of several mathematical definitions. For example, a measure-preserving transformation T is said to be strong mixing if

lim_{t→∞} μ(T_t A ∩ B) = μ(A) μ(B)

whenever A and B are any measurable sets and μ is the associated measure. Other definitions are possible, including weak mixing and topological mixing.
The mathematical definition of mixing is meant to capture the notion of physical mixing. A canonical example is the Cuba libre: suppose one is adding rum (the set A) to a glass of cola. After stirring the glass, the bottom half of the glass (the set B) will contain rum, and it will be in equal proportion as it is elsewhere in the glass. The mixing is uniform: no matter which region B one looks at, some of A will be in that region. A far more detailed, but still informal description of mixing can be found in the article on mixing (mathematics).
Every mixing transformation is ergodic, but there are ergodic transformations which are not mixing.
Physical mixing
The mixing of gases or liquids is a complex physical process, governed by a convective diffusion equation that may involve non-Fickian diffusion as in spinodal decomposition. The convective portion of the governing equation contains fluid motion terms that are governed by the Navier–Stokes equations. When fluid properties such as viscosity depend on composition, the governing equations may be coupled. There may also be temperature effects. It is not clear that fluid mixing processes are mixing in the mathematical sense.
Small rigid objects (such as rocks) are sometimes mixed in a rotating drum or tumbler. The 1969 Selective Service draft lottery was carried out by mixing plastic capsules which contained a slip of paper (marked with a day of the year).
See also
Miscibility
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
tearing a piece of paper
pouring milk on oatmeal
A. Both are caused by heating.
B. Both are only physical changes.
C. Both are chemical changes.
D. Both are caused by cooling.
Answer:
|
sciq-3633
|
multiple_choice
|
What term describes a gene or sequence on a chromosome that co-segregates (shows genetic linkage) with a specific trait?
|
[
"autosomal",
"analogous effect",
"genetic marker",
"nucleic acid chain"
] |
C
|
Relavent Documents:
Document 0:::
In genetics, a locus (: loci) is a specific, fixed position on a chromosome where a particular gene or genetic marker is located. Each chromosome carries many genes, with each gene occupying a different position or locus; in humans, the total number of protein-coding genes in a complete haploid set of 23 chromosomes is estimated at 19,000–20,000.
Genes may possess multiple variants known as alleles, and an allele may also be said to reside at a particular locus. Diploid and polyploid cells whose chromosomes have the same allele at a given locus are called homozygous with respect to that locus, while those that have different alleles at a given locus are called heterozygous. The ordered list of loci known for a particular genome is called a gene map. Gene mapping is the process of determining the specific locus or loci responsible for producing a particular phenotype or biological trait. Association mapping, also known as "linkage disequilibrium mapping", is a method of mapping quantitative trait loci (QTLs) that takes advantage of historic linkage disequilibrium to link phenotypes (observable characteristics) to genotypes (the genetic constitution of organisms), uncovering genetic associations.
Nomenclature
The shorter arm of a chromosome is termed the p arm or p-arm, while the longer arm is the q arm or q-arm. The chromosomal locus of a typical gene, for example, might be written 3p22.1, where:
3 = chromosome 3
p = p-arm
22 = region 2, band 2 (read as "two, two", not "twenty-two")
1 = sub-band 1
Thus the entire locus of the example above would be read as "three P two two point one". The cytogenetic bands are areas of the chromosome either rich in actively-transcribed DNA (euchromatin) or packaged DNA (heterochromatin). They appear differently upon staining (for example, euchromatin appears white and heterochromatin appears black on Giemsa staining). They are counted from the centromere out toward the telomeres.
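For illustration only (not part of the original text), the notation above can be unpacked mechanically; the pattern below is a deliberate simplification that handles only plain band designations such as 3p22.1, not ranges or terminal/centromeric qualifiers:

# Minimal sketch: parse a simple cytogenetic locus such as "3p22.1".
import re

def parse_locus(locus):
    m = re.fullmatch(r"(\d{1,2}|X|Y)([pq])(\d)(\d)(?:\.(\d+))?", locus)
    if not m:
        raise ValueError(f"unrecognised locus: {locus}")
    chromosome, arm, region, band, sub_band = m.groups()
    return {"chromosome": chromosome, "arm": arm,
            "region": region, "band": band, "sub_band": sub_band}

print(parse_locus("3p22.1"))
# {'chromosome': '3', 'arm': 'p', 'region': '2', 'band': '2', 'sub_band': '1'}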
A range of loci is specified in a similar wa
Document 1:::
In biology, the word gene (from a Greek root meaning generation, birth, or gender) can have several different meanings. The Mendelian gene is a basic unit of heredity and the molecular gene is a sequence of nucleotides in DNA that is transcribed to produce a functional RNA. There are two types of molecular genes: protein-coding genes and non-coding genes.
During gene expression, the DNA is first copied into RNA. The RNA can be directly functional or be the intermediate template for a protein that performs a function. (Some viruses have an RNA genome so the genes are made of RNA that may function directly without being copied into RNA. This is an exception to the strict definition of a gene described above.)
The transmission of genes to an organism's offspring is the basis of the inheritance of phenotypic traits. These genes make up different DNA sequences called genotypes. Genotypes along with environmental and developmental factors determine what the phenotypes will be. Most biological traits are under the influence of polygenes (many different genes) as well as gene–environment interactions. Some genetic traits are instantly visible, such as eye color or the number of limbs, and some are not, such as blood type, the risk for specific diseases, or the thousands of basic biochemical processes that constitute life.
A gene can acquire mutations in its sequence, leading to different variants, known as alleles, in the population. These alleles encode slightly different versions of a gene, which may cause different phenotypical traits. Usage of the term "having a gene" (e.g., "good genes," "hair color gene") typically refers to containing a different allele of the same, shared gene. Genes evolve due to natural selection / survival of the fittest and genetic drift of the alleles.
The term gene was introduced by Danish botanist, plant physiologist and geneticist Wilhelm Johannsen in 1909. It was inspired by the Ancient Greek γόνος (gonos), which means offspring and procreation.
Document 2:::
Genetic linkage is the tendency of DNA sequences that are close together on a chromosome to be inherited together during the meiosis phase of sexual reproduction. Two genetic markers that are physically near to each other are unlikely to be separated onto different chromatids during chromosomal crossover, and are therefore said to be more linked than markers that are far apart. In other words, the nearer two genes are on a chromosome, the lower the chance of recombination between them, and the more likely they are to be inherited together. Markers on different chromosomes are perfectly unlinked, although the penetrance of potentially deleterious alleles may be influenced by the presence of other alleles, and these other alleles may be located on other chromosomes than that on which a particular potentially deleterious allele is located.
Genetic linkage is the most prominent exception to Gregor Mendel's Law of Independent Assortment. The first experiment to demonstrate linkage was carried out in 1905. At the time, the reason why certain traits tend to be inherited together was unknown. Later work revealed that genes are physical structures related by physical distance.
The typical unit of genetic linkage is the centimorgan (cM). A distance of 1 cM between two markers means that the markers are separated to different chromosomes on average once per 100 meiotic products, thus once per 50 meioses.
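As a small worked example (the counts are invented): for short distances, the map distance in centimorgans is estimated directly from the recombination frequency observed among offspring, without applying a mapping function.

# Minimal sketch (hypothetical counts): map distance from recombination frequency.
parental = 180        # non-recombinant offspring
recombinant = 20      # recombinant offspring

rf = recombinant / (parental + recombinant)   # recombination frequency = 0.10
map_distance_cM = rf * 100                    # 1% recombination ~ 1 cM, valid only for small distances
print(f"{map_distance_cM:.1f} cM")            # 10.0 cM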
Discovery
Gregor Mendel's Law of Independent Assortment states that every trait is inherited independently of every other trait. But shortly after Mendel's work was rediscovered, exceptions to this rule were found. In 1905, the British geneticists William Bateson, Edith Rebecca Saunders and Reginald Punnett cross-bred pea plants in experiments similar to Mendel's. They were interested in trait inheritance in the sweet pea and were studying two genes—the gene for flower colour (P, purple, and p, red) and the gene affecting the shape of pollen grains (L, long, and l, round). T
Document 3:::
Major gene is a gene with pronounced phenotype expression, in contrast to a modifier gene. Major gene characterizes common expression of oligogenic series, i.e. a small number of genes that determine the same trait.
Major genes control the discontinuous or qualitative characters in contrast of minor genes or polygenes with individually small effects. Major genes segregate and may be easily subject to mendelian analysis. The gene categorization into major and minor determinants is more or less arbitrary. Both of the two types are in all probability only end points in a more or less continuous series of gene action and gene interactions.
The term major gene was introduced into the science of inheritance by Kenneth Mather (1941).
See also
Gene interaction
Minor gene
Gene
Document 4:::
In genetics, dominance is the phenomenon of one variant (allele) of a gene on a chromosome masking or overriding the effect of a different variant of the same gene on the other copy of the chromosome. The first variant is termed dominant and the second is called recessive. This state of having two different variants of the same gene on each chromosome is originally caused by a mutation in one of the genes, either new (de novo) or inherited. The terms autosomal dominant or autosomal recessive are used to describe gene variants on non-sex chromosomes (autosomes) and their associated traits, while those on sex chromosomes (allosomes) are termed X-linked dominant, X-linked recessive or Y-linked; these have an inheritance and presentation pattern that depends on the sex of both the parent and the child (see Sex linkage). Since there is only one copy of the Y chromosome, Y-linked traits cannot be dominant or recessive. Additionally, there are other forms of dominance, such as incomplete dominance, in which a gene variant has a partial effect compared to when it is present on both chromosomes, and co-dominance, in which different variants on each chromosome both show their associated traits.
Dominance is a key concept in Mendelian inheritance and classical genetics. Letters and Punnett squares are used to demonstrate the principles of dominance in teaching, and the use of upper-case letters for dominant alleles and lower-case letters for recessive alleles is a widely followed convention. A classic example of dominance is the inheritance of seed shape in peas. Peas may be round, associated with allele R, or wrinkled, associated with allele r. In this case, three combinations of alleles (genotypes) are possible: RR, Rr, and rr. The RR (homozygous) individuals have round peas, and the rr (homozygous) individuals have wrinkled peas. In Rr (heterozygous) individuals, the R allele masks the presence of the r allele, so these individuals also have round peas. Thus, allele R is d
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What term describes a gene or sequence on a chromosome that co-segregates (shows genetic linkage) with a specific trait?
A. autosomal
B. analogous effect
C. genetic marker
D. nucleic acid chain
Answer:
|
|
sciq-1252
|
multiple_choice
|
Rainwater absorbs carbon dioxide (co 2 ) as it falls. the co 2 combines with water to form what?
|
[
"carbonic acid",
"methane gas",
"carbon monoxide",
"nitrate acid"
] |
A
|
Relavent Documents:
Document 0:::
Carbon dioxide is a chemical compound with the chemical formula CO2. It is made up of molecules that each have one carbon atom covalently double bonded to two oxygen atoms. It is found in the gas state at room temperature, and as the source of available carbon in the carbon cycle, atmospheric CO2 is the primary carbon source for life on Earth. In the air, carbon dioxide is transparent to visible light but absorbs infrared radiation, acting as a greenhouse gas. Carbon dioxide is soluble in water and is found in groundwater, lakes, ice caps, and seawater. When carbon dioxide dissolves in water, it forms carbonate and mainly bicarbonate (HCO3−), which causes ocean acidification as atmospheric CO2 levels increase.
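For reference, the dissolution chemistry summarised above can be written out explicitly (standard equilibria, added here for clarity rather than taken from the original excerpt):

\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3}
\mathrm{H_2CO_3 \rightleftharpoons H^+ + HCO_3^-}
\mathrm{HCO_3^- \rightleftharpoons H^+ + CO_3^{2-}}

Dissolved carbon dioxide thus forms carbonic acid, which dissociates to bicarbonate and carbonate ions, releasing the hydrogen ions responsible for ocean (and rainwater) acidification.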
It is a trace gas in Earth's atmosphere at 421 parts per million (ppm), or about 0.04% (as of May 2022) having risen from pre-industrial levels of 280 ppm or about 0.025%. Burning fossil fuels is the primary cause of these increased concentrations and also the primary cause of climate change.
Its concentration in Earth's pre-industrial atmosphere since late in the Precambrian was regulated by organisms and geological phenomena. Plants, algae and cyanobacteria use energy from sunlight to synthesize carbohydrates from carbon dioxide and water in a process called photosynthesis, which produces oxygen as a waste product. In turn, oxygen is consumed and CO2 is released as waste by all aerobic organisms when they metabolize organic compounds to produce energy by respiration. CO2 is released from organic materials when they decay or combust, such as in forest fires. Since plants require CO2 for photosynthesis, and humans and animals depend on plants for food, CO2 is necessary for the survival of life on earth.
Carbon dioxide is 53% more dense than dry air, but is long lived and thoroughly mixes in the atmosphere. About half of excess CO2 emissions to the atmosphere are absorbed by land and ocean carbon sinks. These sinks can become saturated and are volatile, as decay and wildfires result i
Document 1:::
Carbon sequestration (or carbon storage) is the process of storing carbon in a carbon pool. Carbon sequestration is a naturally occurring process but it can also be enhanced or achieved with technology, for example within carbon capture and storage projects. There are two main types of carbon sequestration: geologic and biologic (also called biosequestration).
Carbon dioxide (CO2) is naturally captured from the atmosphere through biological, chemical, and physical processes. These changes can be accelerated through changes in land use and agricultural practices, such as converting crop land into land for non-crop fast growing plants. Artificial processes have been devised to produce similar effects, including large-scale, artificial capture and sequestration of industrially produced CO2 using subsurface saline aquifers or aging oil fields. Other technologies that work with carbon sequestration include bio-energy with carbon capture and storage, biochar, enhanced weathering, direct air carbon capture and sequestration (DACCS).
Forests, kelp beds, and other forms of plant life absorb carbon dioxide from the air as they grow, and bind it into biomass. However, these biological stores are considered volatile carbon sinks as the long-term sequestration cannot be guaranteed. For example, natural events, such as wildfires or disease, economic pressures and changing political priorities can result in the sequestered carbon being released back into the atmosphere. Carbon dioxide that has been removed from the atmosphere can also be stored in the Earth's crust by injecting it into the subsurface, or in the form of insoluble carbonate salts (mineral sequestration). These methods are considered non-volatile because they remove carbon from the atmosphere and sequester it indefinitely and presumably for a considerable duration (thousands to millions of years).
To enhance carbon sequestration processes in oceans the following technologies have been proposed but none have achieved lar
Document 2:::
Activated carbon, also called activated charcoal, is a form of carbon commonly used to filter contaminants from water and air, among many other uses. It is processed (activated) to have small, low-volume pores that increase the surface area available for adsorption (which is not the same as absorption) or chemical reactions. Activation is analogous to making popcorn from dried corn kernels: popcorn is light, fluffy, and its kernels have a high surface-area-to-volume ratio. Activated is sometimes replaced by active.
Due to its high degree of microporosity, one gram of activated carbon has a surface area in excess of as determined by gas adsorption. Charcoal, before activation, has a specific surface area in the range of . An activation level sufficient for useful application may be obtained solely from high surface area. Further chemical treatment often enhances adsorption properties.
Activated carbon is usually derived from waste products such as coconut husks; waste from paper mills has been studied as a source. These bulk sources are converted into charcoal before being 'activated'. When derived from coal it is referred to as activated coal. Activated coke is derived from coke.
Uses
Activated carbon is used in methane and hydrogen storage, air purification, capacitive deionization, supercapacitive swing adsorption, solvent recovery, decaffeination, gold purification, metal extraction, water purification, medicine, sewage treatment, air filters in respirators, filters in compressed air, teeth whitening, production of hydrogen chloride, edible electronics, and many other applications.
Industrial
One major industrial application involves use of activated carbon in metal finishing for purification of electroplating solutions. For example, it is the main purification technique for removing organic impurities from bright nickel plating solutions. A variety of organic chemicals are added to plating solutions for improving their deposit qualities and for enhancing
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
The carbon cycle is that part of the biogeochemical cycle by which carbon is exchanged among the biosphere, pedosphere, geosphere, hydrosphere, and atmosphere of Earth. Other major biogeochemical cycles include the nitrogen cycle and the water cycle. Carbon is the main component of biological compounds as well as a major component of many minerals such as limestone. The carbon cycle comprises a sequence of events that are key to making Earth capable of sustaining life. It describes the movement of carbon as it is recycled and reused throughout the biosphere, as well as long-term processes of carbon sequestration (storage) to and release from carbon sinks.
To describe the dynamics of the carbon cycle, a distinction can be made between the fast and slow carbon cycle. The fast carbon cycle is also referred to as the biological carbon cycle. Fast carbon cycles can complete within years, moving substances from atmosphere to biosphere, then back to the atmosphere. Slow or geological cycles (also called deep carbon cycle) can take millions of years to complete, moving substances through the Earth's crust between rocks, soil, ocean and atmosphere.
Human activities have disturbed the fast carbon cycle for many centuries by modifying land use, and moreover with the recent industrial-scale mining of fossil carbon (coal, petroleum, and gas extraction, and cement manufacture) from the geosphere. Carbon dioxide in the atmosphere had increased nearly 52% over pre-industrial levels by 2020, forcing greater atmospheric and Earth surface heating by the Sun. The increased carbon dioxide has also caused a reduction in the ocean's pH value and is fundamentally altering marine chemistry. The majority of fossil carbon has been extracted over just the past half century, and rates continue to rise rapidly, contributing to human-caused climate change.
Main compartments
The carbon cycle was first described by Antoine Lavoisier and Joseph Priestley, and popularised by Humphry Davy. The g
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Rainwater absorbs carbon dioxide (co 2 ) as it falls. the co 2 combines with water to form what?
A. carbonic acid
B. methane gas
C. carbon monoxide
D. nitrate acid
Answer:
|
|
sciq-8677
|
multiple_choice
|
What are hurricanes called in the pacific?
|
[
"tornados",
"twisters",
"rainstorms",
"typhoons"
] |
D
|
Relavent Documents:
Document 0:::
This is a list of meteorology topics. The terms relate to meteorology, the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting. (see also: List of meteorological phenomena)
A
advection
aeroacoustics
aerobiology
aerography (meteorology)
aerology
air parcel (in meteorology)
air quality index (AQI)
airshed (in meteorology)
American Geophysical Union (AGU)
American Meteorological Society (AMS)
anabatic wind
anemometer
annular hurricane
anticyclone (in meteorology)
apparent wind
Atlantic Oceanographic and Meteorological Laboratory (AOML)
Atlantic hurricane season
atmometer
atmosphere
Atmospheric Model Intercomparison Project (AMIP)
Atmospheric Radiation Measurement (ARM)
(atmospheric boundary layer [ABL]) planetary boundary layer (PBL)
atmospheric chemistry
atmospheric circulation
atmospheric convection
atmospheric dispersion modeling
atmospheric electricity
atmospheric icing
atmospheric physics
atmospheric pressure
atmospheric sciences
atmospheric stratification
atmospheric thermodynamics
atmospheric window (see under Threats)
B
ball lightning
balloon (aircraft)
baroclinity
barotropity
barometer ("to measure atmospheric pressure")
berg wind
biometeorology
blizzard
bomb (meteorology)
buoyancy
Bureau of Meteorology (in Australia)
C
Canada Weather Extremes
Canadian Hurricane Centre (CHC)
Cape Verde-type hurricane
capping inversion (in meteorology) (see "severe thunderstorms" in paragraph 5)
carbon cycle
carbon fixation
carbon flux
carbon monoxide (see under Atmospheric presence)
ceiling balloon ("to determine the height of the base of clouds above ground level")
ceilometer ("to determine the height of a cloud base")
celestial coordinate system
celestial equator
celestial horizon (rational horizon)
celestial navigation (astronavigation)
celestial pole
Celsius
Center for Analysis and Prediction of Storms (CAPS) (in Oklahoma in the US)
Center for the Study o
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
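As a brief aside on the example above (standard ideal-gas reasoning, not part of the original question): for a quasi-static adiabatic expansion in which the gas does work,

$$ T V^{\gamma - 1} = \text{const} \quad\Rightarrow\quad \frac{T_2}{T_1} = \left(\frac{V_1}{V_2}\right)^{\gamma - 1} < 1 \ \text{ for } V_2 > V_1, $$

so the temperature falls; for a free (Joule) expansion of an ideal gas, by contrast, the temperature is unchanged, which is why the precise wording of such questions matters.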
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 3:::
Wind setup, also known as wind effect or storm effect, refers to the rise in water level in seas or lakes caused by winds pushing the water in a specific direction. As the wind moves across the water's surface, it applies a shear stress to the water, prompting the formation of a wind-driven current. When this current encounters a shoreline, the water level along the shore increases, generating a hydrostatic counterforce in equilibrium with the shear force.
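As a hedged sketch of the balance just described (the symbols below are conventional choices, not taken from the text): in steady state the slope of the water surface adjusts until the hydrostatic pressure gradient balances the wind shear stress,

$$ \frac{\partial \eta}{\partial x} \;\approx\; \frac{\tau_w}{\rho g d}, \qquad \tau_w = c_D\,\rho_{\text{air}}\,U_{10}^{2}, $$

where $\eta$ is the wind setup, $d$ the water depth, $\rho$ the water density, $U_{10}$ the wind speed at 10 m height, and $c_D$ a drag coefficient of order $10^{-3}$. The inverse dependence on depth $d$ is consistent with the observation below that setup is largest where water is forced into a shallow, funnel-shaped area.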
During a storm, wind setup is a component of the overall storm surge. For instance, in The Netherlands, the wind setup during a storm surge can elevate water levels by approximately 3 metres above the normal tide. In the case of cyclones, the wind setup can reach up to 5 metres. This can result in a significant rise in water levels, particularly when the water is forced into a shallow, funnel-shaped area.
Observation
In lakes, water level fluctuations are typically attributed to wind setup. This effect is particularly noticeable in lakes with well-regulated water levels, where the wind setup can be clearly observed. By comparing this with the wind over the lake, the relationship between wind speed, water depth, and fetch length can be accurately determined. This is especially feasible in lakes where water depth remains fairly consistent, such as the IJsselmeer.
At sea, wind setup is usually not directly observable, as the observed water level is a combination of both the tide and the wind setup. To isolate the wind setup, the (calculated) astronomical tide must be subtracted from the observed water level. For example, during the North Sea flood of 1953 at the Vlissingen tidal station (see image), the highest water level along the Dutch coast was recorded at 2.79 metres, but this was not the location of the highest wind setup, which was observed at Scheveningen with a measurement of 3.52 metres.
Notably, the highest wind setup ever recorded in the Netherlands (3.63 metres) was in Dintelsas, Steenbergen in 195
Document 4:::
Storm spotting is a form of weather spotting in which observers watch for the approach of severe weather, monitor its development and progression, and actively relay their findings to local authorities.
History
Storm spotting developed in the United States during the early 1940s. A joint project between the military and the weather bureau saw the deployment of trained military and aviation lightning spotters in areas where ammunitions for the war were manufactured. During 1942, a serious tornado struck a key operations center in Oklahoma and another tornado on May 15, 1943 destroyed parts of the Fort Riley military base located in Kansas. After these two events and a string of other tornado outbreaks, spotter networks became commonplace, and it is estimated that there were over 200 networks by 1945. Their mandate had also changed to include reporting all types of active or severe weather; this included giving snow depth and other reports during the winter as well as fire reports in the summer, along with the more typical severe weather reports associated with thunderstorms. However, spotting was still mainly carried out by trained individuals in either the military, aviation, or law enforcement fields of service. It was not until 1947 that volunteer spotting, as it exists today, was born.
After a series of vicious tornado outbreaks hit the state of Texas in 1947, the state placed special emphasis on volunteer spotting, and the local weather offices began to offer basic training classes to the general public. Spotting required the delivery of timely information so that warnings could be issued as quickly as possible, thus civilian landline phone calls and amateur radio operators provided the most efficient and fastest means of communication. While phone lines were reliable to a degree, a common problem was the loss of service when an approaching storm damaged phone lines in its path. This eventually led to amateur radio becoming the predominant means of communicat
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are hurricanes called in the Pacific?
A. tornados
B. twisters
C. rainstorms
D. typhoons
Answer:
|
|
sciq-5857
|
multiple_choice
|
In how many ways can a living organism obtain chemical energy?
|
[
"seven",
"four",
"three",
"two"
] |
D
|
Relavent Documents:
Document 0:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 3:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 4:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591.
On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In how many ways can a living organism obtain chemical energy?
A. seven
B. four
C. three
D. two
Answer:
|
|
sciq-2297
|
multiple_choice
|
In the body, what essential substance is pumped from the heart into arteries and then eventually into capillaries?
|
[
"blood",
"lymphatic Fluid",
"water",
"spinal fluid"
] |
A
|
Relavent Documents:
Document 0:::
The Starling principle holds that extracellular fluid movements between blood and tissues are determined by differences in hydrostatic pressure and colloid osmotic (oncotic) pressure between plasma inside microvessels and interstitial fluid outside them. The Starling Equation, proposed many years after the death of Starling, describes that relationship in mathematical form and can be applied to many biological and non-biological semipermeable membranes. The classic Starling principle and the equation that describes it have in recent years been revised and extended.
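For reference, the classic form of the equation referred to above is usually written as follows (the revised versions mentioned at the end of the paragraph replace the interstitial oncotic pressure with the oncotic pressure just beneath the endothelial glycocalyx):

$$ J_v = L_p S\left[(P_c - P_i) - \sigma(\pi_c - \pi_i)\right], $$

where $J_v$ is the transendothelial filtration rate, $L_p$ the hydraulic conductivity, $S$ the surface area, $P_c$ and $P_i$ the capillary and interstitial hydrostatic pressures, $\pi_c$ and $\pi_i$ the corresponding oncotic pressures, and $\sigma$ the reflection coefficient for plasma proteins.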
Every day around 8 litres of water (solvent) containing a variety of small molecules (solutes) leaves the blood stream of an adult human and perfuses the cells of the various body tissues. Interstitial fluid drains by afferent lymph vessels to one of the regional lymph node groups, where around 4 litres per day is reabsorbed to the blood stream. The remainder of the lymphatic fluid is rich in proteins and other large molecules and rejoins the blood stream via the thoracic duct which empties into the great veins close to the heart. Filtration from plasma to interstitial (or tissue) fluid occurs in microvascular capillaries and post-capillary venules. In most tissues the micro vessels are invested with a continuous internal surface layer that includes a fibre matrix now known as the endothelial glycocalyx whose interpolymer spaces function as a system of small pores, radius circa 5 nm. Where the endothelial glycocalyx overlies a gap in the junction molecules that bind endothelial cells together (inter endothelial cell cleft), the plasma ultrafiltrate may pass to the interstitial space, leaving larger molecules reflected back into the plasma.
A small number of continuous capillaries are specialised to absorb solvent and solutes from interstitial fluid back into the blood stream through fenestrations in endothelial cells, but the volume of solvent absorbed every day is small.
Discontinuous capillaries as
Document 1:::
The blood circulatory system is a system of organs that includes the heart, blood vessels, and blood which is circulated throughout the entire body of a human or other vertebrate. It includes the cardiovascular system, or vascular system, that consists of the heart and blood vessels (from Greek kardia meaning heart, and from Latin vascula meaning vessels). The circulatory system has two divisions, a systemic circulation or circuit, and a pulmonary circulation or circuit. Some sources use the terms cardiovascular system and vascular system interchangeably with the circulatory system.
The network of blood vessels are the great vessels of the heart including large elastic arteries, and large veins; other arteries, smaller arterioles, capillaries that join with venules (small veins), and other veins. The circulatory system is closed in vertebrates, which means that the blood never leaves the network of blood vessels. Some invertebrates such as arthropods have an open circulatory system. Diploblasts such as sponges, and comb jellies lack a circulatory system.
Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets; it is circulated around the body carrying oxygen and nutrients to the tissues and collecting and disposing of waste materials. Circulated nutrients include proteins and minerals and other components include hemoglobin, hormones, and gases such as oxygen and carbon dioxide. These substances provide nourishment, help the immune system to fight diseases, and help maintain homeostasis by stabilizing temperature and natural pH.
In vertebrates, the lymphatic system is complementary to the circulatory system. The lymphatic system carries excess plasma (filtered from the circulatory system capillaries as interstitial fluid between cells) away from the body tissues via accessory routes that return excess fluid back to blood circulation as lymph. The lymphatic system is a subsystem that is essential for the functioning of the bloo
Document 2:::
In haemodynamics, the body must respond to physical activities, external temperature, and other factors by homeostatically adjusting its blood flow to deliver nutrients such as oxygen and glucose to stressed tissues and allow them to function. Haemodynamic response (HR) allows the rapid delivery of blood to active neuronal tissues. The brain consumes large amounts of energy but does not have a reservoir of stored energy substrates. Since higher processes in the brain occur almost constantly, cerebral blood flow is essential for the maintenance of neurons, astrocytes, and other cells of the brain. This coupling between neuronal activity and blood flow is also referred to as neurovascular coupling.
Vascular anatomy overview
In order to understand how blood is delivered to cranial tissues, it is important to understand the vascular anatomy of the space itself. Large cerebral arteries in the brain split into smaller arterioles, also known as pial arteries. These consist of endothelial cells and smooth muscle cells, and as these pial arteries further branch and run deeper into the brain, they associate with glial cells, namely astrocytes. The intracerebral arterioles and capillaries are unlike systemic arterioles and capillaries in that they do not readily allow substances to diffuse through them; they are connected by tight junctions in order to form the blood brain barrier (BBB). Endothelial cells, smooth muscle, neurons, astrocytes, and pericytes work together in the brain in order to maintain the BBB while still delivering nutrients to tissues and adjusting blood flow in the intracranial space to maintain homeostasis. As they work as a functional neurovascular unit, alterations in their interactions at the cellular level can impair HR in the brain and lead to deviations in normal nervous function.
Mechanisms
Various cell types play a role in HR, including astrocytes, smooth muscle cells, endothelial cells of blood vessels, and pericytes. These cells control whether th
Document 3:::
The endothelium (pl.: endothelia) is a single layer of squamous endothelial cells that line the interior surface of blood vessels and lymphatic vessels. The endothelium forms an interface between circulating blood or lymph in the lumen and the rest of the vessel wall. Endothelial cells form the barrier between vessels and tissue and control the flow of substances and fluid into and out of a tissue.
Endothelial cells in direct contact with blood are called vascular endothelial cells whereas those in direct contact with lymph are known as lymphatic endothelial cells. Vascular endothelial cells line the entire circulatory system, from the heart to the smallest capillaries.
These cells have unique functions that include fluid filtration, such as in the glomerulus of the kidney, blood vessel tone, hemostasis, neutrophil recruitment, and hormone trafficking. Endothelium of the interior surfaces of the heart chambers is called endocardium. An impaired function can lead to serious health issues throughout the body.
Structure
The endothelium is a thin layer of single flat (squamous) cells that line the interior surface of blood vessels and lymphatic vessels.
Endothelium is of mesodermal origin. Both blood and lymphatic capillaries are composed of a single layer of endothelial cells called a monolayer. In straight sections of a blood vessel, vascular endothelial cells typically align and elongate in the direction of fluid flow.
Terminology
The foundational model of anatomy, an index of terms used to describe anatomical structures, makes a distinction between endothelial cells and epithelial cells on the basis of which tissues they develop from, and states that the presence of vimentin rather than keratin filaments separates these from epithelial cells. Many considered the endothelium a specialized epithelial tissue.
Function
The endothelium forms an interface between circulating blood or lymph in the lumen and the rest of the vessel wall. This forms a barrier between v
Document 4:::
Hemodynamics or haemodynamics are the dynamics of blood flow. The circulatory system is controlled by homeostatic mechanisms of autoregulation, just as hydraulic circuits are controlled by control systems. The hemodynamic response continuously monitors and adjusts to conditions in the body and its environment. Hemodynamics explains the physical laws that govern the flow of blood in the blood vessels.
Blood flow ensures the transportation of nutrients, hormones, metabolic waste products, oxygen, and carbon dioxide throughout the body to maintain cell-level metabolism, the regulation of the pH, osmotic pressure and temperature of the whole body, and the protection from microbial and mechanical harm.
Blood is a non-Newtonian fluid, and is most efficiently studied using rheology rather than hydrodynamics. Because blood vessels are not rigid tubes, classic hydrodynamics and fluid mechanics based on the use of classical viscometers are not capable of explaining haemodynamics.
The study of the blood flow is called hemodynamics, and the study of the properties of the blood flow is called hemorheology.
Blood
Blood is a complex liquid. Blood is composed of plasma and formed elements. The plasma contains 91.5% water, 7% proteins and 1.5% other solutes. The formed elements are platelets, white blood cells, and red blood cells. The presence of these formed elements and their interaction with plasma molecules are the main reasons why blood differs so much from ideal Newtonian fluids.
Viscosity of plasma
Normal blood plasma behaves like a Newtonian fluid at physiological rates of shear. A typical value for the viscosity of normal human plasma at 37 °C is 1.4 mN·s/m². The viscosity of normal plasma varies with temperature in the same way as does that of its solvent, water; a 5 °C increase of temperature in the physiological range reduces plasma viscosity by about 10%.
Osmotic pressure of plasma
The osmotic pressure of a solution is determined by the number of particles present
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In the body, what essential substance is pumped from the heart into arteries and then eventually into capillaries?
A. blood
B. lymphatic Fluid
C. water
D. spinal fluid
Answer:
|
|
scienceQA-6445
|
multiple_choice
|
How long is a leather belt?
|
[
"26 feet",
"26 yards",
"26 inches",
"26 miles"
] |
C
|
The best estimate for the length of a leather belt is 26 inches.
26 feet, 26 yards, and 26 miles are all too long.
|
Relavent Documents:
Document 0:::
Suken is a world mathematics certification program and examination established in Japan in 1988.
Outline of Suken
Each Suken level (Kyu) has two sections. Section 1 is calculation and Section 2 is application.
Passing Rate
In order to pass the Suken, you must correctly answer approximately 70% of section 1 and approximately 60% of section 2.
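A minimal sketch of that pass rule in Python, assuming the approximate thresholds are treated as hard cutoffs (the exact boundary handling is an assumption, since the text says "approximately"):

```python
def passes_suken(sec1_correct: int, sec1_total: int,
                 sec2_correct: int, sec2_total: int) -> bool:
    """Return True if both sections meet the approximate Suken thresholds."""
    return (sec1_correct / sec1_total >= 0.70 and
            sec2_correct / sec2_total >= 0.60)

# Example: 21/30 on section 1 (70%) and 12/20 on section 2 (60%) would pass.
print(passes_suken(21, 30, 12, 20))  # True
```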
Levels
Level 5 (7th grade math)
The examination time is 180 minutes for section 1, 60 minutes for section 2.
Level 4 (8th grade)
The examination time is 60 minutes for section 1, 60 minutes for section 2.
Level 3 (9th grade)
The examination time is 60 minutes for section 1, 60 minutes for section 2.
Levels 5 - 3 include the following subjects:
Calculation with negative numbers
Inequalities
Simultaneous equations
Congruency and similarities
Square roots
Factorization
Quadratic equations and functions
The Pythagorean theorem
Probabilities
Level pre-2 (10th grade)
The examination time is 60 minutes for section 1, 90 minutes for section 2.
Level 2 (11th grade)
The examination time is 60 minutes for section 1, 90 minutes for section 2.
Level pre-1st (12th grade)
The examination time is 60 minutes for section 1, 120 minutes for section 2.
Levels pre-2 - pre-1 include the following subjects:
Quadratic functions
Trigonometry
Sequences
Vectors
Complex numbers
Basic calculus
Matrices
Simple curved lines
Probability
Level 1 (undergrad and graduate)
The examination time is 60 minutes for section 1, 120 minutes for section 2.
Level 1 includes the following subjects:
Linear algebra
Vectors
Matrices
Differential equations
Statistics
Probability
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
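As a hedged illustration of the structure just described: in Knowledge Space Theory the family of feasible states contains the empty set and the full domain and is closed under union; the tiny domain and states below are invented for the example, not taken from the source.

```python
from itertools import combinations

def is_knowledge_space(domain: frozenset, states: set[frozenset]) -> bool:
    """Check the defining closure properties of a knowledge space."""
    if frozenset() not in states or domain not in states:
        return False
    # Closure under union: the union of any two feasible states is feasible.
    return all((a | b) in states for a, b in combinations(states, 2))

# Toy example: skills a, b, c where skill b presupposes skill a.
domain = frozenset("abc")
states = {frozenset(), frozenset("a"), frozenset("c"), frozenset("ac"),
          frozenset("ab"), frozenset("abc")}
print(is_knowledge_space(domain, states))  # True
```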
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge is then a subset of this set; the set of
Document 3:::
Dragon silk is a material produced by Kraig Biocraft Laboratories of Ann Arbor, Michigan from genetically modified silkworms for use in body armor. Dragon silk combines the elasticity and strength of spider silk. It has a tensile strength as high as 1.79 gigapascals (as much as 37% higher than reported spider silk) and an elasticity above 38%, exceeding the maximum reported values for spider silk. Dragon silk is reported to be more flexible than Monster silk and stronger than "Big Red," a recombinant spider silk designed for increased strength.
Properties
Mechanical properties
Dragon silk has mechanical properties exceeding those of any other fiber reported to date.
Tensile Strength
In comparison, Dragon silk's tensile strength is higher than that of most steels (450–2000 MPa). The strength of Dragon silk is reported to be as high as 1.79 GPa, which is 37% higher than widely reported values for spider silk. Its tensile strength is also higher than that of "Big Red" silk, which had been reported as the strongest fiber ever made; "Big Red" silk was developed in the same laboratories as Dragon silk.
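A quick arithmetic check of the figures quoted above (the implied spider-silk baseline is derived here, not stated in the source):

```python
# Back out the spider-silk strength implied by "1.79 GPa, 37% higher".
dragon_silk_gpa = 1.79
implied_spider_silk_gpa = dragon_silk_gpa / 1.37
print(f"Implied spider-silk baseline: {implied_spider_silk_gpa:.2f} GPa")  # ~1.31 GPa

# For comparison, the steel range quoted in the text, converted to GPa.
steel_range_gpa = (450 / 1000, 2000 / 1000)
print(f"Steel tensile strength range: {steel_range_gpa[0]:.2f} to {steel_range_gpa[1]:.2f} GPa")
```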
Flexibility
Dragon silk is far more flexible than Kevlar (the material used by the US Army to develop body armor). Its flexibility is 38% higher than that of normal spider silk, and it is noticeably more flexible than the "Monster silk" from the same lab. In percentage terms, Kevlar's flexibility is about 3% while Dragon silk's is 30% to 40%.
History
In 2010, the scientists discovered the first spider silk, which was a great achievement, as it is one of the strongest natural fibers. The problem was that spiders are cannibalistic and territorial, so it is impractical to create a cost-effective spider farm. To overcome this problem, scientists at Kraig Labs developed a method for making spider silk from silkworms. In 2011, Malcolm J. Fraser, Donald L. Jarvis and their colleagues published a study in which they described how they removed the silkworm's silk-making protein and replaced it with the spider's protein to build unique
Document 4:::
Further Mathematics is the title given to a number of advanced secondary mathematics courses. The term "Higher and Further Mathematics", and the term "Advanced Level Mathematics", may also refer to any of several advanced mathematics courses at many institutions.
In the United Kingdom, Further Mathematics describes a course studied in addition to the standard mathematics AS-Level and A-Level courses. In the state of Victoria in Australia, it describes a course delivered as part of the Victorian Certificate of Education (see § Australia (Victoria) for a more detailed explanation). Globally, it describes a course studied in addition to GCE AS-Level and A-Level Mathematics, or one which is delivered as part of the International Baccalaureate Diploma.
In other words, more mathematics can also be referred to as part of advanced mathematics, or advanced level math.
United Kingdom
Background
A qualification in Further Mathematics involves studying both pure and applied modules. Whilst the pure modules (formerly known as Pure 4–6 or Core 4–6, now known as Further Pure 1–3, where 4 exists for the AQA board) build on knowledge from the core mathematics modules, the applied modules may start from first principles.
The structure of the qualification varies between exam boards.
With regard to Mathematics degrees, most universities do not require Further Mathematics, and may incorporate foundation math modules or offer "catch-up" classes covering any additional content. Exceptions are the University of Warwick, the University of Cambridge which requires Further Mathematics to at least AS level; University College London requires or recommends an A2 in Further Maths for its maths courses; Imperial College requires an A in A level Further Maths, while other universities may recommend it or may promise lower offers in return. Some schools and colleges may not offer Further mathematics, but online resources are available
Although the subject has about 60% of its cohort obtainin
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How long is a leather belt?
A. 26 feet
B. 26 yards
C. 26 inches
D. 26 miles
Answer:
|
sciq-5940
|
multiple_choice
|
What keeps glaciers from forming in water?
|
[
"movement",
"warmth",
"skin",
"salt"
] |
B
|
Relavent Documents:
Document 0:::
Blood Falls is an outflow of an iron oxide–tainted plume of saltwater, flowing from the tongue of Taylor Glacier onto the ice-covered surface of West Lake Bonney in the Taylor Valley of the McMurdo Dry Valleys in Victoria Land, East Antarctica.
Iron-rich hypersaline water sporadically emerges from small fissures in the ice cascades. The saltwater source is a subglacial pool of unknown size overlain by about of ice several kilometers from its tiny outlet at Blood Falls.
The reddish deposit was found in 1911 by the Australian geologist Thomas Griffith Taylor, who first explored the valley that bears his name. The Antarctica pioneers first attributed the red color to red algae, but later it was proven to be due to iron oxides.
Geochemistry
Poorly soluble hydrous ferric oxides are deposited at the surface of ice after the ferrous ions present in the unfrozen saltwater are oxidized in contact with atmospheric oxygen. The more soluble ferrous ions initially are dissolved in old seawater trapped in an ancient pocket remaining from the Antarctic Ocean when a fjord was isolated by the glacier in its progression during the Miocene period, some 5 million years ago, when the sea level was higher than today.
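As a hedged illustration of the oxidation step described above (a textbook overall reaction, not quoted from the source):

$$ 4\,\mathrm{Fe^{2+}} + \mathrm{O_2} + 10\,\mathrm{H_2O} \;\longrightarrow\; 4\,\mathrm{Fe(OH)_3} + 8\,\mathrm{H^+} $$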
Unlike most Antarctic glaciers, the Taylor Glacier is not frozen to the bedrock, probably because of the presence of salts concentrated by the crystallization of the ancient seawater imprisoned below it. Salt cryo-concentration occurred in the deep relict seawater when pure ice crystallized and expelled its dissolved salts as it cooled down because of the heat exchange of the captive liquid seawater with the enormous ice mass of the glacier. As a consequence, the trapped seawater was concentrated in brines with a salinity two to three times that of the mean ocean water. A second mechanism sometimes also explaining the formation of hypersaline brines is the water evaporation of surface lakes directly exposed to the very dry polar atmosphere in the McMurdo Dry Valleys. Th
Document 1:::
Sea ice arises as seawater freezes. Because ice is less dense than water, it floats on the ocean's surface (as does fresh water ice, which has an even lower density). Sea ice covers about 7% of the Earth's surface and about 12% of the world's oceans. Much of the world's sea ice is enclosed within the polar ice packs in the Earth's polar regions: the Arctic ice pack of the Arctic Ocean and the Antarctic ice pack of the Southern Ocean. Polar packs undergo a significant yearly cycling in surface extent, a natural process upon which depends the Arctic ecology, including the ocean's ecosystems. Due to the action of winds, currents and temperature fluctuations, sea ice is very dynamic, leading to a wide variety of ice types and features. Sea ice may be contrasted with icebergs, which are chunks of ice shelves or glaciers that calve into the ocean. Depending on location, sea ice expanses may also incorporate icebergs.
General features and dynamics
Sea ice does not simply grow and melt. During its lifespan, it is very dynamic. Due to the combined action of winds, currents, water temperature and air temperature fluctuations, sea ice expanses typically undergo a significant amount of deformation. Sea ice is classified according to whether or not it is able to drift and according to its age.
Fast ice versus drift (or pack) ice
Sea ice can be classified according to whether or not it is attached (or frozen) to the shoreline (or between shoals or to grounded icebergs). If attached, it is called landfast ice, or more often, fast ice (from fastened). Alternatively and unlike fast ice, drift ice occurs further offshore in very wide areas and encompasses ice that is free to move with currents and winds. The physical boundary between fast ice and drift ice is the fast ice boundary. The drift ice zone may be further divided into a shear zone, a marginal ice zone and a central pack. Drift ice consists of floes, individual pieces of sea ice or more across. There are names for var
Document 2:::
Melt ponds are pools of open water that form on sea ice in the warmer months of spring and summer. The ponds are also found on glacial ice and ice shelves. Ponds of melted water can also develop under the ice, which may lead to the formation of thin underwater ice layers called false bottoms.
Melt ponds are usually darker than the surrounding ice, and their distribution and size is highly variable. They absorb solar radiation rather than reflecting it as ice does and, thereby, have a significant influence on Earth's radiation balance. This differential, which had not been scientifically investigated until recently, has a large effect on the rate of ice melting and the extent of ice cover.
Melt ponds can melt through to the ocean's surface. Seawater entering the pond increases the melt rate because the salty water of the ocean is warmer than the fresh water of the pond. The increase in salinity also depresses the water's freezing point.
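As a rough quantitative note on that freezing-point depression (a standard linear approximation for seawater, not taken from the source):

$$ T_f \;\approx\; -0.054\,^{\circ}\mathrm{C} \times S \quad (S\ \text{in psu}), \qquad S \approx 35 \;\Rightarrow\; T_f \approx -1.9\,^{\circ}\mathrm{C}. $$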
Water from melt ponds over land surface can run into crevasses or moulins – tubes leading under ice sheets or glaciers – turning into meltwater. The water may reach the underlying rock. The effect is an increase in the rate of ice flow to the oceans, as the fluid behaves like a lubricant in the basal sliding of glaciers.
Effects of melt ponds
The effects of melt ponds are diverse (this subsection refers to melt ponds on ice sheets and ice shelves). Research by Ted Scambos, of the National Snow and Ice Data Center, has supported the melt water fracturing theory that suggests the melting process associated with melt ponds has a substantial effect on ice shelf disintegration.
Seasonal meltwater that ponds and penetrates under glaciers causes seasonal acceleration and deceleration of ice flow, affecting whole ice sheets. Accumulated changes caused by ponding on ice sheets appear in the earthquake record of Greenland and other glaciers:
"Quakes ranged from six to 15 per year from 1993 to 2002, then jumped to 20 in 2003, 23 in 2004, and 32 in th
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
The Younger Dryas, which occurred circa 12,900 to 11,700 years BP, was a return to glacial conditions which temporarily reversed the gradual climatic warming after the Last Glacial Maximum, which lasted from circa 27,000 to 20,000 years BP. The Younger Dryas was the last stage of the Pleistocene epoch that spanned from 2,580,000 to 11,700 years BP and it preceded the current, warmer Holocene epoch. The Younger Dryas was the most severe and longest lasting of several interruptions to the warming of the Earth's climate, and it was preceded by the Late Glacial Interstadial (also called the Bølling–Allerød interstadial), an interval of relative warmth that lasted from 14,670 to 12,900 BP.
The change was relatively sudden, took place over decades, and resulted in a decline of temperatures in Greenland by 4~10 °C (7.2~18 °F), and advances of glaciers and drier conditions over much of the temperate Northern Hemisphere. A number of theories have been put forward about the cause, and the hypothesis historically most supported by scientists is that the Atlantic meridional overturning circulation, which transports warm water from the Equator towards the North Pole, was interrupted by an influx of fresh, cold water from North America into the Atlantic. However, several issues do exist with this hypothesis, one of which is the lack of a clear geomorphological route for the meltwater. In fact, the originator of the meltwater hypothesis, Wallace Broecker, stated in 2010 that "The long-held scenario that the Younger Dryas was a one-time outlier triggered by a flood of water stored in proglacial Lake Agassiz has fallen from favor due to lack of a clear geomorphic signature at the correct time and place on the landscape". A volcanic trigger has been proposed more recently, and the presence of anomalously high levels of volcanism immediately preceding the onset of the Younger Dryas has been confirmed in both ice cores and cave deposits.
The Younger Dryas did not affect the climate
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What keeps glaciers from forming in water?
A. movement
B. warmth
C. skin
D. salt
Answer:
|
|
sciq-3556
|
multiple_choice
|
What factors determine the effect of a gene?
|
[
"genomes",
"ribosomes",
"alleles",
"metabolites"
] |
C
|
Relavent Documents:
Document 0:::
The Generalist Genes hypothesis of learning abilities and disabilities was originally coined in an article by Plomin & Kovas (2005).
The Generalist Genes hypothesis suggests that most genes associated with common learning disabilities and abilities are generalist in three ways.
Firstly, the same genes that influence common learning abilities (e.g., high reading aptitude) are also responsible for common learning disabilities (e.g., reading disability): they are strongly genetically correlated.
Secondly, many of the genes associated with one aspect of a learning disability (e.g., vocabulary problems) also influence other aspects of this learning disability (e.g., grammar problems).
Thirdly, genes that influence one learning disability (e.g., reading disability) are largely the same as those that influence other learning disabilities (e.g., mathematics disability).
The Generalist Genes hypothesis has important implications for education, cognitive sciences and molecular genetics.
Document 1:::
Quantitative genetics deals with quantitative traits, which are phenotypes that vary continuously (such as height or mass)—as opposed to discretely identifiable phenotypes and gene-products (such as eye-colour, or the presence of a particular biochemical).
Both branches use the frequencies of different alleles of a gene in breeding populations (gamodemes), and combine them with concepts from simple Mendelian inheritance to analyze inheritance patterns across generations and descendant lines. While population genetics can focus on particular genes and their subsequent metabolic products, quantitative genetics focuses more on the outward phenotypes, and makes only summaries of the underlying genetics.
Due to the continuous distribution of phenotypic values, quantitative genetics must employ many other statistical methods (such as the effect size, the mean and the variance) to link phenotypes (attributes) to genotypes. Some phenotypes may be analyzed either as discrete categories or as continuous phenotypes, depending on the definition of cut-off points, or on the metric used to quantify them. Mendel himself had to discuss this matter in his famous paper, especially with respect to his peas' attribute tall/dwarf, which actually was "length of stem". Analysis of quantitative trait loci, or QTL, is a more recent addition to quantitative genetics, linking it more directly to molecular genetics.
Gene effects
In diploid organisms, the average genotypic "value" (locus value) may be defined by the allele "effect" together with a dominance effect, and also by how genes interact with genes at other loci (epistasis). The founder of quantitative genetics - Sir Ronald Fisher - perceived much of this when he proposed the first mathematics of this branch of genetics.
Being a statistician, he defined the gene effects as deviations from a central value—enabling the use of statistical concepts such as mean and variance, which use this idea. The central value he chose for the ge
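A compact illustration of the "deviations from a central value" idea described above, in conventional quantitative-genetics notation (the symbols are not taken from the source): for one locus with two alleles at frequencies $p$ and $q$, the genotypic values are measured from the midpoint of the two homozygotes,

$$ \text{A}_1\text{A}_1: +a, \qquad \text{A}_1\text{A}_2: d, \qquad \text{A}_2\text{A}_2: -a, $$

so that the population mean becomes $M = a(p - q) + 2pqd$, which is the usual starting point for defining the means and variances of gene effects.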
Document 2:::
Genetics (from Ancient Greek genetikos, "genitive", and that from genesis, "origin"), a discipline of biology, is the science of heredity and variation in living organisms.
Articles (arranged alphabetically) related to genetics include:
Document 3:::
A major gene is a gene with pronounced phenotypic expression, in contrast to a modifier gene. Major genes characterize the common expression of an oligogenic series, i.e. a small number of genes that determine the same trait.
Major genes control discontinuous or qualitative characters, in contrast to minor genes or polygenes with individually small effects. Major genes segregate and are easily subject to Mendelian analysis. The categorization of genes into major and minor determinants is more or less arbitrary; both types are in all probability only end points in a more or less continuous series of gene action and gene interaction.
The term major gene was introduced into the science of inheritance by Kenneth Mather (1941).
See also
Gene interaction
Minor gene
Gene
Document 4:::
Genome-wide complex trait analysis (GCTA), also known as genome-based restricted maximum likelihood (GREML), is a statistical method for heritability estimation in genetics, which quantifies the total additive contribution of a set of genetic variants to a trait. GCTA is typically applied to common single nucleotide polymorphisms (SNPs) on a genotyping array (or "chip"), and the resulting estimate is thus termed "chip" or "SNP" heritability.
GCTA operates by directly quantifying the chance genetic similarity of unrelated individuals and comparing it to their measured similarity on a trait; if two unrelated individuals are relatively similar genetically and also have similar trait measurements, then the measured genetics are likely to causally influence that trait, and the correlation can to some degree tell how much. This can be illustrated by plotting the squared pairwise trait differences between individuals against their estimated degree of relatedness. GCTA makes a number of modeling assumptions and whether/when these assumptions are satisfied continues to be debated.
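A hedged sketch of the mixed model underlying this comparison, in conventional notation (the symbols are not taken from the source):

$$ \mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{g} + \boldsymbol{\varepsilon}, \qquad \mathbf{g} \sim N(\mathbf{0}, \mathbf{A}\sigma_g^2), \quad \boldsymbol{\varepsilon} \sim N(\mathbf{0}, \mathbf{I}\sigma_e^2), \qquad h^2_{\text{SNP}} = \frac{\sigma_g^2}{\sigma_g^2 + \sigma_e^2}, $$

where $\mathbf{A}$ is the genomic relationship matrix computed from the SNPs and the variance components $\sigma_g^2$ and $\sigma_e^2$ are estimated by restricted maximum likelihood (REML).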
The GCTA framework has also been extended in a number of ways: quantifying the contribution from multiple SNP categories (i.e. functional partitioning); quantifying the contribution of Gene-Environment interactions; quantifying the contribution of non-additive/non-linear effects of SNPs; and bivariate analyses of multiple phenotypes to quantify their genetic covariance (co-heritability or genetic correlation).
GCTA estimates have implications for the potential for discovery from Genome-wide Association Studies (GWAS) as well as the design and accuracy of polygenic scores. GCTA estimates from common variants are typically substantially lower than other estimates of total or narrow-sense heritability (such as from twin or kinship studies), which has contributed to the debate over the Missing heritability problem.
History
Estimation in biology/animal breeding using standard ANOVA/REML methods of variance components such as heritability,
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What factors determine the effect of a gene?
A. genomes
B. ribosomes
C. alleles
D. metabolites
Answer:
|
|
sciq-9516
|
multiple_choice
|
What is the main reason for weather change?
|
[
"seasons",
"moving air masses",
"moving energy masses",
"climate change"
] |
B
|
Relavent Documents:
Document 0:::
This is a list of meteorology topics. The terms relate to meteorology, the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting. (see also: List of meteorological phenomena)
A
advection
aeroacoustics
aerobiology
aerography (meteorology)
aerology
air parcel (in meteorology)
air quality index (AQI)
airshed (in meteorology)
American Geophysical Union (AGU)
American Meteorological Society (AMS)
anabatic wind
anemometer
annular hurricane
anticyclone (in meteorology)
apparent wind
Atlantic Oceanographic and Meteorological Laboratory (AOML)
Atlantic hurricane season
atmometer
atmosphere
Atmospheric Model Intercomparison Project (AMIP)
Atmospheric Radiation Measurement (ARM)
(atmospheric boundary layer [ABL]) planetary boundary layer (PBL)
atmospheric chemistry
atmospheric circulation
atmospheric convection
atmospheric dispersion modeling
atmospheric electricity
atmospheric icing
atmospheric physics
atmospheric pressure
atmospheric sciences
atmospheric stratification
atmospheric thermodynamics
atmospheric window (see under Threats)
B
ball lightning
balloon (aircraft)
baroclinity
barotropity
barometer ("to measure atmospheric pressure")
berg wind
biometeorology
blizzard
bomb (meteorology)
buoyancy
Bureau of Meteorology (in Australia)
C
Canada Weather Extremes
Canadian Hurricane Centre (CHC)
Cape Verde-type hurricane
capping inversion (in meteorology) (see "severe thunderstorms" in paragraph 5)
carbon cycle
carbon fixation
carbon flux
carbon monoxide (see under Atmospheric presence)
ceiling balloon ("to determine the height of the base of clouds above ground level")
ceilometer ("to determine the height of a cloud base")
celestial coordinate system
celestial equator
celestial horizon (rational horizon)
celestial navigation (astronavigation)
celestial pole
Celsius
Center for Analysis and Prediction of Storms (CAPS) (in Oklahoma in the US)
Center for the Study o
Document 1:::
The following outline is provided as an overview of and topical guide to the field of Meteorology.
Meteorology – the interdisciplinary scientific study of the Earth's atmosphere, with the primary focus being to understand, explain, and forecast weather events. Meteorology is applied to and employed by a wide variety of fields, including the military, energy production, transport, agriculture, and construction.
Essence of meteorology
Meteorology
Climate – the average and variations of weather in a region over long periods of time.
Meteorology – the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting (in contrast with climatology).
Weather – the set of all the phenomena in a given atmosphere at a given time.
Branches of meteorology
Microscale meteorology – the study of atmospheric phenomena about 1 km or less, smaller than mesoscale, including small and generally fleeting cloud "puffs" and other small cloud features
Mesoscale meteorology – the study of weather systems from about 5 kilometers to several hundred kilometers across, smaller than synoptic scale systems but larger than microscale and storm-scale cumulus systems, such as sea breezes, squall lines, and mesoscale convective complexes
Synoptic scale meteorology – the study of weather systems with horizontal length scales of the order of 1000 kilometres (about 620 miles) or more
Methods in meteorology
Surface weather analysis – a special type of weather map that provides a view of weather elements over a geographical area at a specified time based on information from ground-based weather stations
Weather forecasting
Weather forecasting – the application of science and technology to predict the state of the atmosphere for a future time and a given location
Data collection
Pilot Reports
Weather maps
Weather map
Surface weather analysis
Forecasts and reporting of
Atmospheric pressure
Dew point
High-pressure area
Ice
Black ice
Frost
Low-pressure area
Precipitation
Document 2:::
In earth science, global surface temperature (GST; sometimes referred to as global mean surface temperature, GMST, or global average surface temperature) is calculated by averaging the temperatures over sea and land. Periods of global cooling and global warming have alternated throughout Earth's history.
Series of reliable global temperature measurements began in the 1850—1880 time frame. Through 1940, the average annual temperature increased, but was relatively stable between 1940 and 1975. Since 1975, it has increased by roughly 0.15 °C to 0.20 °C per decade, to at least 1.1 °C (1.9 °F) above 1880 levels. The current annual GMST is about , though monthly temperatures can vary almost above or below this figure.
Sea levels have risen and fallen sharply during Earth's 4.6 billion year history. However, recent global sea level rise, driven by increasing global surface temperatures, has increased over the average rate of the past two to three thousand years. The continuation or acceleration of this trend will cause significant changes in the world's coastlines.
Background
In the 1860s, physicist John Tyndall recognized the Earth's natural greenhouse effect and suggested that slight changes in the atmospheric composition could bring about climatic variations. In 1896, a seminal paper by Swedish scientist Svante Arrhenius first predicted that changes in the levels of carbon dioxide in the atmosphere could substantially alter the surface temperature through the greenhouse effect.
Changes in global temperatures over the past century provide evidence for the effects of increasing greenhouse gases. When the climate system reacts to such changes, climate change follows. Measurement of the GST (global surface temperature) is one of the many lines of evidence supporting the scientific consensus on climate change, which is that humans are causing warming of Earth's climate system.
Warming oceans
With the Earth's temperature increasing, the ocean has absorbed much of th
Document 3:::
In atmospheric science, an atmospheric model is a mathematical model constructed around the full set of primitive, dynamical equations which govern atmospheric motions. It can supplement these equations with parameterizations for turbulent diffusion, radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, the kinematic effects of terrain, and convection. Most atmospheric models are numerical, i.e. they discretize equations of motion. They can predict microscale phenomena such as tornadoes and boundary layer eddies, sub-microscale turbulent flow over buildings, as well as synoptic and global flows. The horizontal domain of a model is either global, covering the entire Earth, or regional (limited-area), covering only part of the Earth. The different types of models run are thermotropic, barotropic, hydrostatic, and nonhydrostatic. Some of the model types make assumptions about the atmosphere which lengthens the time steps used and increases computational speed.
Forecasts are computed using mathematical equations for the physics and dynamics of the atmosphere. These equations are nonlinear and are impossible to solve exactly. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods. Global models often use spectral methods for the horizontal dimensions and finite-difference methods for the vertical dimension, while regional models usually use finite-difference methods in all three dimensions. For specific locations, model output statistics use climate information, output from numerical weather prediction, and current surface weather observations to develop statistical relationships which account for model bias and resolution issues.
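As a toy illustration of the finite-difference approach mentioned above, the Python sketch below advects a one-dimensional "weather feature" with a first-order upwind scheme on a periodic domain. It is not taken from any operational model; the grid size, wind speed, time step, and CFL number are arbitrary choices made for the example:

import numpy as np

# Toy 1D linear advection du/dt + c*du/dx = 0, solved with a first-order
# upwind finite-difference scheme on a periodic domain.
nx, c = 200, 1.0
dx = 1.0 / nx
dt = 0.4 * dx / c                      # CFL number 0.4 keeps the explicit scheme stable
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)    # initial Gaussian "feature" centred at x = 0.3

for _ in range(200):                   # integrate forward in time
    u = u - c * dt / dx * (u - np.roll(u, 1))   # upwind difference for c > 0

print("feature peak is now near x =", x[np.argmax(u)])

After 200 steps the peak should sit near x = 0.7, visibly smeared by the numerical diffusion that is characteristic of low-order finite-difference schemes.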
Types
The main assumption made by the thermotropic model is that while the magnitude of the thermal wind may change, its direction does not change with respect to height, and thus the baroclinicity in the atmosphere can be simulated usi
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the main reason for weather change?
A. seasons
B. moving air masses
C. moving energy masses
D. climate change
Answer:
|
|
sciq-11132
|
multiple_choice
|
What is the name of the command center of the cell?
|
[
"mitochondria",
"molecules",
"nucleus",
"vacuole"
] |
C
|
Relavent Documents:
Document 0:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than those of the later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA, as well as some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
Document 1:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, a double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 2:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and of movement.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i
Document 3:::
Centrocones are sub-cellular structures involved in the cell division of apicomplexan parasites. Centrocones are a nuclear sub-compartment in parasites of Toxoplasma gondii that work in apposition with the centrosome to coordinate the budding process in mitosis. The centrocone concentrates and organizes various regulatory factors involved in the early stages of mitosis, including the ECR1 and TgCrk5 proteins. The membrane occupation and recognition nexus 1 (MORN1) protein is also contained in this structure and is linked to human diseases, though not much is yet known about the connection between the centrocone and the MORN1 protein.
Centrocones are located in the nuclear envelope and contain spindles that are used in mitosis. Chromosomes are contained within these spindles of the centrocone throughout the cell cycle.
Document 4:::
This lecture, named in memory of Keith R. Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year.
Lecturers
Source: ASCB
See also
List of biology awards
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the name of the command center of the cell?
A. mitochondria
B. molecules
C. nucleus
D. vacuole
Answer:
|
|
sciq-9186
|
multiple_choice
|
What does the esophagus connect to at its bottom end?
|
[
"the stomach",
"the large intestine",
"the larynx",
"the colon"
] |
A
|
Relavent Documents:
Document 0:::
The esophagus (American English) or oesophagus (British English, see spelling differences; both ; : (o)esophagi or (o)esophaguses), colloquially known also as the food pipe or gullet, is an organ in vertebrates through which food passes, aided by peristaltic contractions, from the pharynx to the stomach. The esophagus is a fibromuscular tube, about long in adults, that travels behind the trachea and heart, passes through the diaphragm, and empties into the uppermost region of the stomach. During swallowing, the epiglottis tilts backwards to prevent food from going down the larynx and lungs. The word oesophagus is from Ancient Greek οἰσοφάγος (oisophágos), from οἴσω (oísō), future form of φέρω (phérō, “I carry”) + ἔφαγον (éphagon, “I ate”).
The wall of the esophagus from the lumen outwards consists of mucosa, submucosa (connective tissue), layers of muscle fibers between layers of fibrous tissue, and an outer layer of connective tissue. The mucosa is a stratified squamous epithelium of around three layers of squamous cells, which contrasts to the single layer of columnar cells of the stomach. The transition between these two types of epithelium is visible as a zig-zag line. Most of the muscle is smooth muscle although striated muscle predominates in its upper third. It has two muscular rings or sphincters in its wall, one at the top and one at the bottom. The lower sphincter helps to prevent reflux of acidic stomach content. The esophagus has a rich blood supply and venous drainage. Its smooth muscle is innervated by involuntary nerves (sympathetic nerves via the sympathetic trunk and parasympathetic nerves via the vagus nerve) and in addition voluntary nerves (lower motor neurons) which are carried in the vagus nerve to innervate its striated muscle.
The esophagus passes through the thoracic cavity and the diaphragm into the stomach.
Document 1:::
The esophagus passes through the thoracic cavity and the diaphragm into the stomach.
The esophagus may be affected by gastric reflux, cancer, prominent dilated blood vessels called varices that can bleed heavily, t
Document 2:::
The gastrointestinal wall of the gastrointestinal tract is made up of four layers of specialised tissue. From the inner cavity of the gut (the lumen) outwards, these are:
Mucosa
Submucosa
Muscular layer
Serosa or adventitia
The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the lumen of the tract and comes into direct contact with digested food (chyme). The mucosa itself is made up of three layers: the epithelium, where most digestive, absorptive and secretory processes occur; the lamina propria, a layer of connective tissue, and the muscularis mucosae, a thin layer of smooth muscle.
The submucosa contains nerves including the submucous plexus (also called Meissner's plexus), blood vessels and elastic fibres with collagen, that stretches with increased capacity but maintains the shape of the intestine.
The muscular layer surrounds the submucosa. It comprises layers of smooth muscle in longitudinal and circular orientation that also helps with continued bowel movements (peristalsis) and the movement of digested material out of and along the gut. In between the two layers of muscle lies the myenteric plexus (also called Auerbach's plexus).
The serosa/adventitia are the final layers. These are made up of loose connective tissue and coated in mucus so as to prevent any friction damage from the intestine rubbing against other tissue. The serosa is present if the tissue is within the peritoneum, and the adventitia if the tissue is retroperitoneal.
Structure
When viewed under the microscope, the gastrointestinal wall has a consistent general form, but with certain parts differing along its course.
Mucosa
The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the cavity (lumen) of the tract and comes into direct contact with digested food (chyme). The mucosa is made up of three layers:
The epithelium is the innermost layer. It is where most digestive, absorptive and secretory processes occur.
The lamina propr
Document 3:::
The inferior mesenteric lymph nodes consist of:
(a) small glands on the branches of the left colic and sigmoid arteries
(b) a group in the sigmoid mesocolon, around the superior hemorrhoidal artery
(c) a pararectal group in contact with the muscular coat of the rectum
Structure
The inferior mesenteric lymph nodes are lymph nodes present throughout the hindgut.
Afferents
The inferior mesenteric lymph nodes drain structures related to the hindgut; they receive lymph from the descending colon, sigmoid colon, and proximal part of the rectum.
Efferents
They drain into the superior mesenteric lymph nodes and ultimately to the preaortic lymph nodes. Lymph nodes surrounding the inferior mesenteric artery drain directly into the preaortic nodes.
Clinical significance
Colorectal cancer may metastasise to the inferior mesenteric lymph nodes. For this reason, the inferior mesenteric artery may be removed in people with lymph node-positive cancer. This has been proposed since at least 1908, by surgeon William Ernest Miles.
Additional images
Document 4:::
The esophageal glands are glands that are part of the digestive system of various animals, including humans.
In humans
Esophageal glands in humans are part of the human digestive system. They are small, compound racemose exocrine glands of the mucous type.
There are two types:
Esophageal glands proper- mucous glands located in the submucosa. They are compound tubulo-alveolar glands. Some serous cells are present. These glands are more numerous in the upper third of the esophagus. They secrete acid mucin for lubrication.
Esophageal cardiac glands- mucous glands located near the cardiac orifice (esophago-gastric junction) in the lamina propria mucosae. They secrete neutral mucin that protects the esophagus from acidic gastric juices. They are simple tubular or branched tubular glands.
There are also mucous glands present at the pharyngo-esophageal junction in the lamina propria mucosae. These are simple tubular or branched tubular glands.
Each opens upon the surface by a long excretory duct.
In monoplacophorans
Oesophageal gland is enlarged in large monoplacophoran species.
In gastropods
Oesophageal gland or oesophageal pouch is a part of the digestive system of some gastropods. Oesophageal gland or pouch is a common feature in so-called basal gastropod clades, including Patelloidea, Vetigastropoda, Cocculiniformia, Neritimorpha and Neomphalina.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What does the esophagus connect to at its bottom end?
A. the stomach
B. the large intestine
C. the larynx
D. the colon
Answer:
|
|
sciq-9953
|
multiple_choice
|
What is the name of the disorder of the arteries in which cholesterol and other materials are deposited on the interior of the arterial wall?
|
[
"arthritis",
"atherosclerosis",
"anemia",
"paralysis"
] |
B
|
Relavent Documents:
Document 0:::
This is a list of pathology mnemonics, categorized and alphabetized. For mnemonics in other medical specialities, see this list of medical mnemonics.
Acute intermittent porphyria: signs and symptoms
5 Ps:
Pain in the abdomen
Polyneuropathy
Psychological abnormalities
Pink urine
Precipitated by drugs (including barbiturates, oral contraceptives, and sulfa drugs)
Acute ischemia: signs [especially limbs]
6 P's:
Pain
Pallor
Pulselessness
Paralysis
Paraesthesia
Perishingly cold
Anemia (normocytic): causes
ABCD:
Acute blood loss
Bone marrow failure
Chronic disease
Destruction (hemolysis)
Anemia causes (simplified)
ANEMIA:
Anemia of chronic disease
No folate or B12
Ethanol
Marrow failure & hemoglobinopathies
Iron deficient
Acute & chronic blood loss
Atherosclerosis risk factors
"You're a SAD BET with these risk factors":
Sex: male
Age: middle-aged, elderly
Diabetes mellitus
BP high: hypertension
Elevated cholesterol
Tobacco
Carcinoid syndrome: components
CARCinoid:
Cutaneous flushing
Asthmatic wheezing
Right sided valvular heart lesions
Cramping and diarrhea
Cushing syndrome
CUSHING:
Central obesity/ Cervical fat pads/ Collagen fiber weakness/ Comedones (acne)
Urinary free cortisol and glucose increase
Striae/ Suppressed immunity
Hypercortisolism/ Hypertension/ Hyperglycemia/ Hirsutism
Iatrogenic (Increased administration of corticosteroids)
Noniatrogenic (Neoplasms)
Glucose intolerance/Growth retardation
Diabetic ketoacidosis: I vs. II
KetONE bodies are seen in type ONE diabetes.
Gallstones: risk factors
5 F's:
Fat
Female
Fair (gallstones more common in Caucasians)
Fertile (premenopausal- increased estrogen is thought to increase cholesterol levels in bile and decrease gallbladder contractions)
Forty or above (age)
Hepatomegaly: 3 common causes, 3 rarer causes
Common are 3 C's:
Cirrhosis
Carcinoma
Cardiac failure
Rarer are 3 C's:
Cholestasis
Cysts
Cellular infiltration
Hyperkalemia (signs and symptoms)
MURDER
Mus
Document 1:::
The coronary arteries are the arterial blood vessels of coronary circulation, which transport oxygenated blood to the heart muscle. The heart requires a continuous supply of oxygen to function and survive, much like any other tissue or organ of the body.
The coronary arteries wrap around the entire heart. The two main branches are the left coronary artery and right coronary artery. The arteries can additionally be categorized based on the area of the heart for which they provide circulation. These categories are called epicardial (above the epicardium, or the outermost tissue of the heart) and microvascular (close to the endocardium, or the innermost tissue of the heart).
Reduced function of the coronary arteries can lead to decreased flow of oxygen and nutrients to the heart. Not only does this affect supply to the heart muscle itself, but it also can affect the ability of the heart to pump blood throughout the body. Therefore, any disorder or disease of the coronary arteries can have a serious impact on health, possibly leading to angina, a heart attack, and even death.
Structure
The coronary arteries are mainly composed of the left and right coronary arteries, both of which give off several branches, as shown in the 'coronary artery flow' figure.
Aorta
Left coronary artery
Left anterior descending artery
Left circumflex artery
Posterior descending artery
Ramus or intermediate artery
Right coronary artery
Right marginal artery
Posterior descending artery
The left coronary artery arises from the aorta within the left cusp of the aortic valve and feeds blood to the left side of the heart. It branches into two arteries, the left anterior descending and the left circumflex. The left anterior descending artery perfuses the interventricular septum and anterior wall of the left ventricle. The left circumflex artery perfuses the left ventricular free wall. In approximately 33% of individuals, the left coronary artery gives rise to the posterior descending artery wh
Document 2:::
Animal models of stroke are procedures undertaken in animals (including non-human primates) intending to provoke pathophysiological states that are similar to those of human stroke to study basic processes or potential therapeutic interventions in this disease. Aim is the extension of the knowledge on and/or the improvement of medical treatment of human stroke.
Classification by cause
The term stroke subsumes cerebrovascular disorders of different etiologies, featuring diverse pathophysiological processes. Thus, for each stroke etiology one or more animal models have been developed:
Animal models of ischemic stroke
Animal models of intracerebral hemorrhage
Animal models of subarachnoid hemorrhage and cerebral vasospasm
Animal models of sinus vein thrombosis
Transferability of animal results to human stroke
Although multiple therapies have proven to be effective in animals, only very few have done so in human patients. Reasons for this are (Dirnagl 1999):
Side effects: Many highly potent neuroprotective drugs display side effects which inhibit the application of effective doses in patients (e.g. MK-801)
Delay: Whereas in animal studies the time of incidence onset is known and therapy can be started early, patients often present with delay and unclear time of symptom onset
“Age and associated illnesses: Most experimental studies are conducted on healthy, young animals under rigorously controlled laboratory conditions. However, the typical stroke patient is elderly with numerous risk factors and complicating diseases (for example, diabetes, hypertension and heart diseases)” (Dirnagl 1999)
Morphological and functional differences between the brain of humans and animals: Although the basic mechanisms of stroke are identical between humans and other mammals, there are differences.
Evaluation of efficacy: In animals, treatment effects are mostly measured as a reduction of lesion volume, whereas in human studies functional evaluation (which reflects the severity of disabi
Document 3:::
Cervical artery dissection is dissection of one of the layers that compose the carotid or vertebral artery in the neck (cervix). These dissections include:
Carotid artery dissection, a separation of the layers of the artery wall supplying oxygen-bearing blood to the head and brain.
Vertebral artery dissection, a flap-like tear of the inner lining of the vertebral artery that supply blood to the brain and spinal cord.
Cervical dissections can be broadly classified as either "spontaneous" or traumatic. Cervical artery dissections are a significant cause of strokes in young adults.
A dissection typically results in a tear in one of the layers of the arterial wall. The result of this tear is often an intramural hematoma and/or aneurysmal dilation in the arteries leading to the intracranial area.
Signs and symptoms of a cervical artery dissection are often non-specific and can be localized or generalized. There is no specific treatment, although most patients are either given an anti-platelet or anti-coagulation agent to prevent or treat strokes.
Epidemiology
Cervical artery dissection has been noted to be a common cause of young adult strokes, with some sources indicating a prevalence of up to 20% in this young adult population with annual incidence rates between 2.6 and 2.9 per 100,000, although these incidences may be misleading with true incidences being higher because clinical presentations can vary, many being minor or self-limited, and thus these dissections can go undiagnosed. In population-based studies, the peak age of presentation is approximately 45 years with a slight gender predisposition towards males (53-57%).
Cervical arteries, as mentioned above, consist of two pairs of arteries: vertebral and carotid. As such, cervical artery dissection can be further categorized based on the involvement of artery: carotid vs. vertebral, and the location of the dissection: intracranial vs. extracranial.
Causes
The two main causes of cervical artery dissection can be broad
Document 4:::
Watershed area is the medical term referring to regions of the body, that receive dual blood supply from the most distal branches of two large arteries, such as the splenic flexure of the large intestine. The term refers metaphorically to a geological watershed, or drainage divide, which separates adjacent drainage basins.
During times of blockage of one of the arteries that supply the watershed area, such as in atherosclerosis, these regions are spared from ischemia by virtue of their dual supply. However, during times of systemic hypoperfusion, such as in disseminated intravascular coagulation or heart failure, these regions are particularly vulnerable to ischemia because they are supplied by the most distal branches of their arteries, and thus the least likely to receive sufficient blood.
Watershed areas are found in the brain, where areas are perfused by both the anterior and middle cerebral arteries, and in the intestines, where areas are perfused by both the superior and inferior mesenteric arteries (i.e., splenic flexure). Additionally, the sigmoid colon and rectum form a watershed zone with blood supply from inferior mesenteric, pudendal and iliac circulations. Hypoperfusion in watershed areas can lead to mural and mucosal infarction in the case of ischemic bowel disease. When watershed stroke occurs in the brain, it produces unique focal neurologic symptoms that aid clinicians in diagnosis and localization. For example, a cerebral watershed area is situated in the dorsal prefrontal cortex; when it is affected on the left side, this can lead to transcortical motor aphasia.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the name of the disorder of the arteries in which cholesterol and other materials are deposited on the interior of the arterial wall?
A. arthritis
B. atherosclerosis
C. anemia
D. paralysis
Answer:
|
|
sciq-2180
|
multiple_choice
|
What is the type of reproduction where part of the parent plant is used to generate a new plant?
|
[
"byproduct reproduction",
"coaxed reproduction",
"asexual reproduction",
"sexual reproduction"
] |
C
|
Relavent Documents:
Document 0:::
Plant reproduction is the production of new offspring in plants, which can be accomplished by sexual or asexual reproduction. Sexual reproduction produces offspring by the fusion of gametes, resulting in offspring genetically different from either parent. Asexual reproduction produces new individuals without the fusion of gametes, resulting in clonal plants that are genetically identical to the parent plant and each other, unless mutations occur.
Asexual reproduction
Asexual reproduction does not involve the production and fusion of male and female gametes. Asexual reproduction may occur through budding, fragmentation, spore formation, regeneration and vegetative propagation.
Asexual reproduction is a type of reproduction where the offspring comes from one parent only, thus inheriting the characteristics of the parent. Asexual reproduction in plants occurs in two fundamental forms, vegetative reproduction and agamospermy. Vegetative reproduction involves a vegetative piece of the original plant producing new individuals by budding, tillering, etc. and is distinguished from apomixis, which is a replacement of sexual reproduction, and in some cases involves seeds. Apomixis occurs in many plant species such as dandelions (Taraxacum species) and also in some non-plant organisms. For apomixis and similar processes in non-plant organisms, see parthenogenesis.
Natural vegetative reproduction is a process mostly found in perennial plants, and typically involves structural modifications of the stem or roots and in a few species leaves. Most plant species that employ vegetative reproduction do so as a means to perennialize the plants, allowing them to survive from one season to the next and often facilitating their expansion in size. A plant that persists in a location through vegetative reproduction of individuals gives rise to a clonal colony. A single ramet, or apparent individual, of a clonal colony is genetically identical to all others in the same colony. The dist
Document 1:::
Plant reproductive morphology is the study of the physical form and structure (the morphology) of those parts of plants directly or indirectly concerned with sexual reproduction.
Among all living organisms, flowers, which are the reproductive structures of angiosperms, are the most varied physically and show a correspondingly great diversity in methods of reproduction. Plants that are not flowering plants (green algae, mosses, liverworts, hornworts, ferns and gymnosperms such as conifers) also have complex interplays between morphological adaptation and environmental factors in their sexual reproduction. The breeding system, or how the sperm from one plant fertilizes the ovum of another, depends on the reproductive morphology, and is the single most important determinant of the genetic structure of nonclonal plant populations. Christian Konrad Sprengel (1793) studied the reproduction of flowering plants and for the first time it was understood that the pollination process involved both biotic and abiotic interactions. Charles Darwin's theories of natural selection utilized this work to build his theory of evolution, which includes analysis of the coevolution of flowers and their insect pollinators.
Use of sexual terminology
Plants have complex lifecycles involving alternation of generations. One generation, the sporophyte, gives rise to the next generation, the gametophyte asexually via spores. Spores may be identical isospores or come in different sizes (microspores and megaspores), but strictly speaking, spores and sporophytes are neither male nor female because they do not produce gametes. The alternate generation, the gametophyte, produces gametes, eggs and/or sperm. A gametophyte can be monoicous (bisexual), producing both eggs and sperm, or dioicous (unisexual), either female (producing eggs) or male (producing sperm).
In the bryophytes (liverworts, mosses, and hornworts), the sexual gametophyte is the dominant generation. In ferns and seed plants (inc
Document 2:::
Vegetative reproduction (also known as vegetative propagation, vegetative multiplication or cloning) is any form of asexual reproduction occurring in plants in which a new plant grows from a fragment or cutting of the parent plant or specialized reproductive structures, which are sometimes called vegetative propagules.
Many plants naturally reproduce this way, but it can also be induced artificially. Horticulturists have developed asexual propagation techniques that use vegetative propagules to replicate plants. Success rates and difficulty of propagation vary greatly. Monocotyledons typically lack a vascular cambium, making them more challenging to propagate.
Background
Plant propagation is the process of plant reproduction of a species or cultivar, and it can be sexual or asexual. It can happen through the use of vegetative parts of the plants, such as leaves, stems, and roots to produce new plants or through growth from specialized vegetative plant parts.
While many plants reproduce by vegetative reproduction, they rarely exclusively use that method to reproduce. Vegetative reproduction is not evolutionarily advantageous; it does not allow for genetic diversity and could lead plants to accumulate deleterious mutations. Vegetative reproduction is favored when it allows plants to produce more offspring per unit of resource than reproduction through seed production. In general, juveniles of a plant are easier to propagate vegetatively.
Although most plants normally reproduce sexually, many can reproduce vegetatively, or can be induced to do so via hormonal treatments. This is because meristematic cells capable of cellular differentiation are present in many plant tissues.
Vegetative propagation is usually considered a cloning method. However, root cuttings of thornless blackberries (Rubus fruticosus) will revert to thorny type because the adventitious shoot develops from a cell that is genetically thorny. Thornless blackberry is a chimera, with the epidermal
Document 3:::
Microgametogenesis is the process in plant reproduction where a microgametophyte develops in a pollen grain to the three-celled stage of its development. In flowering plants it occurs with a microspore mother cell inside the anther of the plant.
When the microgametophyte is first formed inside the pollen grain four sets of fertile cells called sporogenous cells are apparent. These cells are surrounded by a wall of sterile cells called the tapetum, which supplies food to the cell and eventually becomes the cell wall for the pollen grain. These sets of sporogenous cells eventually develop into diploid microspore mother cells. These microspore mother cells, also called microsporocytes, then undergo meiosis and become four microspore haploid cells. These new microspore cells then undergo mitosis and form a tube cell and a generative cell. The generative cell then undergoes mitosis one more time to form two male gametes, also called sperm.
See also
Gametogenesis
Document 4:::
Sterile males are deliberately produced by humans in several species for several unrelated purposes:
Sterile insect technique for insect pest control
Cytoplasmic male sterility for plant breeding
Sterile male plant for plant breeding
Humans and other species
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the type of reproduction where part of the parent plant is used to generate a new plant?
A. byproduct reproduction
B. coaxed reproduction
C. asexual reproduction
D. sexual reproduction
Answer:
|
|
sciq-1240
|
multiple_choice
|
Aging occurs as cells lose their ability to do what?
|
[
"divide",
"join",
"build",
"grow"
] |
A
|
Relavent Documents:
Document 0:::
The stem cell theory of aging postulates that the aging process is the result of the inability of various types of stem cells to continue to replenish the tissues of an organism with functional differentiated cells capable of maintaining that tissue's (or organ's) original function. Damage and error accumulation in genetic material is always a problem for systems regardless of age. The number of stem cells in young people is very much higher than in older people, creating a better and more efficient replacement mechanism in the young compared with the old. In other words, aging is not a matter of increased damage, but a matter of failure to replace it due to a decreased number of stem cells. Stem cells decrease in number and tend to lose the ability to differentiate into progeny of the lymphoid and myeloid lineages.
Maintaining the dynamic balance of stem cell pools requires several conditions. Balancing proliferation and quiescence along with homing (See niche) and self-renewal of hematopoietic stem cells are favoring elements of stem cell pool maintenance while differentiation, mobilization and senescence are detrimental elements. These detrimental effects will eventually cause apoptosis.
There are also several challenges when it comes to therapeutic use of stem cells and their ability to replenish organs and tissues. First, different cells may have different lifespans even though they originate from the same stem cells (See T-cells and erythrocytes), meaning that aging can occur differently in cells that have longer lifespans as opposed to the ones with shorter lifespans. Also, continual effort to replace the somatic cells may cause exhaustion of stem cells.
Research
Some of the proponents of this theory have been Norman E. Sharpless, Ronald A. DePinho, Huber Warner, Alessandro Testori and others. Warner came to this conclusion after analyzing a human case of Hutchinson–Gilford syndrome and mouse models of accelerated aging.
Stem cells wil
Document 1:::
Adult stem cells are undifferentiated cells, found throughout the body after development, that multiply by cell division to replenish dying cells and regenerate damaged tissues. Also known as somatic stem cells (from Greek σωματικóς, meaning of the body), they can be found in juvenile, adult animals, and humans, unlike embryonic stem cells.
Scientific interest in adult stem cells is centered around two main characteristics. The first of which is their ability to divide or self-renew indefinitely, and the second their ability to generate all the cell types of the organ from which they originate, potentially regenerating the entire organ from a few cells. Unlike embryonic stem cells, the use of human adult stem cells in research and therapy is not considered to be controversial, as they are derived from adult tissue samples rather than human embryos designated for scientific research. The main functions of adult stem cells are to replace cells that are at risk of possibly dying as a result of disease or injury and to maintain a state of homeostasis within the cell. There are three main methods to determine if the adult stem cell is capable of becoming a specialized cell. The adult stem cell can be labeled in vivo and tracked, it can be isolated and then transplanted back into the organism, and it can be isolated in vivo and manipulated with growth hormones. They have mainly been studied in humans and model organisms such as mice and rats.
Structure
Defining properties
A stem cell possesses two properties:
Self-renewal is the ability to go through numerous cycles of cell division while still maintaining its undifferentiated state. Stem cells can replicate several times and can result in the formation of two stem cells, one stem cell more differentiated than the other, or two differentiated cells.
Multipotency or multidifferentiative potential is the ability to generate progeny of several distinct cell types, (for example glial cells and neurons) as opposed to u
Document 2:::
A progenitor cell is a biological cell that can differentiate into a specific cell type. Stem cells and progenitor cells have this ability in common. However, stem cells are less specified than progenitor cells. Progenitor cells can only differentiate into their "target" cell type. The most important difference between stem cells and progenitor cells is that stem cells can replicate indefinitely, whereas progenitor cells can divide only a limited number of times. Controversy about the exact definition remains and the concept is still evolving.
The terms "progenitor cell" and "stem cell" are sometimes equated.
Properties
Most progenitors are identified as oligopotent. In this point of view, they can compare to adult stem cells, but progenitors are said to be in a further stage of cell differentiation. They are "midway" between stem cells and fully differentiated cells. The kind of potency they have depends on the type of their "parent" stem cell and also on their niche. Some research found that progenitor cells were mobile and that these progenitor cells could move through the body and migrate towards the tissue where they are needed. Many properties are shared by adult stem cells and progenitor cells.
Research
Progenitor cells have become a hub for research on a few different fronts. Current research on progenitor cells focuses on two different applications: regenerative medicine and cancer biology. Research on regenerative medicine has focused on progenitor cells, and stem cells, because their cellular senescence contributes largely to the process of aging. Research on cancer biology focuses on the impact of progenitor cells on cancer responses, and the way that these cells tie into the immune response.
The natural aging of cells, called their cellular senescence, is one of the main contributors to aging on an organismal level. There are a few different ideas to the cause behind why aging happens on a cellular level. Telomere length has been shown to positive
Document 3:::
The network theory of aging supports the idea that multiple connected processes contribute to the biology of aging. Kirkwood and Kowald helped to establish the first model of this kind by connecting theories and predicting specific mechanisms. In departure of investigating a single mechanistic cause or single molecules that lead to senescence, the network theory of aging takes a systems biology view to integrate theories in conjunction with computational models and quantitative data related to the biology of aging.
Implications
The free radical theory, describing the reactions of free radicals, antioxidants and proteolytic enzymes, was computationally connected with the protein error theory to describe the error propagation loops within the cellular translation machinery.
The study of gene networks revealed proteins associated with aging to have significantly higher connectivity than expected by chance.
Investigation of aging on multiple levels of biological organization contributed to a physiome view, from genes to organisms, predicting lifespans based on scaling laws, fractal supply networks and metabolism as well as aging related molecular networks.
The network theory of aging has encouraged the development of data bases related to human aging. Proteomic network maps suggest a relationship between the genetics of development and the genetics of aging.
Hierarchical Elements
The network theory of aging provides a deeper look at the damage and repair processes at the cellular level and the ever changing balance between those processes. To fully understand the network theory as its applied to aging you must look at the different hierarchical elements of the theory as it pertains to aging.
Elementary particles of quantum systems- The aging process is described as an equation where a structure in an unbalanced state begins to change and that is seen primarily in the actions of quantum particles.
Monomers of biological macro-molecules- After a while, differen
Document 4:::
Stem Cells is a peer-reviewed scientific journal of cell biology. It was established as The International Journal of Cell Cloning in 1983, acquiring its current title in 1993.
The journal is published by AlphaMed Press, and is currently edited by Jan Nolta (University of California). Stem Cells currently has an impact factor of 6.277.
Abstracting and indexing
The journal is abstracted and indexed in the following bibliographic databases:
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Aging occurs as cells lose their ability to do what?
A. divide
B. join
C. build
D. grow
Answer:
|
|
scienceQA-2533
|
multiple_choice
|
What do these two changes have in common?
a piece of pear turning brown
baking cookies
|
[
"Both are chemical changes.",
"Both are caused by cooling.",
"Both are only physical changes.",
"Both are caused by heating."
] |
A
|
Step 1: Think about each change.
A piece of a pear turning brown is a chemical change. The substances in the pear react with oxygen in the air and turn into a different type of matter.
If you scrape off the brown part of the pear, the inside will still be white. The inside hasn't touched the air. So the chemical change hasn't happened to that part of the pear.
Baking cookies is a chemical change. The type of matter in the cookie dough changes when it is baked. The cookie dough turns into cookies!
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are chemical changes. They are not physical changes.
Both are chemical changes.
Both changes are chemical changes. The type of matter before and after each change is different.
Both are caused by heating.
Baking is caused by heating. But a piece of pear turning brown is not.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relavent Documents:
Document 0:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria.
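One common way to turn such pairwise judgements into a scale is to fit a Bradley–Terry-type model. The Python sketch below illustrates only that underlying idea under simplifying assumptions (randomly chosen rather than adaptively chosen pairs, simulated judges, invented parameter values); it is not the algorithm of any particular ACJ system:

import numpy as np

# Toy Bradley-Terry fit: recover a quality scale for scripts from pairwise
# "which is better" judgements. Real ACJ systems also choose the next pair adaptively.
rng = np.random.default_rng(1)
n_scripts, n_judgements = 20, 2000
true_quality = rng.normal(0.0, 1.0, n_scripts)

pairs = rng.integers(0, n_scripts, size=(n_judgements, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]            # drop self-comparisons
p_first = 1.0 / (1.0 + np.exp(-(true_quality[pairs[:, 0]] - true_quality[pairs[:, 1]])))
first_wins = rng.random(len(pairs)) < p_first        # simulated judge decisions

wins = np.zeros((n_scripts, n_scripts))              # wins[i, j] = times i beat j
for (i, j), w in zip(pairs, first_wins):
    if w:
        wins[i, j] += 1
    else:
        wins[j, i] += 1

theta = np.zeros(n_scripts)                          # estimated quality parameters
for _ in range(500):                                 # gradient ascent on the log-likelihood
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - theta[None, :])))
    grad = (wins - (wins + wins.T) * p).sum(axis=1)
    theta += 0.01 * grad
theta -= theta.mean()                                # fix the arbitrary origin of the scale

print("correlation with true quality:", np.corrcoef(true_quality, theta)[0, 1])

The recovered scale correlates strongly with the simulated "true" quality, which is the sense in which pairwise judgements alone can yield a reliable scaled distribution without reference to marking criteria.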
Introduction
Traditional exam script marking began in Cambridge in 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.)
Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental.
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel
Document 3:::
In cooking, proofing (also called proving) is a step in the preparation of yeast bread and other baked goods in which the dough is allowed to rest and rise a final time before baking. During this rest period, yeast ferments the dough and produces gases, thereby leavening the dough.
In contrast, proofing or blooming yeast (as opposed to proofing the dough) may refer to the process of first suspending yeast in warm water, a necessary hydration step when baking with active dry yeast. Proofing can also refer to the process of testing the viability of dry yeast by suspending it in warm water with carbohydrates (sugars). If the yeast is still alive, it will feed on the sugar and produce a visible layer of foam on the surface of the water mixture.
Fermentation rest periods are not always explicitly named, and can appear in recipes as "Allow dough to rise." When they are named, terms include "bulk fermentation", "first rise", "second rise", "final proof" and "shaped proof".
Dough processes
The process of making yeast-leavened bread involves a series of alternating work and rest periods. Work periods occur when the dough is manipulated by the baker. Some work periods are called mixing, kneading, and folding, as well as division, shaping, and panning. Work periods are typically followed by rest periods, which occur when dough is allowed to sit undisturbed. Particular rest periods include, but are not limited to, autolyse, bulk fermentation and proofing. Proofing, also sometimes called final fermentation, is the specific term for allowing dough to rise after it has been shaped and before it is baked.
Some breads begin mixing with an autolyse. This refers to a period of rest after the initial mixing of flour and water, a rest period that occurs sequentially before the addition of yeast, salt and other ingredients. This rest period allows for better absorption of water and helps the gluten and starches to align. The autolyse is credited to Raymond Calvel, who recommende
Document 4:::
In physics, a dynamical system is said to be mixing if the phase space of the system becomes strongly intertwined, according to at least one of several mathematical definitions. For example, a measure-preserving transformation T is said to be strong mixing if

lim_{n→∞} μ(T^{-n}A ∩ B) = μ(A) μ(B)

whenever A and B are any measurable sets and μ is the associated measure. Other definitions are possible, including weak mixing and topological mixing.
The mathematical definition of mixing is meant to capture the notion of physical mixing. A canonical example is the Cuba libre: suppose one is adding rum (the set A) to a glass of cola. After stirring the glass, the bottom half of the glass (the set B) will contain rum, in the same proportion as anywhere else in the glass. The mixing is uniform: no matter which region B one looks at, some of A will be in that region. A far more detailed, but still informal description of mixing can be found in the article on mixing (mathematics).
Every mixing transformation is ergodic, but there are ergodic transformations which are not mixing.
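A small numerical illustration of the definition, assuming the doubling map T(x) = 2x mod 1 on [0, 1) with Lebesgue measure (a standard example of a strongly mixing transformation); the intervals and sample size below are arbitrary choices:

```python
import random

def iterate_doubling(x, n):
    """Apply the doubling map T(x) = 2x mod 1 to x, n times."""
    for _ in range(n):
        x = (2.0 * x) % 1.0
    return x

def estimate_intersection(A, B, n, samples=200_000):
    """Monte Carlo estimate of mu(T^-n A intersected with B):
    the fraction of uniform points x lying in B whose n-th iterate lies in A."""
    hits = 0
    for _ in range(samples):
        x = random.random()
        if B[0] <= x < B[1] and A[0] <= iterate_doubling(x, n) < A[1]:
            hits += 1
    return hits / samples

A, B = (0.2, 0.5), (0.0, 0.5)   # mu(A) = 0.3, mu(B) = 0.5
target = 0.3 * 0.5              # mu(A) * mu(B) = 0.15
for n in (0, 1, 2, 5, 10):
    print(n, round(estimate_intersection(A, B, n), 3), "target:", target)
```

For n = 0 the estimate is simply mu(A ∩ B) = 0.3; as n grows it settles near 0.15, which is the behaviour the strong-mixing condition demands. (Floating-point precision limits how large n can usefully be taken with this particular map.)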
Physical mixing
The mixing of gases or liquids is a complex physical process, governed by a convective diffusion equation that may involve non-Fickian diffusion as in spinodal decomposition. The convective portion of the governing equation contains fluid motion terms that are governed by the Navier–Stokes equations. When fluid properties such as viscosity depend on composition, the governing equations may be coupled. There may also be temperature effects. It is not clear that fluid mixing processes are mixing in the mathematical sense.
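For reference, the simplest form such a governing equation takes, assuming Fickian diffusion with a constant diffusivity D and a prescribed velocity field u (the text above notes that real systems may be non-Fickian and coupled), is:

```latex
\[
  \frac{\partial c}{\partial t} + \mathbf{u}\cdot\nabla c = D\,\nabla^{2} c
\]
```

Here c is the local composition (e.g. rum concentration) and u obeys the Navier–Stokes equations.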
Small rigid objects (such as rocks) are sometimes mixed in a rotating drum or tumbler. The 1969 Selective Service draft lottery was carried out by mixing plastic capsules which contained a slip of paper (marked with a day of the year).
See also
Miscibility
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
a piece of pear turning brown
baking cookies
A. Both are chemical changes.
B. Both are caused by cooling.
C. Both are only physical changes.
D. Both are caused by heating.
Answer:
|
sciq-9077
|
multiple_choice
|
What transmits nerve impulses to other cells?
|
[
"muscles",
"neurons",
"ions",
"fats"
] |
B
|
Relavent Documents:
Document 0:::
In neuroscience, nerve conduction velocity (CV) is the speed at which an electrochemical impulse propagates down a neural pathway. Conduction velocities are affected by a wide array of factors, which include age, sex, and various medical conditions. Studies allow for better diagnoses of various neuropathies, especially demyelinating diseases as these conditions result in reduced or non-existent conduction velocities. CV is an important aspect of nerve conduction studies.
Normal conduction velocities
Ultimately, conduction velocities are specific to each individual and depend largely on an axon's diameter and the degree to which that axon is myelinated, but the majority of 'normal' individuals fall within defined ranges.
Nerve impulses are extremely slow compared to the speed of electricity, where the electric field can propagate with a speed on the order of 50–99% of the speed of light; however, they are very fast compared to the speed of blood flow, with some myelinated neurons conducting at speeds up to 120 m/s (432 km/h or 275 mph).
Different sensory receptors are innervated by different types of nerve fibers. Proprioceptors are innervated by type Ia, Ib and II sensory fibers, mechanoreceptors by type II and III sensory fibers, and nociceptors and thermoreceptors by type III and IV sensory fibers.
Normal impulses in peripheral nerves of the legs travel at 40–45 m/s, and those in peripheral nerves of the arms at 50–65 m/s.
Largely generalized, normal conduction velocities for any given nerve will be in the range of 50–60 m/s.
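As an illustration of how such numbers are obtained in a motor conduction study, velocity is the distance between two stimulation sites divided by the difference in onset latencies; the values below are hypothetical and chosen only to land in the normal range quoted above.

```python
def conduction_velocity(distance_mm, proximal_latency_ms, distal_latency_ms):
    """Motor nerve conduction velocity from a two-site stimulation study.
    distance_mm: distance between the proximal and distal stimulation sites.
    Latencies are onset latencies of the evoked muscle response at each site.
    mm/ms is numerically equal to m/s."""
    dt = proximal_latency_ms - distal_latency_ms
    if dt <= 0:
        raise ValueError("proximal latency must exceed distal latency")
    return distance_mm / dt

# Hypothetical example: 200 mm between sites, 7.2 ms and 3.4 ms latencies.
print(round(conduction_velocity(200, 7.2, 3.4), 1), "m/s")  # ~52.6 m/s
```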
Testing methods
Nerve conduction studies
Nerve conduction velocity is just one of many measurements commonly made during a nerve conduction study (NCS). The purpose of these studies is to determine whether nerve damage is present and how severe that damage may be.
Nerve conduction studies are performed as follows:
Two electrodes are attached to the subject's skin over the nerve being tested.
Electrical impulses are sent through one elec
Document 1:::
Non-spiking neurons are neurons that are located in the central and peripheral nervous systems and function as intermediary relays for sensory-motor neurons. They do not exhibit the characteristic spiking behavior of action potential generating neurons.
Non-spiking neural networks are integrated with spiking neural networks to have a synergistic effect in being able to stimulate some sensory or motor response while also being able to modulate the response.
Discovery
Animal models
There are an abundance of neurons that propagate signals via action potentials and the mechanics of this particular kind of transmission is well understood. Spiking neurons exhibit action potentials as a result of a neuron characteristic known as membrane potential. Through studying these complex spiking networks in animals, a neuron that did not exhibit characteristic spiking behavior was discovered. These neurons use a graded potential to transmit data as they lack the membrane potential that spiking neurons possess. This method of transmission has a huge effect on the fidelity, strength, and lifetime of the signal. Non-spiking neurons were identified as a special kind of interneuron and function as an intermediary point of process for sensory-motor systems. Animals have become substantial models for understanding more about non-spiking neural networks and the role they play in an animal’s ability to process information and its overall function. Animal models indicate that the interneurons modulate directional and posture coordinating behaviors.
Crustaceans and arthropods such as the crawfish have created many opportunities to learn about the modulatory role that these neurons have in addition to their potential to be modulated regardless of their lack of exhibiting spiking behavior. Most of the known information about nonspiking neurons is derived from animal models. Studies focus on neuromuscular junctions and modulation of abdominal motor cells. Modulatory interneurons are neurons
Document 2:::
A motor nerve is a nerve that transmits motor signals from the central nervous system (CNS) to the muscles of the body. This is different from the motor neuron, which includes a cell body and branching of dendrites, while the nerve is made up of a bundle of axons. Motor nerves act as efferent nerves which carry information out from the CNS to muscles, as opposed to afferent nerves (also called sensory nerves), which transfer signals from sensory receptors in the periphery to the CNS. Efferent nerves can also connect to glands or other organs/tissues instead of muscles (and so motor nerves are not equivalent to efferent nerves). In addition, there are nerves that serve as both sensory and motor nerves called mixed nerves.
Structure and function
Motor nerve fibers transduce signals from the CNS to peripheral neurons of proximal muscle tissue. Motor nerve axon terminals innervate skeletal and smooth muscle, as they are heavily involved in muscle control. Motor nerves tend to be rich in acetylcholine vesicles because the motor nerve is a bundle of motor nerve axons that deliver motor signals for movement and motor control. Calcium vesicles reside in the axon terminals of the motor nerve bundles. The high calcium concentration outside of presynaptic motor nerves increases the size of end-plate potentials (EPPs).
Protective tissues
Within motor nerves, each axon is wrapped by the endoneurium, which is a layer of connective tissue that surrounds the myelin sheath. Bundles of axons are called fascicles, which are wrapped in perineurium. All of the fascicles wrapped in the perineurium are wound together and wrapped by a final layer of connective tissue known as the epineurium. These protective tissues defend nerves from injury, pathogens and help to maintain nerve function. Layers of connective tissue maintain the rate at which nerves conduct action potentials.
Spinal cord exit
Most motor pathways originate in the motor cortex of the brain. Signals run down th
Document 3:::
Microelectrode arrays (MEAs) (also referred to as multielectrode arrays) are devices that contain multiple (tens to thousands) microelectrodes through which neural signals are obtained or delivered, essentially serving as neural interfaces that connect neurons to electronic circuitry. There are two general classes of MEAs: implantable MEAs, used in vivo, and non-implantable MEAs, used in vitro.
Theory
Neurons and muscle cells create ion currents through their membranes when excited, causing a change in voltage between the inside and the outside of the cell. When recording, the electrodes on an MEA transduce the change in voltage from the environment carried by ions into currents carried by electrons (electronic currents). When stimulating, electrodes transduce electronic currents into ionic currents through the media. This triggers the voltage-gated ion channels on the membranes of the excitable cells, causing the cell to depolarize and trigger an action potential if it is a neuron or a twitch if it is a muscle cell.
The size and shape of a recorded signal depend upon several factors: the nature of the medium in which the cell or cells are located (e.g. the medium's electrical conductivity, capacitance, and homogeneity); the nature of contact between the cells and the MEA electrode (e.g. area of contact and tightness); the nature of the MEA electrode itself (e.g. its geometry, impedance, and noise); the analog signal processing (e.g. the system's gain, bandwidth, and behavior outside of cutoff frequencies); and the data sampling properties (e.g. sampling rate and digital signal processing). For the recording of a single cell that partially covers a planar electrode, the voltage at the contact pad is approximately equal to the voltage of the overlapping region of the cell and electrode multiplied by the ratio of the surface area of the overlapping region to the area of the entire electrode, or:

V_pad ≈ V_overlap × (A_overlap / A_electrode)
assuming the area around an electrode is well-insulated and has a very s
Document 4:::
The medullary command nucleus (MCN), also called the pacemaker nucleus, is a group of nerve cells found in the bodies of weakly electric fish. It controls the function of electrocytes by regulating the frequency of electrical impulses. Signals originating in the MCN are transmitted to electrocytes, where changes in ion concentration cause electrical charges to be generated. The nucleus both sends and receives signals, thereby acting as a regulator and central processor for the electro sensors in the fish's body. Inputs into the MCN originate in the mesencephalic precommand nucleus, thalamic dorsal posterior nucleus, and toral ventroposterior nucleus. All of these nuclei have dense projections into the MCN, with the exception of the Toral Ventroposterior nucleus, which contain only a ventral edge projection.
See also
Electric organ
Electric fish
External links
Electric fish
Fish nervous system
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What transmits nerve impulses to other cells?
A. muscles
B. neurons
C. ions
D. fats
Answer:
|
|
sciq-6943
|
multiple_choice
|
Many ionic compounds with relatively large cations and a 1:1 cation:anion ratio have this structure, which is called the what?
|
[
"boron chloride structure",
"cesium chloride structure",
"hydrocarbon structure",
"analogous structure"
] |
B
|
Relavent Documents:
Document 0:::
The structures of the anhydrous and dihydrated forms have been determined by X-ray crystallography, and the structure of the monohydrate was determined by electron crystallography. The dihydrate (shown in the table above) as well as the
Document 1:::
In chemical nomenclature, the IUPAC nomenclature of organic chemistry is a method of naming organic chemical compounds as recommended by the International Union of Pure and Applied Chemistry (IUPAC). It is published in the Nomenclature of Organic Chemistry (informally called the Blue Book). Ideally, every possible organic compound should have a name from which an unambiguous structural formula can be created. There is also an IUPAC nomenclature of inorganic chemistry.
To avoid long and tedious names in normal communication, the official IUPAC naming recommendations are not always followed in practice, except when it is necessary to give an unambiguous and absolute definition to a compound. IUPAC names can sometimes be simpler than older names, as with ethanol, instead of ethyl alcohol. For relatively simple molecules they can be more easily understood than non-systematic names, which must be learnt or looked up. However, the common or trivial name is often substantially shorter and clearer, and so preferred. These non-systematic names are often derived from an original source of the compound. Also, very long names may be less clear than structural formulas.
Basic principles
In chemistry, a number of prefixes, suffixes and infixes are used to describe the type and position of the functional groups in the compound.
The steps for naming an organic compound are as follows (a toy sketch of the precedence ordering is given after the list):
Identification of the parent hydride (parent hydrocarbon chain). This chain must obey the following rules, in order of precedence:
It should have the maximum number of substituents of the suffix functional group. By suffix, it is meant that the parent functional group should have a suffix, unlike halogen substituents. If more than one functional group is present, the one with highest group precedence should be used.
It should have the maximum number of multiple bonds.
It should have the maximum length.
It should have the maximum number of substituents or branches cited as prefixes
It should have the ma
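A toy sketch of how the precedence ordering above can be applied mechanically: candidate chains are described by a few counts and ranked lexicographically by the rules' priority. The data structure and example chains are hypothetical, and real IUPAC software must also handle the many rules omitted here.

```python
from typing import NamedTuple

class CandidateChain(NamedTuple):
    name: str                 # label for the candidate chain (hypothetical)
    suffix_groups: int        # substituents of the principal (suffix) group
    multiple_bonds: int       # double + triple bonds in the chain
    length: int               # number of carbons in the chain
    prefix_substituents: int  # substituents cited as prefixes

def pick_parent_chain(candidates):
    """Return the candidate ranked highest by the precedence rules above:
    most suffix groups, then most multiple bonds, then longest chain,
    then most prefix substituents."""
    return max(
        candidates,
        key=lambda c: (c.suffix_groups, c.multiple_bonds, c.length, c.prefix_substituents),
    )

# Hypothetical candidate chains for the same molecule:
candidates = [
    CandidateChain("chain A", suffix_groups=1, multiple_bonds=0, length=7, prefix_substituents=2),
    CandidateChain("chain B", suffix_groups=1, multiple_bonds=1, length=6, prefix_substituents=1),
    CandidateChain("chain C", suffix_groups=0, multiple_bonds=2, length=8, prefix_substituents=0),
]
print(pick_parent_chain(candidates).name)  # chain B: ties on suffix groups, wins on multiple bonds
```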
Document 2:::
Metals, and specifically rare-earth elements, form numerous chemical complexes with boron. Their crystal structure and chemical bonding depend strongly on the metal element M and on its atomic ratio to boron. When B/M ratio exceeds 12, boron atoms form B12 icosahedra which are linked into a three-dimensional boron framework, and the metal atoms reside in the voids of this framework. Those icosahedra are basic structural units of most allotropes of boron and boron-rich rare-earth borides. In such borides, metal atoms donate electrons to the boron polyhedra, and thus these compounds are regarded as electron-deficient solids.
The crystal structures of many boron-rich borides can be attributed to certain types including MgAlB14, YB66, REB41Si1.2, B4C and other, more complex types such as RExB12C0.33Si3.0. Some of these formulas, for example B4C, YB66 and MgAlB14, historically reflect idealized structures, whereas the experimentally determined composition is nonstoichiometric and corresponds to fractional indices. Boron-rich borides are usually characterized by large and complex unit cells, which can contain more than 1500 atomic sites and feature extended structures shaped as "tubes" and large modular polyhedra ("superpolyhedra"). Many of those sites have partial occupancy, meaning that the probability of finding them occupied with a certain atom is smaller than one and thus that only some of them are filled with atoms. Scandium is distinguished among the rare-earth elements in that it forms numerous borides with uncommon structure types; this property of scandium is attributed to its relatively small atomic and ionic radii.
Crystals of the specific rare-earth boride YB66 are used as X-ray monochromators for selecting X-rays with certain energies (in the 1–2 keV range) out of synchrotron radiation. Other rare-earth borides may find application as thermoelectric materials, owing to their low thermal conductivity; the latter originates from their complex, "amorphous-l
Document 3:::
[M2+1−xM3+x(OH)2]x+ [(Xn−)x/n · yH2O]x−,
where Xn− is the intercalating anion (or anions).
Most commonly, M2+ = Ca2+, Mg2+, Mn2+, Fe2+, Co2+, Ni2+, Cu2+ or Zn2+, and M3+ is another trivalent cation, possibly of the same element. Fixed-composition phases have been shown to exist over the rang
Document 4:::
Stannide ions,
Some examples of stannide Zintl ions are listed below. Some of them contain 2-centre 2-electron bonds (2c-2e), others are "electron deficient" and bonding sometimes can be described using polyhedral skeletal electron pair theory (Wade's rules) where the number of valence electrons contributed by each tin atom is considered to be 2 (the s electrons do not contribute). There are some examples of silicide and plumbide ions with similar structures, for example tetrahedral , the chain anion (Si2−)n, and .
Sn4− found for example in Mg2Sn.
, tetrahedral with 2c-2e bonds e.g. in CsSn.
, tetrahedral closo-cluster with 10 electrons (2n + 2).
(Sn2−)n zig-zag chain polymeric anion with 2c-2e bonds found for example in BaSn.
closo-
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Many ionic compounds with relatively large cations and a 1:1 cation:anion ratio have this structure, which is called the what?
A. boron chloride structure
B. cesium chloride structure
C. hydrocarbon structure
D. analogous structure
Answer:
|
|
sciq-6739
|
multiple_choice
|
What do sharks use to secrete salt to assist in osmoregulation?
|
[
"ceramic gland",
"rectal gland",
"blood gland",
"bladder gland"
] |
B
|
Relavent Documents:
Document 0:::
An ionocyte (formerly called a chloride cell) is a mitochondrion-rich cell within ionoregulatory organs of animals, such as teleost fish gill, insect Malpighian tubules, crustacean gills, antennal glands and maxillary glands, and copepod Crusalis organs. These cells contribute to the maintenance of optimal osmotic, ionic, and acid-base levels within metazoans. In aquatic invertebrates, ionocytes perform the functions of both ion uptake and ion excretion. In marine teleost fish, by expending energy to power the enzyme Na+/K+-ATPase and in coordination with other protein transporters, ionocytes pump excessive sodium and chloride ions against the concentration gradient into the ocean. Conversely, freshwater teleost ionocytes use this low intracellular environment to attain sodium and chloride ions into the organism, and also against the concentration gradient. In larval fishes with underdeveloped / developing gills, ionocytes can be found on the skin and fins.
Mechanism of action
Marine teleost fishes consume large quantities of seawater to reduce osmotic dehydration. The excess of ions absorbed from seawater is pumped out of the teleost fishes via the ionocytes. These cells use active transport on the basolateral (internal) surface to accumulate chloride, which then diffuses out of the apical (external) surface and into the surrounding environment. Such mitochondrion-rich cells are found in both the gill lamellae and filaments of teleost fish. Using a similar mechanism, freshwater teleost fish use these cells to take in salt from their dilute environment to prevent hyponatremia from water diffusing into the fish. In the context of freshwater fish, ionocytes are often referred to as "mitochondria-rich cells", to emphasize their high density of mitochondria.
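The energy expenditure mentioned in this excerpt comes from the Na+/K+-ATPase; its textbook stoichiometry (a standard value, included here for orientation rather than taken from the excerpt) is:

```latex
\[
  3\,\mathrm{Na^{+}_{in}} + 2\,\mathrm{K^{+}_{out}} + \mathrm{ATP}
  \;\longrightarrow\;
  3\,\mathrm{Na^{+}_{out}} + 2\,\mathrm{K^{+}_{in}} + \mathrm{ADP} + \mathrm{P_i}
\]
```

This pump maintains the low intracellular sodium that the apical and basolateral transporters of the ionocyte exploit.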
See also
Pulmonary ionocyte - a rare type of specialised cell that may regulate mucus viscosity in humans
Document 1:::
Osmoconformers are marine organisms that maintain an internal environment which is isotonic to their external environment. This means that the osmotic pressure of the organism's cells is equal to the osmotic pressure of their surrounding environment. By minimizing the osmotic gradient, this subsequently minimizes the net influx and efflux of water into and out of cells. Even though osmoconformers have an internal environment that is isosmotic to their external environment, the types of ions in the two environments differ greatly in order to allow critical biological functions to occur.
An advantage of osmoconformation is that such organisms don’t need to expend as much energy as osmoregulators in order to regulate ion gradients. However, to ensure that the correct types of ions are in the desired location, a small amount of energy is expended on ion transport. A disadvantage to osmoconformation is that the organisms are subject to changes in the osmolarity of their environment.
Examples
Invertebrates
Most osmoconformers are marine invertebrates such as echinoderms (such as starfish), mussels, marine crabs, lobsters, jellyfish, ascidians (sea squirts - primitive chordates), and scallops. Some insects are also osmoconformers. Some osmoconformers, such as echinoderms, are stenohaline, which means they can only survive in a limited range of external osmolarities. The survival of such organisms is thus contingent on their external osmotic environment remaining relatively constant. On the other hand, some osmoconformers are classified as euryhaline, which means they can survive in a broad range of external osmolarities. Mussels are a prime example of a euryhaline osmoconformer. Mussels have adapted to survive in a broad range of external salinities due to their ability to close their shells which allows them to seclude themselves from unfavorable external environments.
Craniates
There are a couple of examples of osmoconformers that are craniates such as ha
Document 2:::
A shark repellent is any method of driving sharks away from an area. Shark repellents are a category of animal repellents. Shark repellent technologies include magnetic shark repellent, electropositive shark repellents, electrical repellents, and semiochemicals. Shark repellents can be used to protect people from sharks by driving the sharks away from areas where they are likely to kill human beings. In other applications, they can be used to keep sharks away from areas they may be a danger to themselves due to human activity. In this case, the shark repellent serves as a shark conservation method. There are some naturally occurring shark repellents; modern artificial shark repellents date to at least the 1940s, with the United States Navy using them in the Pacific Ocean theater of World War II.
Natural repellents
It has traditionally been believed that sharks are repelled by the smell of a dead shark; however, modern research has had mixed results.
The Pardachirus marmoratus fish (finless sole, Red Sea Moses sole) repels sharks through its secretions. The best-understood factor is pardaxin, acting as an irritant to the sharks' gills, but other chemicals have been identified as contributing to the repellent effect.
In 2017, the US Navy announced that it was developing a synthetic analog of hagfish slime with potential application as a shark repellent.
History
Some of the earliest research on shark repellents took place during the Second World War when military services sought to minimize the risk to stranded aviators and sailors in the water. Research has continued to the present, with notable researchers including Americans Eugenie Clark, and later Samuel H. Gruber, who has conducted tests at the Bimini Sharklab in Bimini, and the Japanese scientist Kazuo Tachibana. Future celebrity chef Julia Child developed shark repellent while working for the Office of Strategic Services
Initial work, which was based on historical research and studies at the time, focused
Document 3:::
Electropositive metals (EPMs) are a new class of shark repellent materials that produce a measurable voltage when immersed in an electrolyte such as seawater. The voltages produced are as high as 1.75 VDC in seawater. It is hypothesized that this voltage overwhelms the ampullary organ in sharks, producing a repellent action. Since bony fish lack the ampullary organ, the repellent is selective to sharks and rays. The process is electrochemical, so no external power input is required. As chemical work is done, the metal is lost in the form of corrosion. Depending on the alloy or metal utilized and its thickness, the electropositive repellent effect lasts up to 48 hours. The reaction of the electropositive metal in seawater produces hydrogen gas bubbles and an insoluble nontoxic hydroxide as a precipitate which settles downward in the water column.
History
SharkDefense made the discovery of electrochemical shark repellent effects on May 1, 2006 at South Bimini, Bahamas at the Bimini Biological Field Station. An electropositive metal, which was a component of a permanent magnet, was chosen as an experimental control for a tonic immobility experiment by Eric Stroud using a juvenile lemon shark (Negaprion brevirostris). It was anticipated that this metal would produce no effect, since it was not ferromagnetic. However, a violent rousing response was observed when the metal was brought within 50 cm of the shark's nose. The experiment was repeated with three other juvenile lemon sharks and two other juvenile nurse sharks (Ginglymostoma cirratum), and care was taken to eliminate all stray metal objects in the testing site. Patrick Rice, Michael Herrmann, and Eric Stroud were present at this first trial. Mike Rowe, from Discovery Channel’s Dirty Jobs series, subsequently witnessed and participated in a test using an electropositive metal within 24 hours after the discovery.
In the next three months, a variety of transition metals, lanthanides, post-transition metals,
Document 4:::
Artemia is a genus of aquatic crustaceans also known as brine shrimp. It is the only genus in the family Artemiidae. The first historical record of the existence of Artemia dates back to the first half of the 10th century AD from Lake Urmia, Iran, with an example called by an Iranian geographer an "aquatic dog", although the first unambiguous record is the report and drawings made by Schlösser in 1757 of animals from Lymington, England. Artemia populations are found worldwide, typically in inland saltwater lakes, but occasionally in oceans. Artemia are able to avoid cohabiting with most types of predators, such as fish, by their ability to live in waters of very high salinity (up to 25%).
The ability of the Artemia to produce dormant eggs, known as cysts, has led to extensive use of Artemia in aquaculture. The cysts may be stored indefinitely and hatched on demand to provide a convenient form of live feed for larval fish and crustaceans. Nauplii of the brine shrimp Artemia constitute the most widely used food item, and over of dry Artemia cysts are marketed worldwide annually. In addition, the resilience of Artemia makes them ideal animals running biological toxicity assays and it has become a model organism used to test the toxicity of chemicals. Breeds of Artemia are sold as novelty gifts under the marketing name Sea-Monkeys.
Description
The brine shrimp Artemia comprises a group of seven to nine species very likely to have diverged from an ancestral form living in the Mediterranean area about , around the time of the Messinian salinity crisis.
The Laboratory of Aquaculture & Artemia Reference Center at Ghent University possesses the largest known Artemia cyst collection, a cyst bank containing over 1,700 Artemia population samples collected from different locations around the world.
Artemia is a typical primitive arthropod with a segmented body to which broad, leaf-like appendages are attached. The body usually consists of 19 segments, the first 11 of which ha
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do sharks use to secrete salt to assist in osmoregulation?
A. ceramic gland
B. rectal gland
C. blood gland
D. bladder gland
Answer:
|
|
sciq-4766
|
multiple_choice
|
What type of animal eats both plants and animals?
|
[
"insectivores",
"herbivores",
"omnivores",
"carnivores"
] |
C
|
Relavent Documents:
Document 0:::
Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food.
Classification of consumer types
The standard categorization
Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores; omnivores are both meat eaters and plant eaters. An extensive classification of consumer categories based on a list of feeding behaviors exists.
The Getz categorization
Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken down plant and animal) matter. It also distinguishes between consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage.
In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology. Thus a bestivore, such as a cat, preys on live animal
Document 1:::
A herbivore is an animal anatomically and physiologically adapted to eating plant material, for example foliage or marine algae, for the main component of its diet. As a result of their plant diet, herbivorous animals typically have mouthparts adapted to rasping or grinding. Horses and other herbivores have wide flat teeth that are adapted to grinding grass, tree bark, and other tough plant material.
A large percentage of herbivores have mutualistic gut flora that help them digest plant matter, which is more difficult to digest than animal prey. This flora is made up of cellulose-digesting protozoans or bacteria.
Etymology
Herbivore is the anglicized form of a modern Latin coinage, herbivora, cited in Charles Lyell's 1830 Principles of Geology. Richard Owen employed the anglicized term in an 1854 work on fossil teeth and skeletons. Herbivora is derived from Latin herba 'small plant, herb' and vora, from vorare 'to eat, devour'.
Definition and related terms
Herbivory is a form of consumption in which an organism principally eats autotrophs such as plants, algae and photosynthesizing bacteria. More generally, organisms that feed on autotrophs in general are known as primary consumers.
Herbivory is usually limited to animals that eat plants. Insect herbivory can cause a variety of physical and metabolic alterations in the way the host plant interacts with itself and other surrounding biotic factors. Fungi, bacteria, and protists that feed on living plants are usually termed plant pathogens (plant diseases), while fungi and microbes that feed on dead plants are described as saprotrophs. Flowering plants that obtain nutrition from other living plants are usually termed parasitic plants. There is, however, no single exclusive and definitive ecological classification of consumption patterns; each textbook has its own variations on the theme.
Evolution of herbivory
The understanding of herbivory in geological time comes from three sources: fossilized plants, which may
Document 2:::
A graminivore is a herbivorous animal that feeds primarily on grass, specifically "true" grasses, plants of the family Poaceae (also known as Graminae). Graminivory is a form of grazing. These herbivorous animals have digestive systems that are adapted to digest large amounts of cellulose, which is abundant in fibrous plant matter and more difficult to break down for many other animals. As such, they have specialized enzymes to aid in digestion and in some cases symbiotic bacteria that live in their digestive tract and "assist" with the digestive process through fermentation as the matter travels through the intestines.
Horses, cattle, geese, guinea pigs, hippopotamuses, capybara and giant pandas are examples of vertebrate graminivores. Some carnivorous vertebrates, such as dogs and cats, are known to eat grass occasionally. Grass consumption in dogs can be a way to rid their intestinal tract of parasites that may be threatening to the carnivore's health. Various invertebrates also have graminivorous diets. Many grasshoppers, such as individuals from the family Acrididae, have diets consisting primarily of plants from the family Poaceae. Although humans are not graminivores, we do get much of our nutrition from a type of grass called cereal, and especially from the fruit of that grass which is called grain.
Graminivores generally exhibit a preference on which species of grass they choose to consume. For example, according to a study done on North American bison feeding on shortgrass plains in north-eastern Colorado, the cattle consumed a total of thirty-six different species of plant. Of that thirty-six, five grass species were favoured and consumed the most pervasively. The average consumption of these five species comprised about 80% of their diet. A few of these species include Aristida longiseta, Muhlenbergia species, and Bouteloua gracilis.
Document 3:::
Grazing is a method of feeding in which a herbivore feeds on low-growing plants such as grasses or other multicellular organisms, such as algae. Many species of animals can be said to be grazers, from large animals such as hippopotamuses to small aquatic snails. Grazing behaviour is a type of feeding strategy within the ecology of a species. Specific grazing strategies include graminivory (eating grasses); coprophagy (producing part-digested pellets which are reingested); pseudoruminant (having a multi-chambered stomach but not chewing the cud); and grazing on plants other than grass, such as on marine algae.
Grazing's ecological effects can include redistributing nutrients, keeping grasslands open or favouring a particular species over another.
Ecology
Many small selective herbivores follow larger grazers which skim off the highest, tough growth of grasses, exposing tender shoots. For terrestrial animals, grazing is normally distinguished from browsing in that grazing is eating grass or forbs, whereas browsing is eating woody twigs and leaves from trees and shrubs. Grazing differs from predation because the organism being grazed upon may not be killed. It differs from parasitism because the two organisms live together in a constant state of physical externality (i.e. low intimacy). Water animals that feed by rasping algae and other micro-organisms from stones are called grazers–scrapers.
Graminivory
Graminivory is a form of grazing involving feeding primarily on grass (specifically "true" grasses in the Poaceae). Horses, cattle, capybara, hippopotamuses, grasshoppers, geese, and giant pandas are graminivores. Giant pandas (Ailuropoda melanoleuca) are obligate bamboo grazers, 99% of their diet consisting of sub-alpine bamboo species.
Coprophagy
Rabbits are herbivores that feed by grazing on grass, forbs, and leafy weeds. They graze heavily and rapidly for about the first half-hour of a grazing period (usually in the late afternoon), followed by about half an
Document 4:::
The Jarman–Bell principle is a concept in ecology stating that the quality of food in a herbivore's diet decreases as the size of the herbivore increases, but the amount of such food consumed increases to compensate for the lower quality. It operates by observing the allometric (non-linear scaling) properties of herbivores. The principle was coined by P. J. Jarman (1968) and R. H. V. Bell (1971).
Large herbivores can subsist on low quality food. Their gut size is larger than that of smaller herbivores. The increased size allows for better digestive efficiency, and thus allows viable consumption of low quality food. Small herbivores require more energy per unit of body mass compared to large herbivores. A smaller size, and thus a smaller gut and lower efficiency, implies that these animals need to select high quality food to function. Their small gut limits the amount of space for food, so they eat small quantities of a high quality diet. Some animals practice coprophagy, where they ingest fecal matter to recycle untapped/undigested nutrients.
However, the Jarman–Bell principle is not without exception. Small herbivorous members of mammals, birds and reptiles have been observed to be inconsistent with the trend of small body mass being linked with high-quality food. There have also been disputes over the mechanism behind the Jarman–Bell principle, with some arguing that larger body size does not in fact increase digestive efficiency.
The implication that larger herbivores can subsist on poorer quality food than smaller herbivores means that the Jarman–Bell principle may contribute evidence for Cope's rule. Furthermore, the Jarman–Bell principle is also important in providing evidence for the ecological framework of "resource partitioning, competition, habitat use and species packing in environments" and has been applied in several studies.
Links with allometry
Allometry refers to the non-linear scaling factor of one variable with respect to another. The relationship between such variables is expressed as a power law, wher
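The power-law relationship referred to above is conventionally written as follows, together with the allometric argument usually given for the Jarman–Bell principle (a standard sketch of the reasoning, not taken verbatim from this excerpt):

```latex
% General allometric power law relating a trait y to body mass M:
\[
  y = a\,M^{b}, \qquad \log y = \log a + b \log M
\]
% Commonly cited sketch: gut capacity scales roughly as M^{1}, while metabolic
% requirement scales roughly as M^{3/4} (Kleiber's law), so
\[
  \frac{\text{gut capacity}}{\text{metabolic requirement}} \propto M^{\,1 - 3/4} = M^{1/4}
\]
% i.e. larger herbivores can afford longer retention of lower-quality food.
```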
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of animal eats both plants and animals?
A. insectivores
B. herbivores
C. omnivores
D. carnivores
Answer:
|
|
sciq-149
|
multiple_choice
|
What science includes many fields of science related to our home planet?
|
[
"meteorology",
"biology",
"zoology",
"earth science"
] |
D
|
Relavent Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 1:::
Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers.
There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM.
Terminology
History
Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu
Document 2:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
Document 3:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 4:::
The mathematical sciences are a group of areas of study that includes, in addition to mathematics, those academic disciplines that are primarily mathematical in nature but may not be universally considered subfields of mathematics proper.
Statistics, for example, is mathematical in its methods but grew out of bureaucratic and scientific observations, which merged with inverse probability and then grew through applications in some areas of physics, biometrics, and the social sciences to become its own separate, though closely allied, field. Theoretical astronomy, theoretical physics, theoretical and applied mechanics, continuum mechanics, mathematical chemistry, actuarial science, computer science, computational science, data science, operations research, quantitative biology, control theory, econometrics, geophysics and mathematical geosciences are likewise other fields often considered part of the mathematical sciences.
Some institutions offer degrees in mathematical sciences (e.g. the United States Military Academy, Stanford University, and University of Khartoum) or applied mathematical sciences (for example, the University of Rhode Island).
See also
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What science includes many fields of science related to our home planet?
A. meteorology
B. biology
C. zoology
D. earth science
Answer:
|
|
sciq-4505
|
multiple_choice
|
Echinoderms are found in many different ocean environments, but most are found where?
|
[
"in tidepools",
"in beaches",
"in reefs",
"in waterfalls"
] |
C
|
Relavent Documents:
Document 0:::
Edward Brinton (January 12, 1924 – January 13, 2010) was a professor of oceanography and research biologist. His particular area of expertise was Euphausiids or krill, small shrimp-like creatures found in all the oceans of the world.
Early life
Brinton was born on January 12, 1924, in Richmond, Indiana to a Quaker couple, Howard Brinton and Anna Shipley Cox Brinton. Much of his childhood was spent on the grounds of Mills College where his mother was Dean of Faculty and his father was a professor. The family later moved to the Pendle Hill Quaker Center for Study and Contemplation, in Pennsylvania where his father and mother became directors.
Academic career
Brinton attended High School at Westtown School in Chester County, Pennsylvania. He studied at Haverford College and graduated in 1949 with a bachelor's degree in biology. He enrolled at Scripps Institution of Oceanography as a graduate student in 1950 and was awarded a Ph.D. in 1957. He continued on as a research biologist in the Marine Life Research Group, part of the CalCOFI program. He soon turned his dissertation into a major publication, The Distribution of Pacific Euphausiids. In this large monograph, he laid out the major biogeographic provinces of the Pacific (and part of the Atlantic), large-scale patterns of pelagic diversity and one of the most rational hypotheses for the mechanism of sympatric, oceanic speciation. In all of these studies the role of physical oceanography and circulation played a prominent part. His work has since been validated by others and continues, to this day, to form the basis for our attempts to understand large-scale pelagic ecology and the role of physics of the movement of water in the regulation of pelagic ecosystems. In addition to these studies he has led in the studies of how climatic variations have led to the large variations in the California Current, and its populations and communities. He has described several new species and, in collaboration with Margaret K
Document 1:::
Espegrend (also known as Espeland) is a marine biological field station located in Bergen, Norway. The station is located close to the airport Flesland, 20 kilometers south of Bergen.
Overview
The Department of Biological Sciences at the University of Bergen has specialized laboratories and research installations on the main campus in downtown Bergen. It is also responsible for the marine biological field station at Espeland. The station is located in the Raunefjord, with deep-sea fauna easily available. The station has good mesocosm facilities, a research vessel (RV Aurelia), and good facilities for benthic and planktonic sampling. Espegrend has a number of specialised facilities. It is well known for its mesocosm facility. Espegrend has very good access to diverse and well-described marine habitats and model environments. The station comprises a boarding house, boats, laboratories and basic equipment for marine research.
Document 2:::
One of the marine ecosystems found in the Virgin Islands is the coral reef. These coral reefs are located between the islands of St. Croix, St. Thomas, and St. John, and cover an area of 297.9 km2 along with the other marine habitats in between. The reefs grow when free-swimming coral larvae attach themselves to hard surfaces around the islands and begin to develop an external skeleton, which protects them from predators and also provides a new surface for other coral larvae to attach to and grow on. These corals can form three different structures: fringing reefs, which are close to the shore; barrier reefs, which run alongside the shore but are separated from it by deep water; and atoll reefs, which encircle a lagoon or other body of water.
Distribution
As stated, the coral reefs such as fringing reefs, deep reefs, patch reefs and spur and groove formation are distributed over three islands in the Virgin Islands which are St. Croix (Salt River Bay National Historical Park and Ecological Preserve, Buck Island Reef National Monument), St. Thomas, and St. John (Virgin Islands Coral Reef National Monument). The coral reefs found offshore of St. Thomas and St. John are distributed patchily around the islands. Additionally, a developed barrier reef system surrounds St. Croix along its eastern and southern shores.
Ecology
The coral reefs, together with hard-bottom habitat, account for 297.9 km2. The coral reefs are home to diverse species: there are over 40 species of scleractinian corals and three species of Millepora. Live scleractinian species are found throughout the Virgin Islands, but mainly around Buck Island, St. Croix and St. John. A survey from 2001–2006 listed a total of 215 fish species from St. John and 202 from St. Croix. Four species of sea turtles are found within the Virgin Islands. The coral reefs are impacted by freshwa
Document 3:::
Marine technology is defined by WEGEMT (a European association of 40 universities in 17 countries) as "technologies for the safe use, exploitation, protection of, and intervention in, the marine environment." In this regard, according to WEGEMT, the technologies involved in marine technology are the following: naval architecture, marine engineering, ship design, ship building and ship operations; oil and gas exploration, exploitation, and production; hydrodynamics, navigation, sea surface and sub-surface support, underwater technology and engineering; marine resources (including both renewable and non-renewable marine resources); transport logistics and economics; inland, coastal, short sea and deep sea shipping; protection of the marine environment; leisure and safety.
Education and training
According to the Cape Fear Community College of Wilmington, North Carolina, the curriculum for a marine technology program provides practical skills and academic background that are essential in succeeding in the area of marine scientific support. Through a marine technology program, students aspiring to become marine technologists will become proficient in the knowledge and skills required of scientific support technicians.
The educational preparation includes classroom instructions and practical training aboard ships, such as how to use and maintain electronic navigation devices, physical and chemical measuring instruments, sampling devices, and data acquisition and reduction systems aboard ocean-going and smaller vessels, among other advanced equipment.
As far as marine technician programs are concerned, students learn hands-on to troubleshoot, service, and repair four- and two-stroke outboards, stern drives, rigging, fuel and lube systems, and electrical systems, including diesel engines.
Relationship to commerce
Marine technology is related to the marine science and technology industry, also known as maritime commerce. The Executive Office of Housing and Economic Development (EOHED
Document 4:::
The Centre for Environment, Fisheries and Aquaculture Science (Cefas) is an executive agency of the United Kingdom government Department for Environment, Food and Rural Affairs (Defra). It carries out a wide range of research, advisory, consultancy, monitoring and training activities for a large number of customers around the world.
Cefas employs over 550 staff based primarily at two specialist laboratories within the UK, with additional staff based at small, port-based offices in Scarborough, Hayle, and Plymouth. In 2014 Cefas established a permanent base in the Middle East by opening an office in Kuwait, and since opened an office in Oman. They also operate an ocean-going research vessel Cefas Endeavour.
Customers
The primary customer for Cefas is their parent organisation Defra. They also undertake work for international and UK government departments (central and local), the World Bank, the European Commission, the United Nations Food and Agriculture Organization (FAO), commercial organisations, non-governmental and environmental organisations, regulators and enforcement agencies, local authorities and other public bodies.
There is an increasing focus on commercial research and consultancy as the level of funding available from Defra gradually reduces.
History
Previously known as the Directorate of Fisheries Research, the organisation's name and status were changed in 1997 to 'Centre for Environment, Fisheries and Aquaculture Science' (Cefas). At this time it became an executive agency of what was then the Ministry of Agriculture, Fisheries and Food, which is now the Department for Environment, Food and Rural Affairs (Defra).
Lowestoft laboratory
In 1902, the Marine Biological Association opened a sub-station in Pakefield, a suburb of Lowestoft, Suffolk, to research the fishing industry. This was part of the UK contribution to the newly created International Council for the Exploration of the Sea (ICES). By 1921 the station had been expanded to include a labo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Echinoderms are found in many different ocean environments, but most are found where?
A. in tidepools
B. in beaches
C. in reefs
D. in waterfalls
Answer:
|
|
sciq-6266
|
multiple_choice
|
What two activities are especially important when a number of local populations are linked, forming a metapopulation?
|
[
"immigration and emigration",
"flow and emigration",
"immigration and family reunions",
"family reunions and emigration"
] |
A
|
Relavent Documents:
Document 0:::
International migration occurs when people cross state boundaries and stay in the host state for some minimum length of time. Migration occurs for many reasons. Many people leave their home countries in order to look for economic opportunities in another country. Others migrate to be with family members who have migrated or because of political conditions in their countries. Education is another reason for international migration, as students pursue their studies abroad, although this migration is often temporary, with a return to the home country after the studies are completed.
Categories of migrants
While there are several different potential systems for categorising international migrants, one system organizes them into nine groups:
temporary labor migrants
irregular, illegal, or undocumented migrants
highly skilled and business migrants
refugees
asylum seekers
forced migration
family members
return migrants
long-term, low-skilled migrants
These migrants can also be divided into two large groups, permanent and temporary. Permanent migrants intend to establish their permanent residence in a new country and possibly obtain that country's citizenship. Temporary migrants intend only to stay for a limited period of time, perhaps until the end of a particular program of study or for the duration of their work contract or a certain work season. Both types of migrants have a significant effect on the economies and societies of the chosen destination country and the country of origin.
Countries receiving migrants
Countries which receive migrants have been grouped by academics into four categories: traditional settlement countries, European countries which encouraged labour migration after World War II, European countries which receive a significant portion of their immigrant populations from their former colonies, and countries which formerly were points of emigration but have recently emerged as immigrant destinations. These countries are grouped according t
Document 1:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 2:::
Mobilities is a contemporary paradigm in the social sciences that explores the movement of people (human migration, individual mobility, travel, transport), ideas (see e.g. meme) and things (transport), as well as the broader social implications of those movements. Mobility can also be thought of as the movement of people through social classes (social mobility) or income levels (income mobility).
A mobility "turn" (or transformation) in the social sciences began in the 1990s in response to the increasing realization of the historic and contemporary importance of movement on individuals and society. This turn has been driven by generally increased levels of mobility and new forms of mobility where bodies combine with information and different patterns of mobility. The mobilities paradigm incorporates new ways of theorizing about how these mobilities lie "at the center of constellations of power, the creation of identities and the microgeographies of everyday life." (Cresswell, 2011, 551)
The mobility turn arose as a response to the way in which the social sciences had traditionally been static, seeing movement as a black box and ignoring or trivializing "the importance of the systematic movements of people for work and family life, for leisure and pleasure, and for politics and protest" (Sheller and Urry, 2006, 208). Mobilities emerged as a critique of contradictory orientations toward both sedentarism and deterritorialisation in social science. People had often been seen as static entities tied to specific places, or as nomadic and placeless in a frenetic and globalized existence. Mobilities looks at movements and the forces that drive, constrain and are produced by those movements.
Several typologies have been formulated to clarify the wide variety of mobilities. Most notably, John Urry divides mobilities into five types: mobility of objects, corporeal mobility, imaginative mobility, virtual mobility and communicative mobility. Later, Leopoldina Fortunati and Sakari Taipa
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 4:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What two activities are especially important when a number of local populations are linked, forming a metapopulation?
A. immigration and emigration
B. flow and emigration
C. immigration and family reunions
D. family reunions and emigration
Answer:
|
|
ai2_arc-177
|
multiple_choice
|
Eyeglasses have two arms called temples attached to the eye lenses by very small hinges. Which of these functions like the hinges on eyeglasses?
|
[
"knee",
"fingers",
"neck vertebrae",
"base of the thumb"
] |
A
|
Relavent Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 1:::
Instruments used in Anatomy dissections are as follows:
Instrument list
Image gallery
Document 2:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
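As a numerical aside (not part of the original excerpt), the adiabatic-expansion example above can be checked directly: a reversible adiabatic process of an ideal gas satisfies T V^(gamma-1) = constant, so expansion lowers the temperature. A minimal Python sketch; the diatomic value gamma = 1.4 and the volumes are illustrative assumptions.

GAMMA = 1.4  # heat-capacity ratio, assumed diatomic ideal gas

def adiabatic_final_temperature(t1_k: float, v1: float, v2: float, gamma: float = GAMMA) -> float:
    # T1 * V1**(gamma - 1) = T2 * V2**(gamma - 1) for a reversible adiabatic process
    return t1_k * (v1 / v2) ** (gamma - 1.0)

# Doubling the volume of gas starting at 300 K cools it to about 227 K, confirming "decreases".
print(round(adiabatic_final_temperature(300.0, 1.0, 2.0), 1))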
Document 4:::
The squamous part of temporal bone, or temporal squama, forms the front and upper part of the temporal bone, and is scale-like, thin, and translucent.
Surfaces
Its outer surface is smooth and convex; it affords attachment to the temporal muscle, and forms part of the temporal fossa; on its hinder part is a vertical groove for the middle temporal artery. A curved line, the temporal line, or supramastoid crest, runs backward and upward across its posterior part; it serves for the attachment of the temporal fascia, and limits the origin of the temporalis muscle. The boundary between the squamous part and the mastoid portion of the bone, as indicated by traces of the original suture, lies about 1 cm. below this line.
Projecting from the lower part of the squamous part is a long, arched process, the zygomatic process. This process is at first directed lateralward, its two surfaces looking upward and downward; it then appears as if twisted inward upon itself, and runs forward, its surfaces now looking medialward and lateralward. The superior border is long, thin, and sharp, and serves for the attachment of the temporal fascia; the inferior, short, thick, and arched, has attached to it some fibers of the masseter. The lateral surface is convex and subcutaneous; the medial is concave, and affords attachment to the masseter. The anterior end is deeply serrated and articulates with the zygomatic bone. The posterior end is connected to the squamous part by two roots, the anterior and posterior roots. The posterior root, a prolongation of the upper border, is strongly marked; it runs backward above the external auditory meatus, and is continuous with the temporal line. The anterior root, continuous with the lower border, is short but broad and strong; it is directed medialward and ends in a rounded eminence, the articular tubercle (eminentia articularis).
This tubercle forms the front boundary of the mandibular fossa, and in the fresh state is covered with cartilage. In fro
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Eyeglasses have two arms called temples attached to the eye lenses by very small hinges. Which of these functions like the hinges on eyeglasses?
A. knee
B. fingers
C. neck vertebrae
D. base of the thumb
Answer:
|
|
sciq-11546
|
multiple_choice
|
What are the two most common causes of diseases?
|
[
"bacteria and viruses",
"nutritional deficiencies",
"bacteria and protazoa",
"viruses and protazoa"
] |
A
|
Relavent Documents:
Document 0:::
In biology, a pathogen (from Greek πάθος, pathos, "suffering", "passion", and -γενής, -genēs, "producer of"), in the oldest and broadest sense, is any organism or agent that can produce disease. A pathogen may also be referred to as an infectious agent, or simply a germ.
The term pathogen came into use in the 1880s. Typically, the term pathogen is used to describe an infectious microorganism or agent, such as a virus, bacterium, protozoan, prion, viroid, or fungus. Small animals, such as helminths and insects, can also cause or transmit disease. However, these animals are usually referred to as parasites rather than pathogens. The scientific study of microscopic organisms, including microscopic pathogenic organisms, is called microbiology, while parasitology refers to the scientific study of parasites and the organisms that host them.
There are several pathways through which pathogens can invade a host. The principal pathways have different episodic time frames, but soil has the longest or most persistent potential for harboring a pathogen.
Diseases in humans that are caused by infectious agents are known as pathogenic diseases. Not all diseases are caused by pathogens, such as black lung from exposure to the pollutant coal dust, genetic disorders like sickle cell disease, and autoimmune diseases like lupus.
Pathogenicity
Pathogenicity is the potential disease-causing capacity of pathogens, involving a combination of infectivity (pathogen's ability to infect hosts) and virulence (severity of host disease). Koch's postulates are used to establish causal relationships between microbial pathogens and diseases. Whereas meningitis can be caused by a variety of bacterial, viral, fungal, and parasitic pathogens, cholera is only caused by some strains of Vibrio cholerae. Additionally, some pathogens may only cause disease in hosts with an immunodeficiency. These opportunistic infections often involve hospital-acquired infections among patients already combating another condition.
Infectivity involves path
Document 1:::
Infections associated with diseases are those infections that are associated with possible infectious etiologies that meet the requirements of Koch's postulates. Other methods of causation are described by the Bradford Hill criteria and evidence-based medicine.
Koch's postulates have been modified by some epidemiologists, based on the sequence-based detection of distinctive pathogenic nucleic acid sequences in tissue samples. When using this method, absolute statements regarding causation are not always possible. Higher amounts of distinctive pathogenic nucleic acid sequences should be in those exhibiting disease, compared to controls. In addition, the DNA load should become lower with the resolution of the disease. The distinctive pathogenic nucleic acid sequences load should also increase upon recurrence.
Other conditions must also be met to establish cause or association, including studies of disease transmission. This means that there should be a high disease occurrence in those carrying the pathogen, evidence of a serological response to the pathogen, and success of vaccination in preventing the disease. Direct visualization of the pathogen, the identification of different strains, immunological responses in the host, how the infection is spread, and the combination of these should all be taken into account to determine the probability that an infectious agent is the cause of the disease. A conclusive determination of a causal role of an infectious agent in a particular disease using Koch's postulates is desired, yet this might not be possible.
The leading cause of death worldwide is cardiovascular disease, but infectious diseases are the second leading cause of death worldwide and the leading cause of death in infants and children.
Other causes
Other causes or associations of disease are: a compromised immune system, environmental toxins, radiation exposure, diet and other lifestyle choices, stress, and genetics. Diseases may also be multifactorial, requiring multiple factor
Document 2:::
Cause, also known as etiology and aetiology, is the reason or origination of something.
The word etiology is derived from the Greek αἰτιολογία, aitiologia, "giving a reason for" (αἰτία, aitia, "cause"; and -λογία, -logia).
Description
In medicine, etiology refers to the cause or causes of diseases or pathologies. Where no etiology can be ascertained, the disorder is said to be idiopathic.
Traditional accounts of the causes of disease may point to the "evil eye".
The Ancient Roman scholar Marcus Terentius Varro put forward early ideas about microorganisms in a 1st-century BC book titled On Agriculture.
Medieval thinking on the etiology of disease showed the influence of Galen and of Hippocrates. Medieval European doctors generally held the view that disease was related to the air and adopted a miasmatic approach to disease etiology.
Etiological discovery in medicine has a history in Robert Koch's demonstration that species of the pathogenic bacteria Mycobacterium tuberculosis causes the disease tuberculosis; Bacillus anthracis causes anthrax, and Vibrio cholerae causes cholera. This line of thinking and evidence is summarized in Koch's postulates. But proof of causation in infectious diseases is limited to individual cases that provide experimental evidence of etiology.
In epidemiology, several lines of evidence together are required for causal inference. Austin Bradford Hill demonstrated a causal relationship between tobacco smoking and lung cancer, and summarized the line of reasoning in the Bradford Hill criteria, a group of nine principles to establish epidemiological causation. This idea of causality was later used in a proposal for a Unified concept of causation.
Disease causative agent
The infectious diseases are caused by infectious agents or pathogens. The infectious agents that cause disease fall into five groups: viruses, bacteria, fungi, protozoa, and helminths (worms).
The term can also refer to a toxin or toxic chemical that causes illness.
Chain of causatio
Document 3:::
In pathology, pathogenesis is the process by which a disease or disorder develops. It can include factors which contribute not only to the onset of the disease or disorder, but also to its progression and maintenance. The word comes from the Greek πάθος, pathos ("suffering", "disease") and γένεσις, genesis ("creation", "origin").
Description
Types of pathogenesis include microbial infection, inflammation, malignancy and tissue breakdown. For example, bacterial pathogenesis is the process by which bacteria cause infectious illness.
Most diseases are caused by multiple processes. For example, certain cancers arise from dysfunction of the immune system (skin tumors and lymphoma after a renal transplant, which requires immunosuppression). As another example, Streptococcus pneumoniae is spread through contact with respiratory secretions, such as saliva, mucus, or cough droplets from an infected person, and colonizes the upper respiratory tract, where it begins to multiply.
The pathogenic mechanisms of a disease (or condition) are set in motion by the underlying causes, which if controlled would allow the disease to be prevented. Often, a potential cause is identified by epidemiological observations before a pathological link can be drawn between the cause and the disease. The pathological perspective can be directly integrated into an epidemiological approach in the interdisciplinary field of molecular pathological epidemiology. Molecular pathological epidemiology can help to assess pathogenesis and causality by means of linking a potential risk factor to molecular pathologic signatures of a disease. Thus, the molecular pathological epidemiology paradigm can advance the area of causal inference.
See also
Causal inference
Epidemiology
Molecular pathological epidemiology
Molecular pathology
Pathology
Pathophysiology
Salutogenesis
Document 4:::
Globalization, the flow of information, goods, capital, and people across political and geographic boundaries, allows infectious diseases to rapidly spread around the world, while also allowing the alleviation of factors such as hunger and poverty, which are key determinants of global health. The spread of diseases across wide geographic scales has increased through history. Early diseases that spread from Asia to Europe were bubonic plague, influenza of various types, and similar infectious diseases.
In the current era of globalization, the world is more interdependent than at any other time. Efficient and inexpensive transportation has left few places inaccessible, and increased global trade in agricultural products has brought more and more people into contact with animal diseases that have subsequently jumped species barriers (see zoonosis).
Globalization intensified during the Age of Exploration, but trading routes had long been established between Asia and Europe, along which diseases were also transmitted. An increase in travel has helped spread diseases to natives of lands who had not previously been exposed. When a native population is infected with a new disease, where they have not developed antibodies through generations of previous exposure, the new disease tends to run rampant within the population.
Etiology, the modern branch of science that deals with the causes of infectious disease, recognizes five major modes of disease transmission: airborne, waterborne, bloodborne, by direct contact, and through vector (insects or other creatures that carry germs from one species to another). As humans began traveling over seas and across lands which were previously isolated, research suggests that diseases have been spread by all five transmission modes.
Travel patterns and globalization
The Age of Exploration generally refers to the period between the 15th and 17th centuries. During this time, technological advances in shipbuilding and navigation made it e
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are the two most common causes of diseases?
A. bacteria and viruses
B. nutritional deficiencies
C. bacteria and protozoa
D. viruses and protozoa
Answer:
|
|
sciq-10229
|
multiple_choice
|
What is the term for the visible part of the electromagnetic spectrum?
|
[
"light",
"gravity",
"chroma",
"electricity"
] |
A
|
Relavent Documents:
Document 0:::
In the physical sciences, the term spectrum was introduced first into optics by Isaac Newton in the 17th century, referring to the range of colors observed when white light was dispersed through a prism.
Soon the term referred to a plot of light intensity or power as a function of frequency or wavelength, also known as a spectral density plot.
Later it expanded to apply to other waves, such as sound waves and sea waves that could also be measured as a function of frequency (e.g., noise spectrum, sea wave spectrum). It has also been expanded to more abstract "signals", whose power spectrum can be analyzed and processed. The term now applies to any signal that can be measured or decomposed along a continuous variable, such as energy in electron spectroscopy or mass-to-charge ratio in mass spectrometry. Spectrum is also used to refer to a graphical representation of the signal as a function of the dependent variable.
Electromagnetic spectrum
Electromagnetic spectrum refers to the full range of all frequencies of electromagnetic radiation and also to the characteristic distribution of electromagnetic radiation emitted or absorbed by that particular object. Devices used to measure an electromagnetic spectrum are called spectrograph or spectrometer. The visible spectrum is the part of the electromagnetic spectrum that can be seen by the human eye. The wavelength of visible light ranges from 390 to 700 nm. The absorption spectrum of a chemical element or chemical compound is the spectrum of frequencies or wavelengths of incident radiation that are absorbed by the compound due to electron transitions from a lower to a higher energy state. The emission spectrum refers to the spectrum of radiation emitted by the compound due to electron transitions from a higher to a lower energy state.
Light from many different sources contains various colors, each with its own brightness or intensity. A rainbow, or prism, sends these component colors in different direction
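As an illustrative aside to the excerpt above (not part of the original article), the visible band quoted there, roughly 390-700 nm, can be converted between wavelength and frequency with c = lambda * nu. A minimal Python sketch; the function name and rounded outputs are our own.

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def wavelength_nm_to_thz(wavelength_nm: float) -> float:
    # c = lambda * nu, with the wavelength converted from nm to m and the result to THz
    return SPEED_OF_LIGHT / (wavelength_nm * 1e-9) / 1e12

for nm in (390.0, 700.0):  # the ends of the visible range quoted in the excerpt
    print(f"{nm:.0f} nm -> about {wavelength_nm_to_thz(nm):.0f} THz")  # ~769 THz and ~428 THz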
Document 1:::
The transmission curve or transmission characteristic is the mathematical function or graph that describes the transmission fraction of an optical or electronic filter as a function of frequency or wavelength. It is an instance of a transfer function but, unlike the case of, for example, an amplifier, output never exceeds input (maximum transmission is 100%). The term is often used in commerce, science, and technology to characterise filters.
The term has also long been used in fields such as geophysics and astronomy to characterise the properties of regions through which radiation passes, such as the ionosphere.
See also
Electronic filter — examples of transmission characteristics of electronic filters
Document 2:::
Cosmic ray visual phenomena, or light flashes (LF), also known as Astronaut's Eye, are spontaneous flashes of light visually perceived by some astronauts outside the magnetosphere of the Earth, such as during the Apollo program. While LF may be the result of actual photons of visible light being sensed by the retina, the LF discussed here could also pertain to phosphenes, which are sensations of light produced by the activation of neurons along the visual pathway.
Possible causes
Researchers believe that the LF perceived specifically by astronauts in space are due to cosmic rays (high-energy charged particles from beyond the Earth's atmosphere), though the exact mechanism is unknown. Hypotheses include Cherenkov radiation created as the cosmic ray particles pass through the vitreous humour of the astronauts' eyes, direct interaction with the optic nerve, direct interaction with visual centres in the brain, retinal receptor stimulation, and a more general interaction of the retina with radiation.
Conditions under which the light flashes were reported
Astronauts who had recently returned from space missions to the Hubble Space Telescope, the International Space Station and Mir Space Station reported seeing the LF under different conditions. In order of decreasing frequency of reporting in a survey, they saw the LF in the dark, in dim light, in bright light and one reported that he saw them regardless of light level and light adaptation. They were seen mainly before sleeping.
Types
Some LF were reported to be clearly visible, while others were not. They manifested in different colors and shapes. How often each type was seen varied across astronauts' experiences, as evident in a survey of 59 astronauts.
Colors
On Lunar missions, astronauts almost always reported that the flashes were white, with one exception where the astronaut observed "blue with a white cast, like a blue diamond." On other space missions, astronauts reported seeing other colors such as yellow and
Document 3:::
In physics, monochromatic radiation is electromagnetic radiation with a single constant frequency. When that frequency is part of the visible spectrum (or near it) the term monochromatic light is often used. Monochromatic light is perceived by the human eye as a spectral color.
When monochromatic radiation propagates through vacuum or a homogeneous transparent medium, it has a single constant wavelength.
Practical monochromaticity
No radiation can be totally monochromatic, since that would require a wave of infinite duration as a consequence of the Fourier transform's localization property (cf. spectral coherence). In practice, "monochromatic" radiation — even from lasers or spectral lines — always consists of components with a range of frequencies of non-zero width.
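The Fourier-localization point above can be illustrated numerically: a sinusoid switched on for only a finite time has a spectral line of non-zero width, and the shorter the pulse, the wider the line. A rough NumPy sketch; the sample rate, tone frequency, and crude half-maximum width estimate are arbitrary illustrative choices.

import numpy as np

fs = 1000.0                        # sample rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)  # one second of samples
f0 = 50.0                          # nominal tone frequency, Hz

for duration in (1.0, 0.1):        # a long tone and a short pulse
    pulse = np.where(t < duration, np.sin(2.0 * np.pi * f0 * t), 0.0)
    spectrum = np.abs(np.fft.rfft(pulse))
    freqs = np.fft.rfftfreq(len(pulse), 1.0 / fs)
    above_half = freqs[spectrum >= spectrum.max() / 2.0]
    print(f"{duration} s pulse -> line width ~{np.ptp(above_half):.1f} Hz")
# The 0.1 s pulse shows a markedly broader spectral line than the full 1 s tone.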
Generation
Monochromatic radiation can be produced by a number of methods. Isaac Newton observed that a beam of light from the sun could be spread out by refraction into a fan of light with varying colors; and that if a beam of any particular color was isolated from that fan, it behaved as "pure" light that could not be decomposed further.
When atoms of a chemical element in gaseous state are subjected to an electric current, to suitable radiation, or to high enough temperature, they emit a light spectrum with a set of discrete spectral lines (monochromatic components), that are characteristic of the element. This phenomenon is the basis of the science of spectroscopy, and is exploited in fluorescent lamps and the so-called neon signs.
A laser is a device that generates monochromatic and coherent radiation through a process of stimulated emission.
Properties and uses
When monochromatic radiation is made to interfere with itself, the result can be visible and stable interference fringes that can be used to measure very small distances, or large distances with very high accuracy. The current definition of the metre is based on this technique.
In the technique of spectroscopic analysis, a mat
Document 4:::
Optical radiation is part of the electromagnetic spectrum. It is a type of non-ionising radiation (NIR), with electromagnetic fields (EMFs).
Types
Optical radiation may be distinguished in:
artificial optical radiation: produced by artificial sources, including coherent sources (lasers) and non-coherent sources (i.e. all the other artificial sources, such as UV lights, common light bulbs, radiant heaters, welding equipment, etc.).
natural optical radiation: produced by the sun (that is a non-coherent source).
It is subdivided into ultraviolet radiation (UV), the spectrum of light visible for man (VIS) and infrared radiation (IR). It ranges between wavelengths of 100 nm to 1 mm. Electromagnetic waves in this range obey the laws of optics – they can be focused and refracted with lenses, for example.
Effects
Exposure to optical radiation can result in negative health effects. All wavelengths across this range of the spectrum, from UV to IR, can produce thermal injury to the surface layers of the skin, including the eye. When it comes from natural sources, this sort of thermal injury might be called a sunburn. However, thermal injury from infrared radiation could also occur in a workplace, such as a foundry, where such radiation is generated by industrial processes. At the other end of this range, UV light has enough photon energy that it can cause direct effects to protein structure in tissues, and is well established as carcinogenic in humans. Occupational exposures to UV light occur in welding and brazing operations, for example.
Excessive exposure to natural or artificial UV-radiation means immediate (acute) and long-term (chronic) damage to the eye and skin. Occupational exposure limits may be one of two types: rate limited or dose limited. Rate limits characterize the exposure based on effective energy (radiance or irradiance, depending on the type of radiation and the health effect of concern) per area per time, and dose limits characterize the exp
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for the visible part of the electromagnetic spectrum?
A. light
B. gravity
C. chroma
D. electricity
Answer:
|
|
sciq-2942
|
multiple_choice
|
What is used to measure air pressure?
|
[
"metrometer",
"barometer",
"thermometer",
"indicator"
] |
B
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
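As a small illustration of the structure described above (not from the original article), the family of feasible knowledge states is standardly required to contain the empty state and the full domain and to be closed under union. The check and the three-skill example below are our own sketch under that definition.

from itertools import combinations

def is_knowledge_space(domain, states):
    # A knowledge space contains the empty state and the whole domain, and is closed under union.
    if frozenset() not in states or frozenset(domain) not in states:
        return False
    return all((a | b) in states for a, b in combinations(states, 2))

# Hypothetical domain of three skills, where skill "b" presupposes skill "a".
Q = frozenset({"a", "b", "c"})
K = {frozenset(), frozenset({"a"}), frozenset({"c"}), frozenset({"a", "c"}),
     frozenset({"a", "b"}), frozenset({"a", "b", "c"})}
print(is_knowledge_space(Q, K))  # True; removing {"a", "c"} would break closure under union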
Document 2:::
Temperature measurement (also known as thermometry) describes the process of measuring a current local temperature for immediate or later evaluation. Datasets consisting of repeated standardized measurements can be used to assess temperature trends.
History
Attempts at standardized temperature measurement prior to the 17th century were crude at best. For instance, in 170 AD the physician Claudius Galenus mixed equal portions of ice and boiling water to create a "neutral" temperature standard. The modern scientific field has its origins in the work of Florentine scientists in the 1600s, including Galileo, who constructed devices able to measure relative changes in temperature, though these were also confounded by changes in atmospheric pressure. These early devices were called thermoscopes. The first sealed thermometer was constructed in 1654 by the Grand Duke of Tuscany, Ferdinand II. The development of today's thermometers and temperature scales began in the early 18th century, when Gabriel Fahrenheit produced a mercury thermometer and a temperature scale, building on a scale originally developed by Ole Christensen Rømer. Fahrenheit's scale is still in use, alongside the Celsius and Kelvin scales.
Technologies
Many methods have been developed for measuring temperature. Most of these rely on measuring some physical property of a working material that varies with temperature. One of the most common devices for measuring temperature is the glass thermometer. This consists of a glass tube filled with mercury or some other liquid, which acts as the working fluid. Temperature increase causes the fluid to expand, so the temperature can be determined by measuring the volume of the fluid. Such thermometers are usually calibrated so that one can read the temperature simply by observing the level of the fluid in the thermometer. Another type of thermometer that is not really used much in practice, but is important from a theoretical standpoint, is the gas thermometer.
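Although the excerpt gives no formulas, the gas thermometer mentioned above can be sketched with the ideal-gas law, T = PV/(nR). The function and the example reading below are an illustrative idealisation, not a description of a real instrument.

R = 8.314462618  # molar gas constant, J/(mol*K)

def gas_thermometer_kelvin(pressure_pa: float, volume_m3: float, moles: float) -> float:
    # Idealised constant-volume gas thermometer: infer temperature from the measured pressure.
    return pressure_pa * volume_m3 / (moles * R)

# One mole occupying 0.0224 m^3 at 101325 Pa reads roughly 273 K.
print(round(gas_thermometer_kelvin(101325.0, 0.0224, 1.0), 1))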
Other important devices for measuring temperature inc
Document 3:::
An item bank or question bank is a term for a repository of test items that belong to a testing program, as well as all information pertaining to those items. In most applications of testing and assessment, the items are of multiple choice format, but any format can be used. Items are pulled from the bank and assigned to test forms for publication either as a paper-and-pencil test or some form of e-assessment.
Types of information
An item bank will not only include the text of each item, but also extensive information regarding test development and psychometric characteristics of the items. Examples of such information include:
Item author
Date written
Item status (e.g., new, pilot, active, retired)
Angoff ratings
Correct answer
Item format
Classical test theory statistics
Item response theory statistics
Linkage to test blueprint
Item history (e.g., usage date(s) and reviews)
User-defined fields
In India, a popular question bank series is the Oswaal Question Bank, which covers Indian board and competitive examinations such as CBSE, CISCE, state pre-university courses, JEE, NEET, CLAT, and CUET.
Item banking software
Because an item bank is essentially a simple database, it can be stored in database software or even a spreadsheet such as Microsoft Excel. However, there are several dozen commercially-available software programs specifically designed for item banking. The advantages that these provide are related to assessment. For example, items are presented on the computer screen as they would appear to a test examinee, and item response theory parameters can be translated into item response functions or information functions. Additionally, there are functionalities for publication, such as formatting a set of items to be printed as a paper-and-pencil test.
Some item banks also have test administration functionalities, such as being able to deliver e-assessment or process "bubble" answer sheets.
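For illustration only, one row of such an item bank could be modelled as a record carrying a few of the fields listed above; the field names and example values below are hypothetical rather than a prescribed schema.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class ItemRecord:
    item_id: str
    stem: str                  # the question text shown to the examinee
    options: list
    correct_answer: str
    author: str
    date_written: date
    status: str = "new"        # e.g. new, pilot, active, retired
    usage_history: list = field(default_factory=list)

item = ItemRecord(
    item_id="PHY-001",
    stem="During adiabatic expansion of an ideal gas, its temperature...",
    options=["increases", "decreases", "stays the same"],
    correct_answer="decreases",
    author="jdoe",
    date_written=date(2024, 1, 15),
)
print(item.item_id, item.status)  # stays "new" until the item is piloted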
Document 4:::
Flight instruments are the instruments in the cockpit of an aircraft that provide the pilot with data about the flight situation of that aircraft, such as altitude, airspeed, vertical speed, heading, and other crucial information in flight. They improve safety by allowing the pilot to fly the aircraft in level flight, and make turns, without a reference outside the aircraft such as the horizon. Visual flight rules (VFR) require an airspeed indicator, an altimeter, and a compass or other suitable magnetic direction indicator. Instrument flight rules (IFR) additionally require a gyroscopic pitch-bank (artificial horizon), direction (directional gyro) and rate of turn indicator, plus a slip-skid indicator, adjustable altimeter, and a clock. Flight into instrument meteorological conditions (IMC) requires radio navigation instruments for precise takeoffs and landings.
The term is sometimes used loosely as a synonym for cockpit instruments as a whole, in which context it can include engine instruments, navigational and communication equipment. Many modern aircraft have electronic flight instrument systems.
Most regulated aircraft have these flight instruments as dictated by the US Code of Federal Regulations, Title 14, Part 91. They are grouped according to pitot-static system, compass systems, and gyroscopic instruments.
Pitot-static systems
Instruments which are pitot-static systems use air pressure differences to determine speed and altitude.
Altimeter
The altimeter shows the aircraft's altitude above sea-level by measuring the difference between the pressure in a stack of aneroid capsules inside the altimeter and the atmospheric pressure obtained through the static system. The most common unit for altimeter calibration worldwide is hectopascals (hPa), except for North America and Japan where inches of mercury (inHg) are used. The altimeter is adjustable for local barometric pressure which must be set correctly to obtain accurate altitude readings, usu
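The pressure-to-altitude conversion that an altimeter performs can be approximated with the International Standard Atmosphere barometric formula. The sketch below is a textbook approximation rather than any instrument's actual calibration, and the example static pressure is hypothetical.

T0 = 288.15           # ISA sea-level temperature, K
LAPSE = 0.0065        # ISA temperature lapse rate, K/m
P0 = 101325.0         # standard sea-level pressure, Pa (the adjustable altimeter setting)
EXPONENT = 0.1902632  # R * LAPSE / (g0 * M) for dry air

def pressure_altitude_m(static_pressure_pa: float, setting_pa: float = P0) -> float:
    # h = (T0 / LAPSE) * (1 - (P / P0)**EXPONENT), valid within the troposphere
    return (T0 / LAPSE) * (1.0 - (static_pressure_pa / setting_pa) ** EXPONENT)

# A static pressure of about 89875 Pa corresponds to roughly 1000 m.
print(round(pressure_altitude_m(89875.0)))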
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is used to measure air pressure?
A. metrometer
B. barometer
C. thermometer
D. indicator
Answer:
|
|
sciq-8598
|
multiple_choice
|
The right side of the heart collects what type of blood from the body?
|
[
"dioxide-poor",
"oxygen-poor",
"oxygen-rich",
"potassium-rich"
] |
B
|
Relavent Documents:
Document 0:::
The pulmonary circulation is a division of the circulatory system in all vertebrates. The circuit begins with deoxygenated blood returned from the body to the right atrium of the heart where it is pumped out from the right ventricle to the lungs. In the lungs the blood is oxygenated and returned to the left atrium to complete the circuit.
The other division of the circulatory system is the systemic circulation that begins with receiving the oxygenated blood from the pulmonary circulation into the left atrium. From the atrium the oxygenated blood enters the left ventricle where it is pumped out to the rest of the body, returning as deoxygenated blood back to the pulmonary circulation.
The blood vessels of the pulmonary circulation are the pulmonary arteries and the pulmonary veins.
A separate circulatory circuit known as the bronchial circulation supplies oxygenated blood to the tissue of the larger airways of the lung.
Structure
Deoxygenated blood leaves the heart, goes to the lungs, and then enters back into the heart. It leaves the right ventricle through the pulmonary artery: from the right atrium, the blood is pumped through the tricuspid valve (or right atrioventricular valve) into the right ventricle, and is then pumped from the right ventricle through the pulmonary valve into the pulmonary artery.
Lungs
The pulmonary arteries carry deoxygenated blood to the lungs, where carbon dioxide is released and oxygen is picked up during respiration. Arteries are further divided into very fine capillaries which are extremely thin-walled. The pulmonary veins return oxygenated blood to the left atrium of the heart.
Veins
Oxygenated blood leaves the lungs through pulmonary veins, which return it to the left part of the heart, completing the pulmonary cycle. This blood then enters the left atrium, which pumps it through the mitral valve into the left ventricle. From the left ventricle, the blood passes through the aortic valve to the
Document 1:::
The blood circulatory system is a system of organs that includes the heart, blood vessels, and blood which is circulated throughout the entire body of a human or other vertebrate. It includes the cardiovascular system, or vascular system, that consists of the heart and blood vessels (from Greek kardia meaning heart, and from Latin vascula meaning vessels). The circulatory system has two divisions, a systemic circulation or circuit, and a pulmonary circulation or circuit. Some sources use the terms cardiovascular system and vascular system interchangeably with the circulatory system.
The network of blood vessels are the great vessels of the heart including large elastic arteries, and large veins; other arteries, smaller arterioles, capillaries that join with venules (small veins), and other veins. The circulatory system is closed in vertebrates, which means that the blood never leaves the network of blood vessels. Some invertebrates such as arthropods have an open circulatory system. Diploblasts such as sponges, and comb jellies lack a circulatory system.
Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets; it is circulated around the body carrying oxygen and nutrients to the tissues and collecting and disposing of waste materials. Circulated nutrients include proteins and minerals and other components include hemoglobin, hormones, and gases such as oxygen and carbon dioxide. These substances provide nourishment, help the immune system to fight diseases, and help maintain homeostasis by stabilizing temperature and natural pH.
In vertebrates, the lymphatic system is complementary to the circulatory system. The lymphatic system carries excess plasma (filtered from the circulatory system capillaries as interstitial fluid between cells) away from the body tissues via accessory routes that return excess fluid back to blood circulation as lymph. The lymphatic system is a subsystem that is essential for the functioning of the bloo
Document 2:::
In cardiovascular physiology, stroke volume (SV) is the volume of blood pumped from the left ventricle per beat. Stroke volume is calculated using measurements of ventricle volumes from an echocardiogram and subtracting the volume of the blood in the ventricle at the end of a beat (called end-systolic volume) from the volume of blood just prior to the beat (called end-diastolic volume). The term stroke volume can apply to each of the two ventricles of the heart, although it usually refers to the left ventricle. The stroke volumes for each ventricle are generally equal, both being approximately 70 mL in a healthy 70-kg man.
Stroke volume is an important determinant of cardiac output, which is the product of stroke volume and heart rate, and is also used to calculate ejection fraction, which is stroke volume divided by end-diastolic volume. Because stroke volume decreases in certain conditions and disease states, stroke volume itself correlates with cardiac function.
Calculation
Its value is obtained by subtracting end-systolic volume (ESV) from end-diastolic volume (EDV) for a given ventricle.
In a healthy 70-kg man, ESV is approximately 50 mL and EDV is approximately 120mL, giving a difference of 70 mL for the stroke volume.
Stroke work refers to the work, or pressure of the blood ("P") multiplied by the stroke volume.
ESV and EDV are fixed variables; heart rate and stroke volume are not.
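Using the example figures from the text (EDV about 120 mL, ESV about 50 mL), the quantities defined above can be computed directly. A minimal Python sketch; the resting heart rate of 70 beats per minute is an assumption for illustration.

def stroke_volume_ml(edv_ml: float, esv_ml: float) -> float:
    return edv_ml - esv_ml                     # SV = EDV - ESV

def cardiac_output_l_per_min(sv_ml: float, heart_rate_bpm: float) -> float:
    return sv_ml * heart_rate_bpm / 1000.0     # CO = SV x HR, converted to litres per minute

def ejection_fraction(sv_ml: float, edv_ml: float) -> float:
    return sv_ml / edv_ml                      # EF = SV / EDV

sv = stroke_volume_ml(120.0, 50.0)
print(sv)                                      # 70.0 mL
print(cardiac_output_l_per_min(sv, 70.0))      # 4.9 L/min
print(round(ejection_fraction(sv, 120.0), 2))  # 0.58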
Determinants
Men, on average, have higher stroke volumes than women due to the larger size of their hearts. However, stroke volume depends on several factors such as heart size, contractility, duration of contraction, preload (end-diastolic volume), and afterload. Because women's oxygen uptake, and therefore their need for blood flow, is not correspondingly lower, a higher cardiac frequency makes up for their smaller stroke volume.
Exercise
Prolonged aerobic exercise training may also increase stroke volume, which frequently results in a lower (resting) heart rate. Reduced heart rat
Document 3:::
The right atrioventricular orifice (right atrioventricular opening) is the large oval aperture of communication between the right atrium and ventricle in the heart.
Situated at the base of the atrium, it measures about 3.8 to 4 cm. in diameter and is surrounded by a fibrous ring, covered by the lining membrane of the heart; it is considerably larger than the corresponding aperture on the left side, being sufficient to admit the ends of four fingers.
It is guarded by the tricuspid valve.
See also
Left atrioventricular orifice
Document 4:::
A ventricle is one of two large chambers toward the bottom of the heart that collect and expel blood towards the peripheral beds within the body and lungs. The blood pumped by a ventricle is supplied by an atrium, an adjacent chamber in the upper heart that is smaller than a ventricle. Interventricular means between the ventricles (for example the interventricular septum), while intraventricular means within one ventricle (for example an intraventricular block).
In a four-chambered heart, such as that in humans, there are two ventricles that operate in a double circulatory system: the right ventricle pumps blood into the pulmonary circulation to the lungs, and the left ventricle pumps blood into the systemic circulation through the aorta.
Structure
Ventricles have thicker walls than atria and generate higher blood pressures. The physiological load on the ventricles requiring pumping of blood throughout the body and lungs is much greater than the pressure generated by the atria to fill the ventricles. Further, the left ventricle has thicker walls than the right because it needs to pump blood to most of the body while the right ventricle fills only the lungs.
On the inner walls of the ventricles are irregular muscular columns called trabeculae carneae, which cover all of the inner ventricular surfaces except that of the conus arteriosus in the right ventricle. There are three types of these muscles. The third type, the papillary muscles, give origin at their apices to the chordae tendineae, which attach to the cusps of the tricuspid valve and to the mitral valve.
The mass of the left ventricle, as estimated by magnetic resonance imaging, averages 143 g ± 38.4 g, with a range of 87–224 g.
The right ventricle is equal in size to the left ventricle and contains roughly 85 millilitres (3 imp fl oz; 3 US fl oz) in the adult. Its upper front surface is rounded and convex, and forms much of the sternocostal surface of the heart. Its under surface is flattened, forming pa
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The right side of the heart collects what type of blood from the body?
A. dioxide-poor
B. oxygen-poor
C. oxygen-rich
D. potassium-rich
Answer:
|
|
ai2_arc-928
|
multiple_choice
|
A negative effect of the invention and use of paper is the
|
[
"increased use of glass bottles.",
"increased number of trees cut down.",
"decreased pollution in trash dumps.",
"decreased amount of books to read."
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria.
Introduction
Traditional exam script marking began in Cambridge in 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.)
Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental.
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel
Document 3:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 4:::
An item bank, or question bank, is a repository of test items that belong to a testing program, as well as all information pertaining to those items. In most applications of testing and assessment, the items are of multiple-choice format, but any format can be used. Items are pulled from the bank and assigned to test forms for publication either as a paper-and-pencil test or some form of e-assessment.
Types of information
An item bank will not only include the text of each item, but also extensive information regarding test development and psychometric characteristics of the items. Examples of such information include:
Item author
Date written
Item status (e.g., new, pilot, active, retired)
Angoff ratings
Correct answer
Item format
Classical test theory statistics
Item response theory statistics
Linkage to test blueprint
Item history (e.g., usage date(s) and reviews)
User-defined fields
In India, a popular question bank series is the Oswaal Question Bank, which covers Indian board and competitive examinations such as CBSE, CISCE, pre-university (state board) courses, JEE, NEET, CLAT, and CUET.
Item banking software
Because an item bank is essentially a simple database, it can be stored in database software or even a spreadsheet such as Microsoft Excel. However, there are several dozen commercially-available software programs specifically designed for item banking. The advantages that these provide are related to assessment. For example, items are presented on the computer screen as they would appear to a test examinee, and item response theory parameters can be translated into item response functions or information functions. Additionally, there are functionalities for publication, such as formatting a set of items to be printed as a paper-and-pencil test.
Some item banks also have test administration functionalities, such as being able to deliver e-assessment or process "bubble" answer sheets.
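Because an item bank is, as noted above, essentially a simple database, a few of the metadata fields listed earlier can be modelled directly. The following Python sketch is purely illustrative (the field names and records are hypothetical, not drawn from any real item-banking product) and shows how the active items might be pulled for a test form.
```python
from dataclasses import dataclass
@dataclass
class Item:
item_id: str
author: str
status: str # e.g. "new", "pilot", "active", "retired"
correct_answer: str
blueprint_area: str # linkage to the test blueprint
# A toy bank with three hypothetical records.
bank = [
Item("Q001", "A. Author", "active", "B", "algebra"),
Item("Q002", "A. Author", "retired", "D", "algebra"),
Item("Q003", "B. Author", "active", "C", "geometry"),
]
# Assemble a form (paper-and-pencil or e-assessment) from currently active items.
form = [item for item in bank if item.status == "active"]
print([item.item_id for item in form]) # ['Q001', 'Q003']
```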
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A negative effect of the invention and use of paper is the
A. increased use of glass bottles.
B. increased number of trees cut down.
C. decreased pollution in trash dumps.
D. decreased amount of books to read.
Answer:
|
|
sciq-7430
|
multiple_choice
|
The “knee-jerk" motion that people involuntarily perform after being struck in the knee in a certain way is an example of what kind of behavior?
|
[
"sensor",
"reaction",
"spasm",
"reflex"
] |
D
|
Relevant Documents:
Document 0:::
Biological motion is motion that comes from actions of a biological organism. Humans and animals are able to understand those actions through experience, identification, and higher level neural processing. Humans use biological motion to identify and understand familiar actions, which is involved in the neural processes for empathy, communication, and understanding other's intentions. The neural network for biological motion is highly sensitive to the observer's prior experience with the action's biological motions, allowing for embodied learning. This is related to a research field that is broadly known as embodied cognitive science, along with research on mirror neurons.
For instance, a well-known example of sensitivity to a specific type of biological motion is expert dancers observing others dancing. Compared to people who do not know how to dance, expert dancers show greater sensitivity to the biological motion of the dance style of their expertise. The same expert dancer would also show similar but lower sensitivity to dance styles outside of their expertise. The differences in perception of dance motions suggest that the ability to perceive and understand biological motion is strongly influenced by the observer's experience with the action. A similar expertise effect has been observed in different types of action, such as music making, language, scientific thinking, basketball, and walking.
History
The phenomenon of human sensitivity to biological motion was first documented by Swedish perceptual psychologist, Gunnar Johansson, in 1973. He is best known for his experiments that used point light displays (PLDs). Johansson attached light bulbs to body parts and joints of actors performing various actions in the dark. He filmed these actions, yielding point lights from each bulb moving on a black background. Johansson found that people were able to recognize what the actors were doing when the PLD was moving, but not when it was stationary. Johansson's inve
Document 1:::
In biology, a reflex, or reflex action, is an involuntary, unplanned sequence or action and nearly instantaneous response to a stimulus.
Reflexes are found with varying levels of complexity in organisms with a nervous system. A reflex occurs via neural pathways in the nervous system called reflex arcs. A stimulus initiates a neural signal, which is carried to a synapse. The signal is then transferred across the synapse to a motor neuron, which evokes a target response. These neural signals do not always travel to the brain, so many reflexes are an automatic response to a stimulus that does not receive or need conscious thought.
Many reflexes are fine-tuned to increase organism survival and self-defense. This is observed in reflexes such as the startle reflex, which provides an automatic response to an unexpected stimulus, and the feline righting reflex, which reorients a cat's body when falling to ensure safe landing. The simplest type of reflex, a short-latency reflex, has a single synapse, or junction, in the signaling pathway. Long-latency reflexes produce nerve signals that are transduced across multiple synapses before generating the reflex response.
Types of human reflexes
Myotatic reflexes
The myotatic or muscle stretch reflexes (sometimes known as deep tendon reflexes) provide information on the integrity of the central nervous system and peripheral nervous system. This information can be detected using electromyography (EMG). Generally, decreased reflexes indicate a peripheral problem, and lively or exaggerated reflexes a central one. A stretch reflex is the contraction of a muscle in response to its lengthwise stretch.
Biceps reflex (C5, C6)
Brachioradialis reflex (C5, C6, C7)
Extensor digitorum reflex (C6, C7)
Triceps reflex (C6, C7, C8)
Patellar reflex or knee-jerk reflex (L2, L3, L4)
Ankle jerk reflex (Achilles reflex) (S1, S2)
While the reflexes above are stimulated mechanically, the term H-reflex refers to the analogous reflex stimulated
Document 2:::
Psychomotor learning is the relationship between cognitive functions and physical movement. Psychomotor learning is demonstrated by physical skills such as movement, coordination, manipulation, dexterity, grace, strength, speed—actions which demonstrate the fine or gross motor skills, such as use of precision instruments or tools, and walking. Sports and dance are the richest realms of gross psychomotor skills.
Behavioral examples include driving a car, throwing a ball, and playing a musical instrument. In psychomotor learning research, attention is given to the learning of coordinated activity involving the arms, hands, fingers, and feet, while verbal processes are not emphasized.
Stages of psychomotor development
According to Paul Fitts and Michael Posner's three-stage model, when learning psychomotor skills, individuals progress through the cognitive stage, the associative stage, and the autonomic stage. The cognitive stage is marked by awkward, slow, and choppy movements that the learner tries to control. The learner has to think about each movement before attempting it. In the associative stage, the learner spends less time thinking about every detail; however, the movements are still not a permanent part of the brain. In the autonomic stage, the learner can refine the skill through practice, but no longer needs to think about the movement.
Factors affecting psychomotor skills
Psychological feedback
Amount of practice
Task complexity
Work distribution
Motive-incentive conditions
Environmental factors
How motor behaviors are recorded
The motor cortices are involved in the formation and retention of memories and skills. When an individual learns physical movements, this leads to changes in the motor cortex. The more practiced a movement is, the stronger the neural encoding becomes. A study cited how the cortical areas include neurons that process movements and that these neurons change their behavior during and after being exposed to tasks. Psychomotor le
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
The three-axis acceleration switch is a micromachined microelectromechanical systems (MEMS) sensor that detects whether an acceleration event has exceeded a predefined threshold. It is a small, compact device, only 5mm by 5mm, and measures acceleration in the x, y, and z axes. It was developed by the Army Research Laboratory for the purposes of traumatic brain injury (TBI) research and was first introduced in 2012 at the 25th International Conference on Micro Electro Mechanical Systems (MEMS).
The three-axis acceleration switch was designed to obtain acceleration data more effectively than a conventional accelerometer in order to more accurately characterize the forces and shocks responsible for TBI. While miniature accelerometers require a constant power draw, the three-axis acceleration switch only draws current when it senses an acceleration event, using up less energy and allowing the use of smaller batteries. The three-axis acceleration switch has shown to exhibit an expected battery lifetime that is about 100 times better than that of a digital accelerometer. In return, however, the acceleration switch has a lower resolution than that of a digital or analog accelerometer.
One potential application of the three-axis acceleration switch is in studying the head impacts of players in high-risk contact sports. Due to the size of conventional accelerometers, measuring the acceleration requires the device to be implemented inside the player's helmet, which is designed to mitigate the collision forces and thus may not accurately reflect the true level of injury potential. In contrast, the miniature nature of the acceleration switch makes it easier for the switch to be affixed directly onto the participant's head.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The “knee-jerk" motion that people involuntarily perform after being struck in the knee in a certain way is an example of what kind of behavior?
A. sensor
B. reaction
C. spasm
D. reflex
Answer:
|
|
sciq-1278
|
multiple_choice
|
The length that an object has travelled in one or multiple directions can also be called what?
|
[
"velocity",
"distance",
"range",
"axis"
] |
B
|
Relevant Documents:
Document 0:::
Linear motion, also called rectilinear motion, is one-dimensional motion along a straight line, and can therefore be described mathematically using only one spatial dimension. The linear motion can be of two types: uniform linear motion, with constant velocity (zero acceleration); and non-uniform linear motion, with variable velocity (non-zero acceleration). The motion of a particle (a point-like object) along a line can be described by its position , which varies with (time). An example of linear motion is an athlete running a 100-meter dash along a straight track.
Linear motion is the most basic of all motion. According to Newton's first law of motion, objects that do not experience any net force will continue to move in a straight line with a constant velocity until they are subjected to a net force. Under everyday circumstances, external forces such as gravity and friction can cause an object to change the direction of its motion, so that its motion cannot be described as linear.
One may compare linear motion to general motion. In general motion, a particle's position and velocity are described by vectors, which have a magnitude and direction. In linear motion, the directions of all the vectors describing the system are equal and constant which means the objects move along the same axis and do not change direction. The analysis of such systems may therefore be simplified by neglecting the direction components of the vectors involved and dealing only with the magnitude.
Background
Displacement
The motion in which all the particles of a body move through the same distance in the same time is called translatory motion. There are two types of translatory motion: rectilinear motion and curvilinear motion. Since linear motion is motion in a single dimension, the distance traveled by an object in a particular direction is the same as its displacement. The SI unit of displacement is the metre. If x1 is the initial position of an object and x2 is the final position, then mathematically the displacement is given by Δx = x2 − x1.
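As a small illustration of the distinction between the distance traveled along a path and the net displacement, here is a short Python sketch (the function names and values are illustrative only, not from the source).
```python
def distance_traveled(positions):
"""Total path length along a line: sum of absolute changes in position."""
return sum(abs(b - a) for a, b in zip(positions, positions[1:]))
def displacement(positions):
"""Net change in position: final position minus initial position."""
return positions[-1] - positions[0]
# Move 5 m forward along a straight track, then 3 m back.
path = [0.0, 5.0, 2.0]
print(distance_traveled(path)) # 8.0 (metres of path covered)
print(displacement(path)) # 2.0 (net displacement in metres)
```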
Document 1:::
Velocity is the speed in combination with the direction of motion of an object. Velocity is a fundamental concept in kinematics, the branch of classical mechanics that describes the motion of bodies.
Velocity is a physical vector quantity: both magnitude and direction are needed to define it. The scalar absolute value (magnitude) of velocity is called speed, being a coherent derived unit whose quantity is measured in the SI (metric system) as metres per second (m/s or m⋅s−1). For example, "5 metres per second" is a scalar, whereas "5 metres per second east" is a vector. If there is a change in speed, direction, or both, then the object is said to be undergoing an acceleration.
Constant velocity vs acceleration
To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion in a straight path thus, a constant velocity means motion in a straight line at a constant speed.
For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed, but does not have a constant velocity because its direction changes. Hence, the car is considered to be undergoing an acceleration.
Difference between speed and velocity
While the terms speed and velocity are often colloquially used interchangeably to connote how fast an object is moving, in scientific terms they are different. Speed, the scalar magnitude of a velocity vector, denotes only how fast an object is moving, while velocity indicates both an object's speed and direction.
Equation of motion
Average velocity
Velocity is defined as the rate of change of position with respect to time, which may also be referred to as the instantaneous velocity to emphasize the distinction from the average velocity. In some applications the average velocity of an object might be needed, that is to say, the constant velocity that would provide the same resultant displacement as a variable velocity in the same time interval, v(t), over some
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
Sinuosity, sinuosity index, or sinuosity coefficient of a continuously differentiable curve having at least one inflection point is the ratio of the curvilinear length (along the curve) and the Euclidean distance (straight line) between the end points of the curve. This dimensionless quantity can also be rephrased as the "actual path length" divided by the "shortest path length" of a curve.
The value ranges from 1 (case of straight line) to infinity (case of a closed loop, where the shortest path length is zero or for an infinitely-long actual path).
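As a rough numerical illustration of this definition, the sketch below approximates the sinuosity of a sampled curve as the ratio of its polyline length to the straight-line distance between its end points (a discretized stand-in for the curvilinear length; the function name and the sampling scheme are illustrative assumptions, not from the source).
```python
import math
def sinuosity(points):
"""Ratio of the sampled path length to the straight-line distance between
the end points; undefined (infinite) for a closed loop, where the end
points coincide."""
path_length = sum(math.dist(points[i], points[i + 1])
for i in range(len(points) - 1))
endpoint_distance = math.dist(points[0], points[-1])
return path_length / endpoint_distance
# One half-period of a sine wave, sampled finely; the result approaches the
# analytic value of roughly 1.216 as the sampling becomes denser.
samples = [(x, math.sin(x)) for x in (i * math.pi / 1000 for i in range(1001))]
print(round(sinuosity(samples), 3)) # ~1.216
```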
Interpretation
The curve must be continuous (no jump) between the two ends. The sinuosity value is really significant when the line is continuously differentiable (no angular point). The distance between both ends can also be evaluated by a plurality of segments according to a broken line passing through the successive inflection points (sinuosity of order 2).
The calculation of the sinuosity is valid in a 3-dimensional space (e.g. for the central axis of the small intestine), although it is often performed in a plane (with then a possible orthogonal projection of the curve onto the selected plane; "classic" sinuosity on the horizontal plane, longitudinal profile sinuosity on the vertical plane).
The classification of a sinuosity (e.g. strong / weak) often depends on the cartographic scale of the curve (see the coastline paradox for further details) and on the velocity of the object flowing through it (river, avalanche, car, bicycle, bobsleigh, skier, high-speed train, etc.): the sinuosity of the same curved line could be considered very strong for a high-speed train but low for a river. Nevertheless, it is possible to see a very strong sinuosity in the succession of a few river bends, or of hairpin turns on some mountain roads.
Notable values
The sinuosity S of:
2 inverted continuous semicircles located in the same plane is π/2 ≈ 1.5708. It is independent of the circle radius;
a sine function (over a whole number n of half-periods), wh
Document 4:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The length that an object has travelled in one or multiple directions can also be called what?
A. velocity
B. distance
C. range
D. axis
Answer:
|
|
ai2_arc-1080
|
multiple_choice
|
Where will a sidewalk feel hottest on a warm, clear day?
|
[
"Under a picnic table",
"In direct sunlight",
"Under a puddle",
"In the shade"
] |
B
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 1:::
Phoenix Union Bioscience High School is part of the Phoenix Union High School District, with a campus in downtown Phoenix, Arizona, US. The school specialises in science education. A new building was constructed and the existing one renovated, opening in the fall of 2007.
Enrollment
Bioscience hosts approximately 180 freshmen through seniors. The first class of 43 students graduated from Bioscience in May 2010. 97 percent of its 10th graders passed the AIMS Math exam (in 2009), the highest public (non-charter) school percentage in the Valley, and No. 2 in the state. Their science scores were No. 3 in the state among non-charter schools.
In its first year of eligibility, Bioscience earned the maximum "Excelling" Achievement Profile from the State.
Campus
The US$10 million campus, which opened in October 2007, is located in Phoenix's downtown Biotechnology Center and is open to students throughout the District. The Bioscience High School campus, which was designed by The Orcutt-Winslow Partnership, won the American School Board Journal's Learning By Design 2009 Grand Prize Award. The school received this award for its classrooms, collaborative learning spaces, and smooth circulation.
Phoenix Union High School District received a $2.4 million small schools grant from the City of Phoenix to renovate Bioscience's existing historic McKinley building for a Bio-medical program. It includes administrative office, four classrooms, a library/community room and a student demonstration area.
In 2014, Bioscience ranked number 27 on the Best Education Degrees Web site's "Most Amazing High School Campuses In The World" list, ranked by their modern designs. The school has a solar charging station, and is partially powered by solar panels.
Document 2:::
Tech City College (Formerly STEM Academy) is a free school sixth form located in the Islington area of the London Borough of Islington, England.
It originally opened in September 2013 as STEM Academy Tech City and specialised in Science, Technology, Engineering and Maths (STEM) and the Creative Application of Maths and Science. In September 2015, STEM Academy joined the Aspirations Academy Trust and was renamed Tech City College. Tech City College offers A-levels and BTECs as programmes of study for students.
Document 3:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 4:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where will a sidewalk feel hottest on a warm, clear day?
A. Under a picnic table
B. In direct sunlight
C. Under a puddle
D. In the shade
Answer:
|
|
sciq-5182
|
multiple_choice
|
Bone marrow is found inside many bones and produces what?
|
[
"tumors",
"lymphocytes",
"sugar",
"apoptosis"
] |
B
|
Relevant Documents:
Document 0:::
Bone marrow is a semi-solid tissue found within the spongy (also known as cancellous) portions of bones. In birds and mammals, bone marrow is the primary site of new blood cell production (or haematopoiesis). It is composed of hematopoietic cells, marrow adipose tissue, and supportive stromal cells. In adult humans, bone marrow is primarily located in the ribs, vertebrae, sternum, and bones of the pelvis. Bone marrow comprises approximately 5% of total body mass in healthy adult humans, such that a man weighing 73 kg (161 lbs) will have around 3.7 kg (8 lbs) of bone marrow.
Human marrow produces approximately 500 billion blood cells per day, which join the systemic circulation via permeable vasculature sinusoids within the medullary cavity. All types of hematopoietic cells, including both myeloid and lymphoid lineages, are created in bone marrow; however, lymphoid cells must migrate to other lymphoid organs (e.g. thymus) in order to complete maturation.
Bone marrow transplants can be conducted to treat severe diseases of the bone marrow, including certain forms of cancer such as leukemia. Several types of stem cells are related to bone marrow. Hematopoietic stem cells in the bone marrow can give rise to hematopoietic lineage cells, and mesenchymal stem cells, which can be isolated from the primary culture of bone marrow stroma, can give rise to bone, adipose, and cartilage tissue.
Structure
The composition of marrow is dynamic, as the mixture of cellular and non-cellular components (connective tissue) shifts with age and in response to systemic factors. In humans, marrow is colloquially characterized as "red" or "yellow" marrow (, , respectively) depending on the prevalence of hematopoietic cells vs fat cells. While the precise mechanisms underlying marrow regulation are not understood, compositional changes occur according to stereotypical patterns. For example, a newborn baby's bones exclusively contain hematopoietically active "red" marrow, and there is a pro
Document 1:::
In haematology, atypical localization of immature precursors (ALIP) refers to the finding of atypically localized precursors (myeloblasts and promyelocytes) on bone marrow biopsy. In healthy humans, precursors are rare, are found localized near the endosteum, and consist of 1–2 cells. In some cases of myelodysplastic syndromes, immature precursors might be located in the intertrabecular region and occasionally aggregate as clusters of 3–5 cells. The presence of ALIPs is associated with a worse prognosis of MDS. Recently, in bone marrow sections of patients with acute myeloid leukemia, cells similar to ALIPs were defined as ALIP-like clusters. The presence of ALIP-like clusters in AML patients in remission was reported to be associated with early relapse of the disease.
Document 2:::
Megakaryocyte–erythroid progenitor cells, among other blood cells, are generated as a result of hematopoiesis, which occurs in the bone marrow. Hematopoietic stem cells can differentiate into one of two progenitor cells: the common lymphoid progenitor and the common myeloid progenitor. MEPs derive from the common myeloid progenitor lineage. Megakaryocyte/erythrocyte progenitor cells must commit to becoming either platelet-producing megakaryocytes via megakaryopoiesis or erythrocyte-producing erythroblasts via erythropoiesis. Most of the blood cells produced in the bone marrow during hematopoiesis come from megakaryocyte/erythrocyte progenitor cells.
Document 3:::
Hematopoietic stem cells (HSCs) have high regenerative potentials and are capable of differentiating into all blood and immune system cells. Despite this impressive potential, HSCs have limited potential to produce more multipotent stem cells. This limited self-renewal potential is protected through maintenance of a quiescent state in HSCs. Stem cells maintained in this quiescent state are known as long term HSCs (LT-HSCs). During quiescence, HSCs maintain a low level of metabolic activity and do not divide. LT-HSCs can be signaled to proliferate, producing either myeloid or lymphoid progenitors. Production of these progenitors does not come without a cost: When grown under laboratory conditions that induce proliferation, HSCs lose their ability to divide and produce new progenitors. Therefore, understanding the pathways that maintain proliferative or quiescent states in HSCs could reveal novel pathways to improve existing therapeutics involving HSCs.
Background
All adult stem cells can undergo two types of division: symmetric and asymmetric. When a cell undergoes symmetric division, it can either produce two differentiated cells or two new stem cells. When a cell undergoes asymmetric division, it produces one stem and one differentiated cell. Production of new stem cells is necessary to maintain this population within the body. Like all cells, hematopoietic stem cells undergo metabolic shifts to meet their bioenergetic needs throughout development. These metabolic shifts play an important role in signaling, generating biomass, and protecting the cell from damage. Metabolic shifts also guide development in HSCs and are one key factor in determining if an HSC will remain quiescent, symmetrically divide, or asymmetrically divide. As mentioned above, quiescent cells maintain a low level of oxidative phosphorylation and primarily rely on glycolysis to generate energy. Fatty acid beta-oxidation has been shown to influence fate decisions in HSCs. In contrast, proliferat
Document 4:::
Myeloid tissue, in the bone marrow sense of the word myeloid (myelo- + -oid), is tissue of bone marrow, of bone marrow cell lineage, or resembling bone marrow, and myelogenous tissue (myelo- + -genous) is any tissue of, or arising from, bone marrow; in these senses the terms are usually used synonymously, as for example with chronic myeloid/myelogenous leukemia.
In hematopoiesis, myeloid cells, or myelogenous cells are blood cells that arise from a progenitor cell for granulocytes, monocytes, erythrocytes, or platelets (the common myeloid progenitor, that is, CMP or CFU-GEMM), or in a narrower sense also often used, specifically from the lineage of the myeloblast (the myelocytes, monocytes, and their daughter types). Thus, although all blood cells, even lymphocytes, are normally born in the bone marrow in adults, myeloid cells in the narrowest sense of the term can be distinguished from lymphoid cells, that is, lymphocytes, which come from common lymphoid progenitor cells that give rise to B cells and T cells. Those cells' differentiation (that is, lymphopoiesis) is not complete until they migrate to lymphatic organs such as the spleen and thymus for programming by antigen challenge. Thus, among leukocytes, the term myeloid is associated with the innate immune system, in contrast to lymphoid, which is associated with the adaptive immune system. Similarly, myelogenous usually refers to nonlymphocytic white blood cells, and erythroid can often be used to distinguish "erythrocyte-related" from that sense of myeloid and from lymphoid.
The word myelopoiesis has several senses in a way that parallels those of myeloid, and myelopoiesis in the narrower sense is the regulated formation specifically of myeloid leukocytes (myelocytes), allowing that sense of myelopoiesis to be contradistinguished from erythropoiesis and lymphopoiesis (even though all blood cells are normally produced in the marrow in adults).
Myeloid neoplasms always concern bone marrow cell lineage and ar
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Bone marrow is found inside many bones and produces what?
A. tumors
B. lymphocytes
C. sugar
D. apoptosis
Answer:
|
|
sciq-8556
|
multiple_choice
|
What colors in a neon sign represent real neon?
|
[
"red, orange",
"blue, green",
"purple, blue",
"yellow, white"
] |
A
|
Relevant Documents:
Document 0:::
Green Light, green light, green-light or greenlight may refer to:
Green-colored light, part of the visible spectrum
Arts, entertainment, and media
Films and television
Green Light (1937 film), starring Errol Flynn
Green Light (2002 film), a Turkish film written and directed by Faruk Aksoy
"Green Light" (Breaking Bad), a third-season episode of Breaking Bad
Greenlight, formal approval of a project to move forward
Literature
Green Light, a 1935 novel by Lloyd C. Douglas
"Green Light", the final passage of F. Scott Fitzgerald's novel The Great Gatsby
Greenlights (book), a 2020 book by Matthew McConaughey
Music
Albums
Green Light (Bonnie Raitt album), 1982
Green Light (Cliff Richard album), 1978
The Green Light, a 2009 mixtape by Bow Wow
Songs
"Green Light" (Cliff Richard song) (1979)
"Green Light" (Beyoncé song) (2006)
"Green Light" (John Legend song) (2008)
"Green Light" (Roll Deep song) (2010)
"Green Light" (Lorde song) (2017)
"Green Light" (Valery Leontiev song) (1984)
"Green Light", by the American Breed from Bend Me, Shape Me (1968)
"Green Light", by Girls' Generation from Lion Heart
"Green Light", by Hank Thompson (1954)
"Green Light", by Lil Durk from Love Songs 4 the Streets 2
"Green Light", by R. Kelly from Write Me Back
"Green Light", by Sonic Youth from Evol
"Green Light", by the Bicycles from Oh No, It's Love
"Green Lights", by Aloe Blacc (2011)
"Greenlight" (Pitbull song) (2016)
"Green Lights", by Sarah Jarosz from Undercurrent (2016)
"Green Light", by Kylie Minogue from Tension (2023)
"Greenlight", by 5 Seconds of Summer from 5 Seconds of Summer
"Greenlight", by Enisa Nikaj which represented New York in the American Song Contest
"Greenlights" (song), by Krewella
Computing and technology
Greenlight (Internet service), a fiber-optic Internet service provided by the city of Wilson, North Carolina, US
Greenlight Networks, a fiber-optic Internet service in Rochester, New York, US
Steam Greenlight, a service part of Val
Document 1:::
Mnemonics are used to help memorize the electronic color codes for resistors. Mnemonics describing specific and relatable scenarios are more memorable than abstract phrases.
Resistor color code
The first letter of the color code is matched by order of increasing magnitude. The electronic color codes, in order, are:
0 = Black
1 = Brown
2 = Red
3 = Orange
4 = Yellow
5 = Green
6 = Blue
7 = Violet
8 = Gray
9 = White.
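As a quick illustration of how this digit-to-color mapping is applied in practice, here is a small Python sketch (illustrative only; the helper names are assumptions) that decodes the first three bands of a resistor: two significant digits followed by a power-of-ten multiplier.
```python
# Digit values from the list above (violet is sometimes written as purple).
COLOR_DIGITS = {
"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
"green": 5, "blue": 6, "violet": 7, "gray": 8, "white": 9,
}
def resistor_ohms(band1: str, band2: str, multiplier_band: str) -> int:
"""Decode a resistor's first three bands: two significant digits
followed by a power-of-ten multiplier."""
significant = 10 * COLOR_DIGITS[band1] + COLOR_DIGITS[band2]
return significant * 10 ** COLOR_DIGITS[multiplier_band]
print(resistor_ohms("yellow", "violet", "red")) # 4700 ohms (4.7 kOhm)
```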
Easy to remember
A mnemonic which includes color name(s) generally reduces the chances of confusing black and brown. Some mnemonics that are easy to remember:
Big Boys Race Our Young Girls But Violet Generally Wins.
Better Be Right Or Your Great Big Venture Goes West.
Beetle Bailey Runs Over Your General Before Very Good Witnesses.
Beach Bums Rarely Offer You Gatorade But Very Good Water.
Buster Brown Races Our Young Girls But Violet Generally Wins.
Better Be Right Or Your Great Big Vacation Goes Wrong.
Better Be Right Or Your Great Big Values Go Wrong.
Better Be Right Or Your Great Big Plan Goes Wrong. (with P = Purple for Violet)
Back-Breaking Rascals Often Yield Grudgingly But Virtuous Gentlemen Will Give Shelter Nobly. (with tolerance bands Gold, Silver or None)
Better Be Right Or Your Great Big Plan Goes Wrong - Go Start Now!
Black Beetles Running Over Your Garden Bring Very Grey Weather.
Bad Booze Rots Our Young Guts But Vodka Goes Well – get some now.
Bad Boys Run Over Yellow Gardenias Behind Victory Garden Walls.
Bat Brained Resistor Order You Gotta Be Very Good With.
Betty Brown Runs Over Your Garden But Violet Gingerly Walks.
Big Beautiful Roses Occupy Your Garden But Violets Grow Wild.
Big Brown Rabbits Often Yield Great Big Vocal Groans When Gingerly Slapped Needlessly.
Black Bananas Really Offend Your Girlfriend But Violets Get Welcomed.
Black Birds Run Over Your Gay Barely Visible Grey Worms.
Badly Burnt Resistors On Your Ground Bus Void General Warranty.
Billy Brown Ran Out Yelling Get
Document 2:::
Colored music notation is a technique used to facilitate enhanced learning in young music students by adding visual color to written musical notation. It is based upon the concept that color can affect the observer in various ways, and combines this with standard learning of basic notation.
Basis
Viewing color has been widely shown to change an individual's emotional state and stimulate neurons. Experiments with the Lüscher color test have shown that when individuals are required to contemplate pure red for varying lengths of time, this color has a decidedly stimulating effect on the nervous system; blood pressure increases, and respiration rate and heart rate both increase. Pure blue, on the other hand, has the reverse effect; observers experience a decline in blood pressure, heart rate, and breathing. Given these findings, it has been suggested that the influence of colored musical notation would be similar.
Music education
In music education, color is typically used in method books to highlight new material. Stimuli received through several senses excite more neurons in several localized areas of the cortex, thereby reinforcing the learning process and improving retention. This information has been proven by other researchers; Chute (1978) reported that "elementary students who viewed a colored version of an instructional film scored significantly higher on both immediate and delayed tests than did students who viewed a monochrome version".
Color studies
Effect on achievement
A researcher in this field, George L. Rogers is the Director of Music Education at Westfield State College. He is also the author of 25 articles in publications that include the Music Educators Journal, The Instrumentalist, and the Journal of Research in Music Education. In 1991, George L. Rogers did a study that researched the effect of color-coded notation on music achievement of elementary instrumental students. Rogers states that the color-co
Document 3:::
In the signage industry, neon signs are electric signs lighted by long luminous gas-discharge tubes that contain rarefied neon or other gases. They are the most common use for neon lighting, which was first demonstrated in a modern form in December 1910 by Georges Claude at the Paris Motor Show. While they are used worldwide, neon signs were popular in the United States from about the 1920s to 1950s. The installations in Times Square, many originally designed by Douglas Leigh, were famed, and there were nearly 2,000 small shops producing neon signs by 1940. In addition to signage, neon lighting is used frequently by artists and architects, and (in a modified form) in plasma display panels and televisions. The signage industry has declined in the past several decades, and cities are now concerned with preserving and restoring their antique neon signs.
Light emitting diode arrays can be formed to simulate the appearance of neon lamps.
History
The neon sign is an evolution of the earlier Geissler tube, which is a sealed glass tube containing a "rarefied" gas (the gas pressure in the tube is well below atmospheric pressure). When a voltage is applied to electrodes inserted through the glass, an electrical glow discharge results. Geissler tubes were popular in the late 19th century, and the different colors they emitted were characteristics of the gases within. They were unsuitable for general lighting, as the pressure of the gas inside typically declined with use. The direct predecessor of neon tube lighting was the Moore tube, which used nitrogen or carbon dioxide as the luminous gas and a patented mechanism for maintaining pressure. Moore tubes were sold for commercial lighting for a number of years in the early 1900s.
The discovery of neon in 1898 by British scientists William Ramsay and Morris W. Travers included the observation of a brilliant red glow in Geissler tubes. Travers wrote, "the blaze of crimson light from the tube told its own story and was a sigh
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
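The intended answer is "decreases"; as an editorial aside (not part of the source article), the standard ideal-gas argument fits in one line:

    \[
      \delta Q = 0 \;\Rightarrow\; nC_V\,dT = -p\,dV = -\frac{nRT}{V}\,dV
      \;\Rightarrow\; T\,V^{\gamma-1} = \text{const},
    \]
    so an expansion ($dV > 0$) with no heat exchange forces $dT < 0$.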
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What colors in a neon sign represent real neon?
A. red, orange
B. blue, green
C. purple, blue
D. yellow, white
Answer:
|
|
sciq-3544
|
multiple_choice
|
How do yeasts reproduce asexually?
|
[
"by synchronizing",
"by outcropping",
"by merging",
"by budding"
] |
D
|
Relevant Documents:
Document 0:::
The yeast Saccharomyces cerevisiae is a simple single-celled eukaryote with both a diploid and haploid mode of existence. The mating of yeast only occurs between haploids, which can be either the a or α (alpha) mating type and thus display simple sexual differentiation. Mating type is determined by a single locus, MAT, which in turn governs the sexual behaviour of both haploid and diploid cells. Through a form of genetic recombination, haploid yeast can switch mating type as often as every cell cycle.
Mating type and the life cycle of Saccharomyces cerevisiae
S. cerevisiae (yeast) can stably exist as either a diploid or a haploid. Both haploid and diploid yeast cells reproduce by mitosis, with daughter cells budding off of mother cells. Haploid cells are capable of mating with other haploid cells of the opposite mating type (an a cell can only mate with an α cell, and vice versa) to produce a stable diploid cell. Diploid cells, usually upon facing stressful conditions such as nutrient depletion, can undergo meiosis to produce four haploid spores: two a spores and two α spores.
Differences between a and α cells
a cells produce 'a-factor', a mating pheromone which signals the presence of an a cell to neighbouring α cells. a cells respond to α-factor, the α cell mating pheromone, by growing a projection (known as a shmoo, due to its distinctive shape resembling the Al Capp cartoon character Shmoo) towards the source of α-factor. Similarly, α cells produce α-factor, and respond to a-factor by growing a projection towards the source of the pheromone. The response of haploid cells only to the mating pheromones of the opposite mating type allows mating between a and α cells, but not between cells of the same mating type.
These phenotypic differences between a and α cells are due to a different set of genes being actively transcribed and repressed in cells of the two mating types. a cells activate genes which produce a-factor and produce a cell surface receptor (Ste2) w
Document 1:::
Mating types are the microorganism equivalent to sexes in multicellular lifeforms and are thought to be the ancestor to distinct sexes. They also occur in macro-organisms such as fungi.
Definition
Mating types are the microorganism equivalent to sex in higher organisms and occur in isogamous and anisogamous species. Depending on the group, different mating types are often referred to by numbers, letters, or simply "+" and "−" instead of "male" and "female", which refer to "sexes" or differences in size between gametes. Syngamy can only take place between gametes carrying different mating types.
Occurrence
Reproduction by mating types is especially prevalent in fungi. Filamentous ascomycetes usually have two mating types referred to as "MAT1-1" and "MAT1-2", following the yeast mating-type locus (MAT). Under standard nomenclature, MAT1-1 (which may informally be called MAT1) encodes for a regulatory protein with an alpha box motif, while MAT1-2 (informally called MAT2) encodes for a protein with a high-mobility group (HMG) DNA-binding motif, as in the yeast mating type MATα1. The corresponding mating types in yeast, a non-filamentous ascomycete, are referred to as MATa and MATα.
Mating type genes in ascomycetes are called idiomorphs rather than alleles due to the uncertainty of the origin by common descent. The proteins they encode are transcription factors which regulate both the early and late stages of the sexual cycle. Heterothallic ascomycetes produce gametes, which present a single Mat idiomorph, and syngamy will only be possible between gametes carrying complementary mating types. On the other hand, homothallic ascomycetes produce gametes that can fuse with every other gamete in the population (including its own mitotic descendants) most often because each haploid contains the two alternate forms of the Mat locus in its genome.
Basidiomycetes can have thousands of different mating types.
In the ascomycete Neurospora crassa matings are restricted to intera
Document 2:::
Heterothallic species have sexes that reside in different individuals. The term is applied particularly to distinguish heterothallic fungi, which require two compatible partners to produce sexual spores, from homothallic ones, which are capable of sexual reproduction from a single organism.
In heterothallic fungi, two different individuals contribute nuclei to form a zygote. Examples of heterothallism are included for Saccharomyces cerevisiae, Aspergillus fumigatus, Aspergillus flavus, Penicillium marneffei and Neurospora crassa. The heterothallic life cycle of N. crassa is given in some detail, since similar life cycles are present in other heterothallic fungi.
Life cycle of Saccharomyces cerevisiae
The yeast Saccharomyces cerevisiae is heterothallic. This means that each yeast cell is of a certain mating type and can only mate with a cell of the other mating type. During vegetative growth that ordinarily occurs when nutrients are abundant, S. cerevisiae reproduces by mitosis as either haploid or diploid cells. However, when starved, diploid cells undergo meiosis to form haploid spores. Mating occurs when haploid cells of opposite mating type, MATa and MATα, come into contact. Ruderfer et al. pointed out that such contacts are frequent between closely related yeast cells for two reasons. The first is that cells of opposite mating type are present together in the same ascus, the sac that contains the tetrad of cells directly produced by a single meiosis, and these cells can mate with each other. The second reason is that haploid cells of one mating type, upon cell division, often produce cells of the opposite mating type with which they may mate.
Katz Ezov et al. presented evidence that in natural S. cerevisiae populations clonal reproduction and a type of “self-fertilization” (in the form of intratetrad mating) predominate. Ruderfer et al. analyzed the ancestry of natural S. cerevisiae strains and concluded that outcrossing occurs only about once every
Document 3:::
The tetrad is the four spores produced after meiosis of a yeast or other Ascomycota, Chlamydomonas or other alga, or a plant. After parent haploids mate, they produce diploids. Under appropriate environmental conditions, diploids sporulate and undergo meiosis. The meiotic products, spores, remain packaged in the parental cell body to produce the tetrad.
Genetic typification
If the two parents have a mutation in two different genes, the tetrad can segregate these genes as the parental ditype (PD), the non-parental ditype (NPD) or as the tetratype (TT).
Parental ditype (PD) is a tetrad type containing two different genotypes, both of which are parental; that is, a spore arrangement in ascomycetes that contains only the two non-recombinant-type ascospores.
Non-parental ditype (NPD) is a tetrad type containing two different genotypes, both of which are recombinant; that is, a spore arrangement that contains only the two recombinant-type ascospores (assuming two segregating loci).
Tetratype (TT) is a tetrad containing four different genotypes, two parental and two recombinant. A spore arrangement of two parental and two recombinant spores indicates a single crossover between two linked loci.
Linkage analysis
The ratio between the different segregation types arising after sporulation is a measure of the linkage between the two genes.
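As an illustrative aside (not from the original article), the standard Perkins estimate turns these tetrad counts into a map distance; the counts used below are hypothetical:

    def map_distance_cM(pd, npd, tt):
        """Perkins estimate of map distance (centimorgans) from tetrad counts:
        100 * (TT/2 + 3*NPD) / total tetrads."""
        total = pd + npd + tt
        return 100.0 * (0.5 * tt + 3.0 * npd) / total

    # Hypothetical counts: 70 PD, 2 NPD and 28 TT tetrads.
    # The excess of PD over NPD indicates linkage; the estimate is about 20 cM.
    print(round(map_distance_cM(70, 2, 28), 1))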
Tetrad dissection
Tetrad dissection has become a powerful tool of yeast geneticists, and is used in conjunction with the many established procedures utilizing the versatility of yeasts as model organisms. Use of modern microscopy and micromanipulation techniques allows the four haploid spores of a yeast tetrad to be separated and germinated individually to form isolated spore colonies.
Uses
Tetrad analysis can be used to confirm whether a phenotype is caused by a specific mutation, to construct strains, and to investigate gene interactions. Since the frequency of tetrad segregation types is influenced by the recombination frequency for
Document 4:::
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States.
Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is currently suspended with the message:
"Please check back with us in 2017".
External links
MicrobeLibrary
Microbiology
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How do yeasts reproduce asexually?
A. by synchronizing
B. by outcropping
C. by merging
D. by budding
Answer:
|
|
sciq-11676
|
multiple_choice
|
Terrestrial ecosystems, also known for their diversity, are grouped into large categories called what?
|
[
"monomes",
"bisomes",
"substrates",
"biomes"
] |
D
|
Relevant Documents:
Document 0:::
Ecosystem diversity deals with the variations in ecosystems within a geographical location and its overall impact on human existence and the environment.
Ecosystem diversity addresses the combined characteristics of biotic properties (biodiversity) and abiotic properties (geodiversity). It is a variation in the ecosystems found in a region or the variation in ecosystems over the whole planet. Ecological diversity includes the variation in both terrestrial and aquatic ecosystems. Ecological diversity can also take into account the variation in the complexity of a biological community, including the number of different niches, the number of trophic levels, and other ecological processes. An example of ecological diversity on a global scale would be the variation in ecosystems, such as deserts, forests, grasslands, wetlands and oceans. Ecological diversity is the largest scale of biodiversity, and within each ecosystem, there is a great deal of both species and genetic diversity.
Impact
Diversity in the ecosystem is significant to human existence for a variety of reasons. Ecosystem diversity boosts the availability of oxygen via photosynthesis by the plant organisms living in a habitat. Diversity in an aquatic environment helps in the purification of water by plant varieties for use by humans. Diversity also increases the range of plant varieties, which serve as a good source of medicines and herbs for human use. A lack of diversity in the ecosystem produces the opposite result.
Examples
Some examples of ecosystems that are rich in diversity are:
Deserts
Forests
Large marine ecosystems
Marine ecosystems
Old-growth forests
Rainforests
Tundra
Coral reefs
Marine
Ecosystem diversity as a result of evolutionary pressure
Ecological diversity around the world can be directly linked to the evolutionary and selective pressures that constrain the diversity outcome of the ecosystems within different niches. Tundras, Rainforests, coral reefs and deciduous forests all are form
Document 1:::
Ecological classification or ecological typology is the classification of land or water into geographical units that represent variation in one or more ecological features. Traditional approaches focus on geology, topography, biogeography, soils, vegetation, climate conditions, living species, habitats, water resources, and sometimes also anthropic factors. Most approaches pursue the cartographical delineation or regionalisation of distinct areas for mapping and planning.
Approaches to classifications
Different approaches to ecological classifications have been developed in terrestrial, freshwater and marine disciplines. Traditionally these approaches have focused on biotic components (vegetation classification), abiotic components (environmental approaches) or implied ecological and evolutionary processes (biogeographical approaches). Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy (ecotope).
Vegetation classification
Vegetation is often used to classify terrestrial ecological units. Vegetation classification can be based on vegetation structure and floristic composition. Classifications based entirely on vegetation structure overlap with land cover mapping categories.
Many schemes of vegetation classification are in use by the land, resource and environmental management agencies of different national and state jurisdictions. The International Vegetation Classification (IVC or EcoVeg) has been recently proposed but has not been yet widely adopted.
Vegetation classifications have limited use in aquatic systems, since only a handful of freshwater or marine habitats are dominated by plants (e.g. kelp forests or seagrass meadows). Also, some extreme terrestrial environments, like subterranean or cryogenic ecosystems, are not properly described in vegetation c
Document 2:::
In ecology, habitat refers to the array of resources, physical and biotic factors that are present in an area, such as to support the survival and reproduction of a particular species. A species habitat can be seen as the physical manifestation of its ecological niche. Thus "habitat" is a species-specific term, fundamentally different from concepts such as environment or vegetation assemblages, for which the term "habitat-type" is more appropriate.
The physical factors may include (for example): soil, moisture, range of temperature, and light intensity. Biotic factors include the availability of food and the presence or absence of predators. Every species has particular habitat requirements, with habitat generalist species able to thrive in a wide array of environmental conditions while habitat specialist species requiring a very limited set of factors to survive. The habitat of a species is not necessarily found in a geographical area, it can be the interior of a stem, a rotten log, a rock or a clump of moss; a parasitic organism has as its habitat the body of its host, part of the host's body (such as the digestive tract), or a single cell within the host's body.
Habitat types are environmental categorizations of different environments based on the characteristics of a given geographical area, particularly vegetation and climate. Thus habitat types do not refer to a single species but to multiple species living in the same area. For example, terrestrial habitat types include forest, steppe, grassland, semi-arid or desert. Fresh-water habitat types include marshes, streams, rivers, lakes, and ponds; marine habitat types include salt marshes, the coast, the intertidal zone, estuaries, reefs, bays, the open sea, the sea bed, deep water and submarine vents.
Habitat types may change over time. Causes of change may include a violent event (such as the eruption of a volcano, an earthquake, a tsunami, a wildfire or a change in oceanic currents); or change may occur mo
Document 3:::
Ecosystem Functional Type (EFT) is an ecological concept to characterize ecosystem functioning. Ecosystem Functional Types are defined as groups of ecosystems or patches of the land surface that share similar dynamics of matter and energy exchanges between the biota and the physical environment. The EFT concept is analogous to the Plant Functional Types (PFTs) concept, but defined at a higher level of the biological organization. As plant species can be grouped according to common functional characteristics, ecosystems can be grouped according to their common functional behavior.
One of the most used approaches to implement this concept has been the identification of EFTs from the satellite-derived dynamics of primary production, an essential and integrative descriptor of ecosystem functioning.
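As a hedged sketch only: studies in this literature commonly summarize each location's annual NDVI curve with three functional descriptors (annual mean, intra-annual coefficient of variation, and timing of the maximum). The function and sample curve below are illustrative assumptions, not the authors' actual procedure:

    import numpy as np

    def eft_descriptors(monthly_ndvi):
        """Summarize a 12-month NDVI curve with three functional descriptors:
        annual mean (productivity proxy), coefficient of variation (seasonality)
        and the month of the maximum (phenology)."""
        ndvi = np.asarray(monthly_ndvi, dtype=float)
        annual_mean = ndvi.mean()
        seasonality = ndvi.std() / annual_mean
        month_of_max = int(ndvi.argmax()) + 1
        return annual_mean, seasonality, month_of_max

    # Illustrative curve for a strongly seasonal ecosystem peaking in July.
    print(eft_descriptors([0.2, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.7, 0.5, 0.4, 0.3, 0.2]))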
History
In 1992, Soriano and Paruelo proposed the concept of Biozones to identify vegetation units that share ecosystem functional characteristics using time-series of satellite images of spectral vegetation indices. Biozones were later renamed to EFTs by Paruelo et al. (2001), using an equivalent definition and methodology. was one of the first authors that used the term EFT as "aggregated components of ecosystems whose interactions with one another and with the environment produce differences in patterns of ecosystem structure and dynamics". Walker (1997) proposed the use of a similar term, vegetation functional types, for groups of PFTs in sets that constitute the different states of vegetation succession in non-equilibrium ecosystems. The same term was applied by Scholes et al. in a wider sense for those areas having similar ecological attributes, such as PFTs composition, structure, phenology, biomass or productivity. Several studies have applied hierarchy and patch dynamic theories for the definition of ecosystem and landscape functional types at different spatial scales, by scaling-up emergent structural and functional properties from patches to regions. Valentin
Document 4:::
There are 62 named Ecological Systems found in Montana. These systems are described in the Montana Field Guides - Ecological Systems of Montana.
About
An ecosystem is a biological environment consisting of all the organisms living in a particular area, as well as all the nonliving, physical components of the environment with which the organisms interact, such as air, soil, water and sunlight. It is all the organisms in a given area, along with the nonliving (abiotic) factors with which they interact; a biological community and its physical environment. As stated in an article from Montana State University's Institute on Ecosystems: "An ecosystem can be small, such as the area under a pine tree or a single hot spring in Yellowstone National Park, or it can be large, such as the Rocky Mountains, the rainforest or the Antarctic Ocean." Montana Fish, Wildlife and Parks (FWP) describes Montana's main ecosystems as montane forest, intermountain grasslands, plains grasslands and shrub grasslands. The Montana Agricultural Experiment Station (MAES) categorized Montana's ecosystems based on the different rangelands, recognizing 22 different ecosystems, whereas the Montana Natural Heritage Program names 62 ecosystems for the entire state.
Forest and Woodland Systems
Northern Rocky Mountain Mesic Montane Mixed Conifer Forest
Rocky Mountain Subalpine Mesic Spruce-Fir Forest and Woodland
Northwestern Great Plains - Black Hills Ponderosa Pine Woodland and Savanna
Northern Rocky Mountain Dry-Mesic Montane Mixed Conifer Forest
Rocky Mountain Foothill Limber Pine - Juniper Woodland
Northern Rocky Mountain Foothill Conifer Wooded Steppe
Rocky Mountain Lodgepole Pine Forest
Middle Rocky Mountain Montane Douglas-Fir Forest and Woodland
Northern Rocky Mountain Ponderosa Pine Woodland and Savanna
Rocky Mountain Poor Site Lodgepole Pine Forest
Rocky Mountain Subalpine Dry-Mesic Spruce-Fir Forest and Woodland
Northern Rocky Mountain Subalpin
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Terrestrial ecosystems, also known for their diversity, are grouped into large categories called what?
A. monomes
B. bisomes
C. substrates
D. biomes
Answer:
|
|
sciq-881
|
multiple_choice
|
What state of matter completes the list: solid, liquid, gas?
|
[
"plasma",
"ice",
"energy",
"power"
] |
A
|
Relevant Documents:
Document 0:::
States of matter are distinguished by changes in the properties of matter associated with external factors like pressure and temperature. States are usually distinguished by a discontinuity in one of those properties: for example, raising the temperature of ice produces a discontinuity at 0°C, as energy goes into a phase transition, rather than temperature increase. The three classical states of matter are solid, liquid and gas. In the 20th century, however, increased understanding of the more exotic properties of matter resulted in the identification of many additional states of matter, none of which are observed in normal conditions.
Low-energy states of matter
Classical states
Solid: A solid holds a definite shape and volume without a container. The particles are held very close to each other.
Amorphous solid: A solid in which there is no far-range order of the positions of the atoms.
Crystalline solid: A solid in which atoms, molecules, or ions are packed in regular order.
Plastic crystal: A molecular solid with long-range positional order but with constituent molecules retaining rotational freedom.
Quasicrystal: A solid in which the positions of the atoms have long-range order, but this is not in a repeating pattern.
Liquid: A mostly non-compressible fluid. Able to conform to the shape of its container but retains a (nearly) constant volume independent of pressure.
Liquid crystal: Properties intermediate between liquids and crystals. Generally, able to flow like a liquid but exhibiting long-range order.
Gas: A compressible fluid. Not only will a gas take the shape of its container but it will also expand to fill the container.
Modern states
Plasma: Free charged particles, usually in equal numbers, such as ions and electrons. Unlike gases, plasma may self-generate magnetic fields and electric currents and respond strongly and collectively to electromagnetic forces. Plasma is very uncommon on Earth (except for the ionosphere), although it is the mo
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
Document 3:::
Solid is one of the four fundamental states of matter along with liquid, gas, and plasma. The molecules in a solid are closely packed together and contain the least amount of kinetic energy. A solid is characterized by structural rigidity (as in rigid bodies) and resistance to a force applied to the surface. Unlike a liquid, a solid object does not flow to take on the shape of its container, nor does it expand to fill the entire available volume like a gas. The atoms in a solid are bound to each other, either in a regular geometric lattice (crystalline solids, which include metals and ordinary ice), or irregularly (an amorphous solid such as common window glass). Solids cannot be compressed with only a little pressure, whereas gases can be, because the molecules in a gas are loosely packed.
The branch of physics that deals with solids is called solid-state physics, and is the main branch of condensed matter physics (which also includes liquids). Materials science is primarily concerned with the physical and chemical properties of solids. Solid-state chemistry is especially concerned with the synthesis of novel materials, as well as the science of identification and chemical composition.
Microscopic description
The atoms, molecules or ions that make up solids may be arranged in an orderly repeating pattern, or irregularly. Materials whose constituents are arranged in a regular pattern are known as crystals. In some cases, the regular ordering can continue unbroken over a large scale, for example diamonds, where each diamond is a single crystal. Solid objects that are large enough to see and handle are rarely composed of a single crystal, but instead are made of a large number of single crystals, known as crystallites, whose size can vary from a few nanometers to several meters. Such materials are called polycrystalline. Almost all common metals, and many ceramics, are polycrystalline.
In other materials, there is no long-range order in the
Document 4:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What state of matter completes the list: solid, liquid, gas?
A. plasma
B. ice
C. energy
D. power
Answer:
|
|
scienceQA-4498
|
multiple_choice
|
What do these two changes have in common?
an iceberg melting slowly
baking cookies
|
[
"Both are only physical changes.",
"Both are caused by heating.",
"Both are caused by cooling.",
"Both are chemical changes."
] |
B
|
Step 1: Think about each change.
An iceberg melting is a change of state. So, it is a physical change. An iceberg is made of frozen water. As it melts, the water changes from a solid to a liquid. But a different type of matter is not formed.
Baking cookies is a chemical change. The type of matter in the cookie dough changes when it is baked. The cookie dough turns into cookies!
Step 2: Look at each answer choice.
Both are only physical changes.
An iceberg melting is a physical change. But baking cookies is not.
Both are chemical changes.
Baking cookies is a chemical change. But an iceberg melting is not.
Both are caused by heating.
Both changes are caused by heating.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 2:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include :
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering : Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 3:::
Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory.
In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results.
Purpose
Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible.
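To make the Dick-and-Jane example concrete, here is a minimal mean-sigma linear equating sketch; the form statistics are hypothetical and this is only one of several equating methods, not necessarily the one a given testing program would use:

    def linear_equate(x, mean_x, sd_x, mean_y, sd_y):
        """Mean-sigma linear equating: map a form-X score onto the form-Y scale
        so that the two score distributions share the same mean and spread."""
        return sd_y / sd_x * (x - mean_x) + mean_y

    # Hypothetical statistics: form A (hard) has mean 60 and SD 10;
    # form B (easy) has mean 70 and SD 12. Dick's 60% on form A then
    # corresponds to about 70% on form B, i.e. on par with Jane.
    print(linear_equate(60.0, 60.0, 10.0, 70.0, 12.0))   # -> 70.0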
Equating in item response theory
In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri
Document 4:::
Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria.
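ACJ tools typically fit a Rasch-type model for paired comparisons; the following is a minimal, non-adaptive Bradley-Terry sketch (illustrative only, not the algorithm of any particular ACJ product) that recovers a strength scale from a list of (winner, loser) judgements:

    from collections import defaultdict

    def bradley_terry(judgements, iterations=200):
        """Estimate Bradley-Terry strengths from (winner, loser) pairs using
        the classical Zermelo/Ford iterative updates."""
        items = {i for pair in judgements for i in pair}
        wins = defaultdict(int)
        meetings = defaultdict(int)            # comparisons per unordered pair
        for winner, loser in judgements:
            wins[winner] += 1
            meetings[frozenset((winner, loser))] += 1
        strength = {i: 1.0 for i in items}
        for _ in range(iterations):
            updated = {}
            for i in items:
                denom = sum(meetings[frozenset((i, j))] / (strength[i] + strength[j])
                            for j in items if j != i and frozenset((i, j)) in meetings)
                updated[i] = wins[i] / denom if denom else strength[i]
            total = sum(updated.values())
            strength = {i: v * len(items) / total for i, v in updated.items()}
        return strength

    # Illustrative judgements on three scripts A, B and C.
    print(bradley_terry([("A", "B"), ("A", "B"), ("A", "C"), ("B", "C"), ("C", "B")]))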
Introduction
Traditional exam script marking began in Cambridge in 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.)
Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental.
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
an iceberg melting slowly
baking cookies
A. Both are only physical changes.
B. Both are caused by heating.
C. Both are caused by cooling.
D. Both are chemical changes.
Answer:
|
sciq-10180
|
multiple_choice
|
What colorful arc-shaped atmospheric phenomena are produced by a combination of refraction and reflection?
|
[
"lightning",
"sunsets",
"rainbows",
"shadows"
] |
C
|
Relevant Documents:
Document 0:::
Atmospheric optical phenomena include:
Afterglow
Airglow
Alexander's band, the dark region between the two bows of a double rainbow.
Alpenglow
Anthelion
Anticrepuscular rays
Aurora
Auroral light (northern and southern lights, aurora borealis and aurora australis)
Belt of Venus
Brocken Spectre
Circumhorizontal arc
Circumzenithal arc
Cloud iridescence
Crepuscular rays
Earth's shadow
Earthquake lights
Glories
Green flash
Halos, of Sun or Moon, including sun dogs
Haze
Heiligenschein or halo effect, partly caused by the opposition effect
Ice blink
Light pillar
Lightning
Mirages (including Fata Morgana)
Monochrome Rainbow
Moon dog
Moonbow
Nacreous cloud/Polar stratospheric cloud
Rainbow
Subsun
Sun dog
Tangent arc
Tyndall effect
Upper-atmospheric lightning, including red sprites, Blue jets, and ELVES
Water sky
See also
Document 1:::
Atmospheric optics is "the study of the optical characteristics of the atmosphere or products of atmospheric processes .... [including] temporal and spatial resolutions beyond those discernible with the naked eye". Meteorological optics is "that part of atmospheric optics concerned with the study of patterns observable with the naked eye". Nevertheless, the two terms are sometimes used interchangeably.
Meteorological optical phenomena, as described in this article, are concerned with how the optical properties of Earth's atmosphere cause a wide range of optical phenomena and visual perception phenomena.
Examples of meteorological phenomena include:
The blue color of the sky. This is from Rayleigh scattering, which sends more higher-frequency, shorter-wavelength (blue) sunlight into the eye of an observer than light of other frequencies/wavelengths (a rough scattering ratio is sketched at the end of this list).
The reddish color of the Sun when it is observed through a thick atmosphere, as during a sunrise or sunset. This is because long-wavelength (red) light is scattered less than blue light. The red light reaches the observer's eye, whereas the blue light is scattered out of the line of sight.
Other colours in the sky, such as glowing skies at dusk and dawn. These are from additional particulate matter in the sky that scatter different colors at different angles.
Halos, afterglows, coronas, polar stratospheric clouds, and sun dogs. These are from scattering, or refraction, by ice crystals and from other particles in the atmosphere. They depend on different particle sizes and geometries.
Mirages. These are optical phenomena in which light rays are bent due to thermal variations in the refractive index of air, producing displaced or heavily distorted images of distant objects. Other optical phenomena associated with this include the Novaya Zemlya effect, in which the Sun has a distorted shape and rises earlier or sets later than predicted. A spectacular form of refraction, called the Fata Morgana, occurs with a temperature inversion, in
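As referenced in the first item above, a back-of-the-envelope Rayleigh-scattering ratio (an editorial illustration, not from the original article): scattered intensity varies roughly as 1/wavelength^4, so 450 nm blue light is scattered several times more strongly than 700 nm red light.

    # Rayleigh scattering efficiency scales as 1 / wavelength**4.
    blue_nm, red_nm = 450.0, 700.0
    ratio = (red_nm / blue_nm) ** 4
    print(round(ratio, 1))   # -> about 5.9: blue is scattered roughly 6x more than red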
Document 2:::
The circumzenithal arc, also called the circumzenith arc (CZA), upside-down rainbow, and the Bravais arc, is an optical phenomenon similar in appearance to a rainbow, but belonging to the family of halos arising from refraction of sunlight through ice crystals, generally in cirrus or cirrostratus clouds, rather than from raindrops. The arc is located at a considerable distance (approximately 46°) above the observed Sun and at most forms a quarter of a circle centered on the zenith. It has been called "a smile in the sky", its first impression being that of an upside-down rainbow. The CZA is one of the brightest and most colorful members of the halo family. Its colors, ranging from violet on top to red at the bottom, are purer than those of a rainbow because there is much less overlap in their formation.
The intensity distribution along the circumzenithal arc requires consideration of several effects: Fresnel's reflection and transmission amplitudes, atmospheric attenuation, chromatic dispersion (i.e. the width of the arc), azimuthal angular dispersion (ray bundling), and geometrical constraints. In effect, the CZA is brightest when the Sun is observed at about 20°.
Contrary to public awareness, the CZA is not a rare phenomenon, but it tends to be overlooked since it occurs so far overhead. It is worthwhile to look out for it when sun dogs are visible, since the same type of ice crystals that cause them (plate-shaped hexagonal prisms in horizontal orientation) are responsible for the CZA.
Formation
The light that forms the CZA enters an ice crystal through its flat top face, and exits through a side prism face. The refraction of almost parallel sunlight through what is essentially a 90-degree prism accounts for the wide color separation and the purity of color. The CZA can only form when the sun is at an altitude lower than 32.2°. The CZA is brightest when the sun is at 22° above the horizon, which causes sunlight to enter and exit the crystals at the minimum d
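As an editorial check of the 32.2° figure (a sketch assuming a single refraction through the 90-degree top/side prism and an ice refractive index of about 1.31):

    import math

    N_ICE = 1.31   # approximate refractive index of ice for visible light

    def cza_ray_escapes(sun_elevation_deg, n=N_ICE):
        """Trace one ray entering the horizontal top face of a plate crystal and
        test whether it can exit the vertical side face (no total internal reflection)."""
        i1 = math.radians(90.0 - sun_elevation_deg)   # incidence on the top face
        r1 = math.asin(math.sin(i1) / n)              # Snell's law at the top face
        i2 = math.pi / 2 - r1                         # incidence on the side face
        return n * math.sin(i2) <= 1.0

    # Escape requires sin(i1) >= sqrt(n**2 - 1), i.e. a solar elevation below about 32.2 deg.
    e_max = 90.0 - math.degrees(math.asin(math.sqrt(N_ICE ** 2 - 1)))
    print(round(e_max, 1), cza_ray_escapes(20.0), cza_ray_escapes(40.0))   # 32.2 True False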
Document 3:::
Atmospheric optics ray tracing codes - this article lists codes for light scattering that use the ray-tracing technique to study atmospheric optics phenomena such as rainbows and halos. The scattering particles can be large raindrops or hexagonal ice crystals. Such codes are one of many approaches to calculating light scattering by particles.
Geometric optics (ray tracing)
Ray tracing techniques can be applied to study light scattering by spherical and non-spherical particles under the condition that the size of a particle is much larger than the wavelength of light. The light can be considered as a collection of separate rays, each with a width much larger than the wavelength but smaller than the particle. Rays hitting the particle undergo reflection, refraction and diffraction. These rays exit in various directions with different amplitudes and phases. Such ray-tracing techniques are used to describe optical phenomena such as rainbows, or halos produced by large hexagonal ice crystals.
Review of several mathematical techniques is provided in series of publications.
The 46° halo was first explained in 1679 by the French physicist Edmé Mariotte (1620–1684) as being caused by the refraction of light through ice crystals.
Jacobowitz in 1971 was the first to apply the ray-tracing technique to hexagonal ice crystals. Wendling et al. (1979) extended Jacobowitz's work from hexagonal ice particles of infinite length to finite length and combined the Monte Carlo technique with the ray-tracing simulations.
Classification
The compilation contains information about the electromagnetic scattering by hexagonal ice crystals, large raindrops, and relevant links and applications.
Codes for light scattering by hexagonal ice crystals
Relevant scattering codes
Discrete dipole approximation codes
Codes for electromagnetic scattering by cylinders
Codes for electromagnetic scattering by spheres
External links
Scatterlib - Google Code repository of light scattering codes
See
Document 4:::
Atmospheric diffraction is manifested in the following principal ways:
Optical atmospheric diffraction
Radio wave diffraction is the scattering of radio frequency or lower frequencies from the Earth's ionosphere, resulting in the ability to achieve greater distance radio broadcasting.
Sound wave diffraction is the bending of sound waves, as the sound travels around edges of geometric objects. This produces the effect of being able to hear even when the source is blocked by a solid object. The sound waves bend appreciably around the solid object.
However, if the object has a diameter greater than the acoustic wavelength, a 'sound shadow' is cast behind the object where the sound is inaudible. (Note: some sound may be propagated through the object depending on material).
Optical atmospheric diffraction
When light travels through thin clouds made up of nearly uniform sized water or aerosol droplets or ice crystals, diffraction or bending of light occurs as the light is diffracted by the edges of the particles. This degree of bending of light depends on the wavelength (color) of light and the size of the particles. The result is a pattern of rings, which seem to emanate from the Sun, the Moon, a planet, or another astronomical object. The most distinct part of this pattern is a central, nearly white disk. This resembles an atmospheric Airy disc but is not actually an Airy disk. It is different from rainbows and halos, which are mainly caused by refraction.
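As an editorial illustration of that wavelength/size dependence (assuming simple Fraunhofer diffraction by uniform droplets; not from the original article), the angular radius of the first ring is roughly arcsin(1.22 λ / d):

    import math

    def first_ring_radius_deg(wavelength_nm, droplet_diameter_um):
        """Approximate angular radius of the first diffraction ring, in degrees."""
        wavelength_m = wavelength_nm * 1e-9
        diameter_m = droplet_diameter_um * 1e-6
        return math.degrees(math.asin(1.22 * wavelength_m / diameter_m))

    # 10-micrometre droplets and 550 nm (green) light give a ring a few degrees across.
    print(round(first_ring_radius_deg(550.0, 10.0), 1))   # -> about 3.8 degrees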
The left photo shows a diffraction ring around the rising Sun caused by a veil of aerosol. This effect dramatically disappeared when the Sun rose high enough until the pattern was no longer visible on the Earth's surface. This phenomenon is sometimes called the corona effect, not to be confused with the solar corona.
On the right is a 1/10-second exposure showing an overexposed full moon. The Moon is seen through thin vaporous clouds, which glow with a bright disk surrounded by an illuminated red ring. A
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What colorful arc-shaped atmospheric phenomena are produced by a combination of refraction and reflection?
A. lightning
B. sunsets
C. rainbows
D. shadows
Answer:
|
|
sciq-8414
|
multiple_choice
|
What bonds form when pairs of electrons are shared?
|
[
"covalent bonds",
"neutron bonds",
"ionized bonds",
"dissonance bonds"
] |
A
|
Relevant Documents:
Document 0:::
A bonding electron is an electron involved in chemical bonding. This can refer to:
Chemical bond, a lasting attraction between atoms, ions or molecules
Covalent bond or molecular bond, a sharing of electron pairs between atoms
Bonding molecular orbital, an attraction between the atomic orbitals of atoms in a molecule
Chemical bonding
Document 1:::
In chemistry, an electron pair or Lewis pair consists of two electrons that occupy the same molecular orbital but have opposite spins. Gilbert N. Lewis introduced the concepts of both the electron pair and the covalent bond in a landmark paper he published in 1916.
Because electrons are fermions, the Pauli exclusion principle forbids these particles from having the same quantum numbers. Therefore, for two electrons to occupy the same orbital, and thereby have the same orbital quantum number, they must have different spin quantum number. This also limits the number of electrons in the same orbital to two.
The pairing of spins is often energetically favorable, and electron pairs therefore play a large role in chemistry. They can form a chemical bond between two atoms, or they can occur as a lone pair of valence electrons. They also fill the core levels of an atom.
Because the spins are paired, the magnetic moment of the electrons cancel one another, and the pair's contribution to magnetic properties is generally diamagnetic.
Although a strong tendency to pair off electrons can be observed in chemistry, it is also possible that electrons occur as unpaired electrons.
In the case of metallic bonding the magnetic moments also compensate to a large extent, but the bonding is more communal so that individual pairs of electrons cannot be distinguished and it is better to consider the electrons as a collective 'sea'.
A very special case of electron pair formation occurs in superconductivity: the formation of Cooper pairs. In unconventional superconductors, whose crystal structure contains copper anions, the electron pair bond is due to antiferromagnetic spin fluctuations.
See also
Electron pair production
Frustrated Lewis pair
Jemmis mno rules
Lewis acids and bases
Nucleophile
Polyhedral skeletal electron pair theory
Document 2:::
An intramolecular force (or primary force) is any force that binds together the atoms making up a molecule or compound, not to be confused with intermolecular forces, which are the forces present between molecules. The subtle difference in the name comes from the Latin roots of English, with inter meaning between or among and intra meaning inside. Chemical bonds are considered to be intramolecular forces, which are often stronger than the intermolecular forces present between non-bonding atoms or molecules.
Types
The classical model identifies three main types of chemical bonds (ionic, covalent, and metallic), distinguished by the degree of charge separation between participating atoms. The characteristics of the bond formed can be predicted by the properties of the constituent atoms, namely electronegativity. They differ in the magnitude of their bond enthalpies, a measure of bond strength, and thus affect the physical and chemical properties of compounds in different ways. The percentage of ionic character is directly related to the difference in electronegativity of the bonded atoms.
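As an editorial illustration of that relationship (Pauling's empirical formula, which may not be the exact relation the passage has in mind):

    import math

    def percent_ionic_character(delta_chi):
        """Pauling's empirical estimate of percent ionic character
        from the electronegativity difference of the bonded atoms."""
        return 100.0 * (1.0 - math.exp(-0.25 * delta_chi ** 2))

    # Example: NaCl, with Pauling electronegativities Cl = 3.16 and Na = 0.93.
    print(round(percent_ionic_character(3.16 - 0.93)))   # -> roughly 71% ionic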
Ionic bond
An ionic bond can be approximated as complete transfer of one or more valence electrons of atoms participating in bond formation, resulting in a positive ion and a negative ion bound together by electrostatic forces. Electrons in an ionic bond tend to be mostly found around one of the two constituent atoms due to the large electronegativity difference between the two atoms, generally more than 1.9, (greater difference in electronegativity results in a stronger bond); this is often described as one atom giving electrons to the other. This type of bond is generally formed between a metal and nonmetal, such as sodium and chlorine in NaCl. Sodium would give an electron to chlorine, forming a positively charged sodium ion and a negatively charged chloride ion.
Covalent bond
In a true covalent bond, the electrons are shared evenly between the two atoms of the bond; there is little or no charge separa
Document 3:::
A non-bonding electron is an electron not involved in chemical bonding. This can refer to:
Lone pair, with the electron localized on one atom.
Non-bonding orbital, with the electron delocalized throughout the molecule.
Chemical bonding
Document 4:::
Stannide ions,
Some examples of stannide Zintl ions are listed below. Some of them contain 2-centre 2-electron bonds (2c-2e), others are "electron deficient" and bonding sometimes can be described using polyhedral skeletal electron pair theory (Wade's rules) where the number of valence electrons contributed by each tin atom is considered to be 2 (the s electrons do not contribute). There are some examples of silicide and plumbide ions with similar structures, for example tetrahedral , the chain anion (Si2−)n, and .
Sn4− found for example in Mg2Sn.
, tetrahedral with 2c-2e bonds e.g. in CsSn.
, tetrahedral closo-cluster with 10 electrons (2n + 2).
(Sn2−)n zig-zag chain polymeric anion with 2c-2e bonds found for example in BaSn.
closo-
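To show how Wade's rules (polyhedral skeletal electron pair theory) are applied to bare clusters of this kind, here is a minimal Python sketch; it assumes, as stated above, that each bare tin vertex contributes 2 skeletal electrons, and the function name is a hypothetical choice:

def wade_classification(vertices, anion_charge, electrons_per_vertex=2):
    # Total skeletal electrons: 2 per bare Sn vertex plus one per unit of negative charge.
    skeletal_electrons = vertices * electrons_per_vertex + anion_charge
    pairs = skeletal_electrons // 2
    labels = {vertices + 1: "closo", vertices + 2: "nido", vertices + 3: "arachno"}
    return labels.get(pairs, f"{pairs} skeletal pairs (outside the simple closo/nido/arachno series)")

# [Sn5]2-: 5*2 + 2 = 12 skeletal electrons = 6 pairs = n + 1  -> closo
print(wade_classification(vertices=5, anion_charge=2))
# [Sn9]4-: 9*2 + 4 = 22 skeletal electrons = 11 pairs = n + 2 -> nido
print(wade_classification(vertices=9, anion_charge=4))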
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What bonds form when pairs of electrons are shared?
A. covalent bonds
B. neutron bonds
C. ionized bonds
D. dissonance bonds
Answer:
|
|
ai2_arc-75
|
multiple_choice
|
Which property of a mineral can be determined just by looking at it?
|
[
"luster",
"mass",
"weight",
"hardness"
] |
A
|
Relevant Documents:
Document 0:::
Mineral tests are several methods which can help identify the mineral type. They are used widely in mineralogy, hydrocarbon exploration and general mapping. There are over 4,000 known minerals, each with different sub-classes. Elements make minerals and minerals make rocks, so testing minerals in the lab and in the field is essential to understanding the history of a rock, including its zonation, metamorphic history, the processes involved and associated minerals.
The following tests are used on specimen and thin sections through polarizing microscope.
Color
Color of the mineral. This is not mineral specific. For example, quartz can occur in almost any color and shape, and within many rock types.
Streak
Color of the mineral's powder. This can be found by rubbing the mineral against a rough surface such as concrete. This is more accurate but not always mineral specific.
Lustre
This is the way light reflects from the mineral's surface. A mineral can be metallic (shiny) or non-metallic (not shiny).
Transparency
The way light travels through minerals. The mineral can be transparent (clear), translucent (cloudy) or opaque (none).
Specific gravity
Ratio between the weight of the mineral relative to an equal volume of water.
Mineral habitat
The shape of the crystal and habitat.
Magnetism
Magnetic or nonmagnetic. Can be tested by using a magnet or a compass. This does not apply to all iron-bearing minerals (for example, pyrite).
Cleavage
Number, behaviour, size and way cracks fracture in the mineral.
UV fluorescence
Many minerals glow when put under a UV light.
Radioactivity
Is the mineral radioactive or non-radioactive? This is measured by a Geiger counter.
Taste
This is not recommended. Is the mineral salty, bitter or does it have no taste?
Bite Test
This is not recommended. It involves biting a mineral to see if it is generally soft or hard. This was used in early gold exploration to tell the difference between pyrite (fool's gold, hard) and gold (soft).
Hardness
The Mohs Hardn
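To illustrate how several of the tests above can be combined into a simple identification key, here is a toy Python sketch; the tiny reference table is hypothetical and far from exhaustive, with values taken from commonly quoted textbook figures, and it is meant only to show the idea of narrowing candidates by streak, lustre, and Mohs hardness:

# (streak, lustre, approximate Mohs hardness) -- illustrative reference values only
REFERENCE = {
    "quartz":  ("white",          "non-metallic", 7.0),
    "pyrite":  ("greenish-black", "metallic",     6.5),
    "gold":    ("golden-yellow",  "metallic",     2.75),
    "calcite": ("white",          "non-metallic", 3.0),
}

def identify(streak, lustre, hardness, tolerance=0.5):
    # Return the reference minerals consistent with the observed test results.
    return [
        name for name, (ref_streak, ref_lustre, ref_hardness) in REFERENCE.items()
        if ref_streak == streak
        and ref_lustre == lustre
        and abs(ref_hardness - hardness) <= tolerance
    ]

# A hard, metallic specimen with a greenish-black streak points to pyrite rather than gold.
print(identify(streak="greenish-black", lustre="metallic", hardness=6.5))  # ['pyrite']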
Document 1:::
Materials science has shaped the development of civilizations since the dawn of mankind. Better materials for tools and weapons have allowed mankind to spread and conquer, and advancements in material processing like steel and aluminum production continue to impact society today. Historians have regarded materials as such an important aspect of civilizations that entire periods of time have been defined by the predominant material used (Stone Age, Bronze Age, Iron Age). For most of recorded history, control of materials had been through alchemy or empirical means at best. The study and development of chemistry and physics assisted the study of materials, and eventually the interdisciplinary study of materials science emerged from the fusion of these studies. The history of materials science is the study of how different materials were used and developed through the history of Earth and how those materials affected the culture of the peoples of the Earth. The term "Silicon Age" is sometimes used to refer to the modern period of history during the late 20th to early 21st centuries.
Prehistory
In many cases, different cultures leave their materials as the only records, which anthropologists can use to define the existence of such cultures. The progressive use of more sophisticated materials allows archeologists to characterize and distinguish between peoples. This is partially due to the major material of use in a culture and to its associated benefits and drawbacks. Stone-Age cultures were limited by which rocks they could find locally and by those they could acquire through trading. The use of flint around 300,000 BCE is sometimes considered the beginning of the use of ceramics. The use of polished stone axes marks a significant advance, because a much wider variety of rocks could serve as tools.
The innovation of smelting and casting metals in the Bronze Age started to change the way that cultures developed and interacted with each other. Starting around 5,500 BCE,
Document 2:::
See also
List of minerals
Document 3:::
In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects.
Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting.
Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete.
Study
Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the
Document 4:::
Optical mineralogy is the study of minerals and rocks by measuring their optical properties. Most commonly, rock and mineral samples are prepared as thin sections or grain mounts for study in the laboratory with a petrographic microscope. Optical mineralogy is used to identify the mineralogical composition of geological materials in order to help reveal their origin and evolution.
Some of the properties and techniques used include:
Refractive index
Birefringence
Michel-Lévy Interference colour chart
Pleochroism
Extinction angle
Conoscopic interference pattern (Interference figure)
Becke line test
Optical relief
Sign of elongation (Length fast vs. length slow)
Wave plate
History
William Nicol, whose name is associated with the creation of the Nicol prism, is likely the first to prepare thin slices of mineral substances, and his methods were applied by Henry Thronton Maire Witham (1831) to the study of plant petrifactions. This method, of significant importance in petrology, was not at once made use of for the systematic investigation of rocks, and it was not until 1858 that Henry Clifton Sorby pointed out its value. Meanwhile, the optical study of sections of crystals had been advanced by Sir David Brewster and other physicists and mineralogists and it only remained to apply their methods to the minerals visible in rock sections.
Sections
A rock-section should be about one-thousandth of an inch (30 micrometres) in thickness, and is relatively easy to make. A thin splinter of the rock, about 1 centimetre may be taken; it should be as fresh as possible and free from obvious cracks. By grinding it on a plate of planed steel or cast iron with a little fine carborundum it is soon rendered flat on one side, and is then transferred to a sheet of plate glass and smoothed with the finest grained emery until all roughness and pits are removed, and the surface is a uniform plane. The rock chip is then washed, and placed on a copper or iron plate which is heated
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which property of a mineral can be determined just by looking at it?
A. luster
B. mass
C. weight
D. hardness
Answer:
|
|
sciq-5457
|
multiple_choice
|
In vascular plants, the sporophyte generation is what?
|
[
"dominant",
"submissive",
"fast",
"evident"
] |
A
|
Relevant Documents:
Document 0:::
In botany, secondary growth is the growth that results from cell division in the cambia or lateral meristems and that causes the stems and roots to thicken, while primary growth is growth that occurs as a result of cell division at the tips of stems and roots, causing them to elongate, and gives rise to primary tissue. Secondary growth occurs in most seed plants, but monocots usually lack secondary growth. If they do have secondary growth, it differs from the typical pattern of other seed plants.
The formation of secondary vascular tissues from the cambium is a characteristic feature of dicotyledons and gymnosperms. In certain monocots, the vascular tissues are also increased after the primary growth is completed but the cambium of these plants is of a different nature. In the living pteridophytes this feature is extremely rare, only occurring in Isoetes.
Lateral meristems
In many vascular plants, secondary growth is the result of the activity of the two lateral meristems, the cork cambium and vascular cambium. Arising from lateral meristems, secondary growth increases the width of the plant root or stem, rather than its length. As long as the lateral meristems continue to produce new cells, the stem or root will continue to grow in diameter. In woody plants, this process produces wood, and shapes the plant into a tree with a thickened trunk.
Because this growth usually ruptures the epidermis of the stem or roots, plants with secondary growth usually also develop a cork cambium. The cork cambium gives rise to thickened cork cells to protect the surface of the plant and reduce water loss. If this is kept up over many years, this process may produce a layer of cork. In the case of the cork oak it will yield harvestable cork.
In nonwoody plants
Secondary growth also occurs in many nonwoody plants, e.g. tomato, potato tuber, carrot taproot and sweet potato tuberous root. A few long-lived leaves also have secondary growth.
Abnormal secondary growth
Abnormal seco
Document 1:::
The Interpolation Theory, also known as the Intercalation Theory or the Antithetic Theory, is a theory that attempts to explain the origin of the alternation of generations in plants. The Interpolation Theory suggests that the sporophyte generation originated from a haploid, green algal thallus in which repeated mitotic cell divisions of a zygote produced an embryo retained on the thallus and gave rise to the diploid phase (sporophyte). Ensuing evolution caused the sporophyte to become increasingly complex, both organographically and anatomically.
The Interpolation Theory was introduced by Čelakovský (1874) as the Antithetic Theory. Bower (1889) further developed this theory and renamed it the Interpolation Theory. The theory was later supported by Overton (1893), Scott (1896), Strasburger (1897), Williams (1904), and others.
The gradual evolution of an independent, sporophyte phase was viewed by Bower as being closely related to the transition from aquatic to terrestrial plant life on Earth.
Evidence supporting this theory can be found in the life cycle of modern Bryophytes in which the sporophyte is physiologically dependent on the gametophyte. Competing theories include the Transformation theory, which was introduced as the Homologous theory by Čelakovský, and also renamed by Bower.
Document 2:::
The quiescent centre is a group of cells, up to 1,000 in number, in the form of a hemisphere, with the flat face toward the root tip of vascular plants. It is a region in the apical meristem of a root where cell division proceeds very slowly or not at all, but the cells are capable of resuming meristematic activity when the tissue surrounding them is damaged.
Cells of root apical meristems do not all divide at the same rate. Determinations of relative rates of DNA synthesis show that primary roots of Zea, Vicia and Allium have quiescent centres to the meristems, in which the cells divide rarely or never in the course of normal root growth (Clowes, 1958). Such a quiescent centre includes the cells at the apices of the histogens of both stele and cortex. Its presence can be deduced from the anatomy of the apex in Zea (Clowes, 1958), but not in the other species which lack discrete histogens.
History
In 1953, during the course of analysing the organization and function of the root apices, Frederick Albert Lionel Clowes (born 10 September 1921), at the School of Botany (now Department of Plant Sciences), University of Oxford, proposed the term ‘cytogenerative centre’ to denote ‘the region of an apical meristem from which all future cells are derived’. This term had been suggested to him by Mr Harold K. Pusey, a lecturer in embryology at the Department of Zoology and Comparative Anatomy at the same university. The 1953 paper of Clowes reported results of his experiments on Fagus sylvatica and Vicia faba, in which small oblique and wedge-shaped excisions were made at the tip of the primary root, at the most distal level of the root body, near the boundary with the root cap. The results of these experiments were striking and showed that: the root which grew on following the excision was normal at the undamaged meristem side; the nonexcised meristem portion contributed to the regeneration of the excised portion; the regenerated part of the root had abnormal patterning and
Document 3:::
In botany, a plant shoot consists of any plant stem together with its appendages, such as leaves, lateral buds, flowering stems, and flower buds. The new growth from seed germination that grows upward is a shoot where leaves will develop. In the spring, perennial plant shoots are the new growth that grows from the ground in herbaceous plants or the new stem or flower growth that grows on woody plants.
In everyday speech, shoots are often synonymous with stems. Stems, which are an integral component of shoots, provide an axis for buds, fruits, and leaves.
Young shoots are often eaten by animals because the fibers in the new growth have not yet completed secondary cell wall development, making the young shoots softer and easier to chew and digest.
As shoots grow and age, the cells develop secondary cell walls that have a hard and tough structure.
Some plants (e.g. bracken) produce toxins that make their shoots inedible or less palatable.
Shoot types of woody plants
Many woody plants have distinct short shoots and long shoots. In some angiosperms, the short shoots, also called spur shoots or fruit spurs, produce the majority of flowers and fruit. A similar pattern occurs in some conifers and in Ginkgo, although the "short shoots" of some genera such as Picea are so small that they can be mistaken for part of the leaf that they have produced.
A related phenomenon is seasonal heterophylly, which involves visibly different leaves from spring growth and later lammas growth. Whereas spring growth mostly comes from buds formed the previous season, and often includes flowers, lammas growth often involves long shoots.
See also
Bud
Crown (botany)
Heteroblasty (botany), an abrupt change in the growth pattern of some plants as they mature
Lateral shoot
Phyllotaxis, the arrangement of leaves along a plant stem
Seedling
Sterigma, the "woody peg" below the leaf of some conifers
Thorn (botany), true thorns, as distinct from spines or prickles, are short shoots
Document 4:::
Phytomorphology is the study of the physical form and external structure of plants. This is usually considered distinct from plant anatomy, which is the study of the internal structure of plants, especially at the microscopic level. Plant morphology is useful in the visual identification of plants. Recent studies in molecular biology started to investigate the molecular processes involved in determining the conservation and diversification of plant morphologies. In these studies transcriptome conservation patterns were found to mark crucial ontogenetic transitions during the plant life cycle which may result in evolutionary constraints limiting diversification.
Scope
Plant morphology "represents a study of the development, form, and structure of plants, and, by implication, an attempt to interpret these on the basis of similarity of plan and origin". There are four major areas of investigation in plant morphology, and each overlaps with another field of the biological sciences.
First of all, morphology is comparative, meaning that the morphologist examines structures in many different plants of the same or different species, then draws comparisons and formulates ideas about similarities. When structures in different species are believed to exist and develop as a result of common, inherited genetic pathways, those structures are termed homologous. For example, the leaves of pine, oak, and cabbage all look very different, but share certain basic structures and arrangement of parts. The homology of leaves is an easy conclusion to make. The plant morphologist goes further, and discovers that the spines of cactus also share the same basic structure and development as leaves in other plants, and therefore cactus spines are homologous to leaves as well. This aspect of plant morphology overlaps with the study of plant evolution and paleobotany.
Secondly, plant morphology observes both the vegetative (somatic) structures of plants, as well as the reproductive str
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In vascular plants, the sporophyte generation is what?
A. dominant
B. submissive
C. fast
D. evident
Answer:
|
|
ai2_arc-209
|
multiple_choice
|
A male fruit fly is homozygous dominant for gray-body color (G) and is crossed with a female fruit fly that is homozygous recessive for ebony-body color (g). What are the probable phenotypes of the offspring?
|
[
"25% gray, 75% ebony",
"50% gray, 50% ebony",
"100% ebony",
"100% gray"
] |
D
|
Relevant Documents:
Document 0:::
The Punnett square is a square diagram that is used to predict the genotypes of a particular cross or breeding experiment. It is named after Reginald C. Punnett, who devised the approach in 1905. The diagram is used by biologists to determine the probability of an offspring having a particular genotype. The Punnett square is a tabular summary of possible combinations of maternal alleles with paternal alleles. These tables can be used to examine the genotypical outcome probabilities of the offspring of a single trait (allele), or when crossing multiple traits from the parents. The Punnett square is a visual representation of Mendelian inheritance. For multiple traits, using the "forked-line method" is typically much easier than the Punnett square. Phenotypes may be predicted with at least better-than-chance accuracy using a Punnett square, but the phenotype that may appear in the presence of a given genotype can in some instances be influenced by many other factors, as when polygenic inheritance and/or epigenetics are at work.
Zygosity
Zygosity refers to the grade of similarity between the alleles that determine one specific trait in an organism. In its simplest form, a pair of alleles can be either homozygous or heterozygous. Homozygosity, with homo relating to same while zygous pertains to a zygote, is seen when a combination of either two dominant or two recessive alleles code for the same trait. Recessive alleles are always written with lowercase letters. For example, using 'A' as the representative character for each allele, a homozygous dominant pair's genotype would be depicted as 'AA', while homozygous recessive is shown as 'aa'. Heterozygosity, with hetero associated with different, can only be 'Aa' (the capital letter is always presented first by convention). The phenotype of a homozygous dominant pair is 'A', or dominant, while the opposite is true for homozygous recessive. Heterozygous pairs always have a dominant phenotype. To a lesser degree, hemizygosity and nullizygosit
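Because Punnett squares amount to enumerating all combinations of parental gametes, they are easy to generate programmatically. The following is a brief, hypothetical Python helper (not part of any established library) applied to the gray-body fruit fly cross asked about above, GG x gg:

from collections import Counter
from itertools import product

def punnett(parent1, parent2):
    # Enumerate every gamete combination for a single-gene cross, e.g. punnett('Gg', 'gg').
    # sorted() writes the dominant (uppercase) allele first, so 'gG' and 'Gg' are counted together.
    return Counter(''.join(sorted(pair)) for pair in product(parent1, parent2))

def phenotypes(genotype_counts, dominant='G'):
    # Any genotype carrying at least one dominant allele shows the dominant trait.
    result = Counter()
    for genotype, count in genotype_counts.items():
        result['dominant (gray)' if dominant in genotype else 'recessive (ebony)'] += count
    return result

counts = punnett('GG', 'gg')   # homozygous dominant x homozygous recessive
print(counts)                  # Counter({'Gg': 4})  -> all offspring heterozygous
print(phenotypes(counts))      # Counter({'dominant (gray)': 4}) -> 100% gray offspring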
Document 1:::
Under the law of dominance in genetics, an individual expressing a dominant phenotype could contain either two copies of the dominant allele (homozygous dominant) or one copy of each dominant and recessive allele (heterozygous dominant). By performing a test cross, one can determine whether the individual is heterozygous or homozygous dominant.
In a test cross, the individual in question is bred with another individual that is homozygous for the recessive trait and the offspring of the test cross are examined. Since the homozygous recessive individual can only pass on recessive alleles, the allele the individual in question passes on determines the phenotype of the offspring. Thus, this test yields 2 possible situations:
If any of the offspring produced express the recessive trait, the individual in question is heterozygous for the dominant allele.
If all of the offspring produced express the dominant trait, the individual in question is homozygous for the dominant allele.
History
The first uses of test crosses were in Gregor Mendel’s experiments in plant hybridization. While studying the inheritance of dominant and recessive traits in pea plants, he explains that the “signification” (now termed zygosity) of an individual for a dominant trait is determined by the expression patterns of the following generation.
Rediscovery of Mendel’s work in the early 1900s led to an explosion of experiments employing the principles of test crosses. From 1908-1911, Thomas Hunt Morgan conducted test crosses while determining the inheritance pattern of a white eye-colour mutation in Drosophila. These test cross experiments became hallmarks in the discovery of sex-linked traits.
Applications in model organisms
Test crosses have a variety of applications. Common animal organisms, called model organisms, where test crosses are often used include Caenorhabditis elegans and Drosophila melanogaster. Basic procedures for performing test crosses in these organisms are provided belo
Document 2:::
Non-Mendelian inheritance is any pattern in which traits do not segregate in accordance with Mendel's laws. These laws describe the inheritance of traits linked to single genes on chromosomes in the nucleus. In Mendelian inheritance, each parent contributes one of two possible alleles for a trait. If the genotypes of both parents in a genetic cross are known, Mendel's laws can be used to determine the distribution of phenotypes expected for the population of offspring. There are several situations in which the proportions of phenotypes observed in the progeny do not match the predicted values.
Non-Mendelian inheritance plays a role in several disease processes.
Types
Incomplete dominance, codominance, multiple alleles, and polygenic traits follow Mendel's laws, display Mendelian inheritance, and are explained as extensions of Mendel's laws.
Incomplete dominance
In cases of intermediate inheritance due to incomplete dominance, the principle of dominance discovered by Mendel does not apply. Nevertheless, the principle of uniformity works, as all offspring in the F1-generation have the same genotype and same phenotype. Mendel's principle of segregation of genes applies too, as in the F2-generation homozygous individuals with the phenotypes of the P-generation appear. Intermediate inheritance was first examined by Carl Correns in Mirabilis jalapa, which was then used for further genetic experiments. Antirrhinum majus also shows intermediate inheritance of the pigmentation of the blossoms.
Co-dominance
In cases of co-dominance, the genetic traits of both different alleles of the same gene-locus are clearly expressed in the phenotype. For example, in certain varieties of chicken, the allele for black feathers is co-dominant with the allele for white feathers. Heterozygous chickens have a colour described as "erminette", speckled with black and white feathers appearing separately. Many human genes, including one for a protein that controls cholesterol levels in
Document 3:::
In genetics, dominance is the phenomenon of one variant (allele) of a gene on a chromosome masking or overriding the effect of a different variant of the same gene on the other copy of the chromosome. The first variant is termed dominant and the second is called recessive. This state of having two different variants of the same gene on each chromosome is originally caused by a mutation in one of the genes, either new (de novo) or inherited. The terms autosomal dominant or autosomal recessive are used to describe gene variants on non-sex chromosomes (autosomes) and their associated traits, while those on sex chromosomes (allosomes) are termed X-linked dominant, X-linked recessive or Y-linked; these have an inheritance and presentation pattern that depends on the sex of both the parent and the child (see Sex linkage). Since there is only one copy of the Y chromosome, Y-linked traits cannot be dominant or recessive. Additionally, there are other forms of dominance, such as incomplete dominance, in which a gene variant has a partial effect compared to when it is present on both chromosomes, and co-dominance, in which different variants on each chromosome both show their associated traits.
Dominance is a key concept in Mendelian inheritance and classical genetics. Letters and Punnett squares are used to demonstrate the principles of dominance in teaching, and the use of upper-case letters for dominant alleles and lower-case letters for recessive alleles is a widely followed convention. A classic example of dominance is the inheritance of seed shape in peas. Peas may be round, associated with allele R, or wrinkled, associated with allele r. In this case, three combinations of alleles (genotypes) are possible: RR, Rr, and rr. The RR (homozygous) individuals have round peas, and the rr (homozygous) individuals have wrinkled peas. In Rr (heterozygous) individuals, the R allele masks the presence of the r allele, so these individuals also have round peas. Thus, allele R is d
Document 4:::
The Principle of Genetics is a genetics textbook authored by D. Peter Snustad & Michael J. Simmons, an emeritus professor of biology, published by John Wiley & Sons, Inc.
The 6th edition of the book was published in 2012.
Description
The book is sectioned into four parts. The first part, Genetics and the Scientific Method, briefly reviews the history of genetics and the various methods used in genetic study. The second part focuses on Mendelian inheritance, the third part deals with molecular genetics, and the last section deals with quantitative genetics and evolutionary genetics.
Review
The book has been reviewed and rated highly by several editors and geneticists.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A male fruit fly is homozygous dominant for gray-body color (G) and is crossed with a female fruit fly that is homozygous recessive for ebony-body color (g). What are the probable phenotypes of the offspring?
A. 25% gray, 75% ebony
B. 50% gray, 50% ebony
C. 100% ebony
D. 100% gray
Answer:
|
|
scienceQA-7875
|
multiple_choice
|
How long is a bus route across a small town?
|
[
"3 inches",
"3 yards",
"3 feet",
"3 miles"
] |
D
|
The best estimate for the length of a bus route across a small town is 3 miles.
3 inches, 3 feet, and 3 yards are all too short.
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 1:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 2:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover )
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, )
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson )
See also
Glossary of biology
A.P Bio (TV Show)
Document 3:::
Tech City College (Formerly STEM Academy) is a free school sixth form located in the Islington area of the London Borough of Islington, England.
It originally opened in September 2013, as STEM Academy Tech City and specialised in Science, Technology, Engineering and Maths (STEM) and the Creative Application of Maths and Science. In September 2015, STEM Academy joined the Aspirations Academy Trust and was renamed Tech City College. Tech City College offers A-levels and BTECs as programmes of study for students.
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
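For the adiabatic-expansion question quoted above, the expected chain of reasoning (for a quasi-static expansion in which the gas does work on its surroundings) can be written out in a few lines of LaTeX; this is a sketch of the standard textbook argument, not text from the source:

\[ Q = 0 \;\Rightarrow\; \Delta U = -W < 0 \quad (\text{the expanding gas does positive work } W), \]
\[ U_{\text{ideal}} = n C_V T \;\Rightarrow\; \Delta T < 0, \qquad \text{equivalently} \quad T V^{\gamma - 1} = \text{const}, \;\; \gamma = \tfrac{C_p}{C_V} > 1 . \]

So for a quasi-static adiabatic expansion the temperature decreases; the free expansion into a vacuum (no work done, temperature unchanged for an ideal gas) is the usual counterexample that makes this a good conceptual probe.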
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How long is a bus route across a small town?
A. 3 inches
B. 3 yards
C. 3 feet
D. 3 miles
Answer:
|
sciq-8272
|
multiple_choice
|
What system includes the brain and the spinal cord?
|
[
"digestive",
"lymbic",
"muscular",
"central nervous"
] |
D
|
Relevant Documents:
Document 0:::
The following outline is provided as an overview of and topical guide to the human nervous system:
Human nervous system – the part of the human body that coordinates a person's voluntary and involuntary actions and transmits signals between different parts of the body. The human nervous system consists of two main parts: the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS contains the brain and spinal cord. The PNS consists mainly of nerves, which are long fibers that connect the CNS to every other part of the body. The PNS includes motor neurons, mediating voluntary movement; the autonomic nervous system, comprising the sympathetic nervous system and the parasympathetic nervous system and regulating involuntary functions; and the enteric nervous system, a semi-independent part of the nervous system whose function is to control the gastrointestinal system.
Evolution of the human nervous system
Evolution of nervous systems
Evolution of human intelligence
Evolution of the human brain
Paleoneurology
Some branches of science that study the human nervous system
Neuroscience
Neurology
Paleoneurology
Central nervous system
The central nervous system (CNS) is the largest part of the nervous system and includes the brain and spinal cord.
Spinal cord
Brain
Brain – center of the nervous system.
Outline of the human brain
List of regions of the human brain
Principal regions of the vertebrate brain:
Peripheral nervous system
Peripheral nervous system (PNS) – nervous system structures that do not lie within the CNS.
Sensory system
A sensory system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory receptors, neural pathways, and parts of the brain involved in sensory perception.
List of sensory systems
Sensory neuron
Perception
Visual system
Auditory system
Somatosensory system
Vestibular system
Olfactory system
Taste
Pain
Components of the nervous system
Neuron
I
Document 1:::
The human brain anatomical regions are ordered following standard neuroanatomy hierarchies. Functional, connective, and developmental regions are listed in parentheses where appropriate.
Hindbrain (rhombencephalon)
Myelencephalon
Medulla oblongata
Medullary pyramids
Arcuate nucleus
Olivary body
Inferior olivary nucleus
Rostral ventrolateral medulla
Caudal ventrolateral medulla
Solitary nucleus (Nucleus of the solitary tract)
Respiratory center-Respiratory groups
Dorsal respiratory group
Ventral respiratory group or Apneustic centre
Pre-Bötzinger complex
Botzinger complex
Retrotrapezoid nucleus
Nucleus retrofacialis
Nucleus retroambiguus
Nucleus para-ambiguus
Paramedian reticular nucleus
Gigantocellular reticular nucleus
Parafacial zone
Cuneate nucleus
Gracile nucleus
Perihypoglossal nuclei
Intercalated nucleus
Prepositus nucleus
Sublingual nucleus
Area postrema
Medullary cranial nerve nuclei
Inferior salivatory nucleus
Nucleus ambiguus
Dorsal nucleus of vagus nerve
Hypoglossal nucleus
Chemoreceptor trigger zone
Metencephalon
Pons
Pontine nuclei
Pontine cranial nerve nuclei
Chief or pontine nucleus of the trigeminal nerve sensory nucleus (V)
Motor nucleus for the trigeminal nerve (V)
Abducens nucleus (VI)
Facial nerve nucleus (VII)
Vestibulocochlear nuclei (vestibular nuclei and cochlear nuclei) (VIII)
Superior salivatory nucleus
Pontine tegmentum
Pontine micturition center (Barrington's nucleus)
Locus coeruleus
Pedunculopontine nucleus
Laterodorsal tegmental nucleus
Tegmental pontine reticular nucleus
Nucleus incertus
Parabrachial area
Medial parabrachial nucleus
Lateral parabrachial nucleus
Subparabrachial nucleus (Kölliker-Fuse nucleus)
Pontine respiratory group
Superior olivary complex
Medial superior olive
Lateral superior olive
Medial nucleus of the trapezoid body
Paramedian pontine reticular formation
Parvocellular reticular nucleus
Caudal pontine reticular nucleus
Cerebellar peduncles
Superior cerebellar peduncle
Middle cerebellar peduncle
Inferior
Document 2:::
The following outline is provided as an overview of and topical guide to neuroscience:
Neuroscience is the scientific study of the structure and function of the nervous system. It encompasses the branch of biology that deals with the anatomy, biochemistry, molecular biology, and physiology of neurons and neural circuits. It also encompasses cognition and human behavior. Neuroscience has multiple concepts that each relate to learning abilities and memory functions. Additionally, the brain is able to transmit signals that cause conscious and unconscious behaviors expressed as verbal or non-verbal responses. This allows people to communicate with one another.
Branches of neuroscience
Neurophysiology
Neurophysiology is the study of the function (as opposed to structure) of the nervous system.
Brain mapping
Electrophysiology
Extracellular recording
Intracellular recording
Brain stimulation
Electroencephalography
Intermittent rhythmic delta activity
:Category: Neurophysiology
:Category: Neuroendocrinology
:Neuroendocrinology
Neuroanatomy
Neuroanatomy is the study of the anatomy of nervous tissue and neural structures of the nervous system.
Immunostaining
:Category: Neuroanatomy
Neuropharmacology
Neuropharmacology is the study of how drugs affect cellular function in the nervous system.
Drug
Psychoactive drug
Anaesthetic
Narcotic
Behavioral neuroscience
Behavioral neuroscience, also known as biological psychology, biopsychology, or psychobiology, is the application of the principles of biology to the study of mental processes and behavior in human and non-human animals.
Neuroethology
Developmental neuroscience
Developmental neuroscience aims to describe the cellular basis of brain development and to address the underlying mechanisms. The field draws on both neuroscience and developmental biology to provide insight into the cellular and molecular mechanisms by which complex nervous systems develop.
Aging and memory
Cognitive neuroscience
Cognitive ne
Document 3:::
The motor system is the set of central and peripheral structures in the nervous system that support motor functions, i.e. movement. Peripheral structures may include skeletal muscles and neural connections with muscle tissues. Central structures include cerebral cortex, brainstem, spinal cord, pyramidal system including the upper motor neurons, extrapyramidal system, cerebellum, and the lower motor neurons in the brainstem and the spinal cord.
The motor system is a biological system with close ties to the muscular system and the circulatory system. To achieve motor skill, the motor system must accommodate the working state of the muscles, whether hot or cold, stiff or loose, as well as physiological fatigue.
Pyramidal motor system
The pyramidal motor system, also called the pyramidal tract or the corticospinal tract, start in the motor center of the cerebral cortex. There are upper and lower motor neurons in the corticospinal tract. The motor impulses originate in the giant pyramidal cells or Betz cells of the motor area; i.e., precentral gyrus of cerebral cortex. These are the upper motor neurons (UMN) of the corticospinal tract. The axons of these cells pass in the depth of the cerebral cortex to the corona radiata and then to the internal capsule, passing through the posterior branch of internal capsule and continuing to descend in the midbrain and the medulla oblongata. In the lower part of the medulla oblongata, 90–95% of these fibers decussate (pass to the opposite side) and descend in the white matter of the lateral funiculus of the spinal cord on the opposite side. The remaining 5–10% pass to the same side. Fibers for the extremities (limbs) pass 100% to the opposite side. The fibers of the corticospinal tract terminate at different levels in the anterior horn of the grey matter of the spinal cord. Here, the lower motor neurons (LMN) of the corticospinal cord are located. Peripheral motor nerves carry the motor impulses from the anterior horn to the volun
Document 4:::
The brainstem (or brain stem) is the stalk-like part of the brain that interconnects the cerebrum and diencephalon with the spinal cord. In the human brain, the brainstem is composed of the midbrain, the pons, and the medulla oblongata. The midbrain is continuous with the thalamus of the diencephalon through the tentorial notch.
The brainstem is very small, making up around only 2.6 percent of the brain's total weight. It has the critical roles of regulating cardiac, and respiratory function, helping to control heart rate and breathing rate. It also provides the main motor and sensory nerve supply to the face and neck via the cranial nerves. Ten pairs of cranial nerves come from the brainstem. Other roles include the regulation of the central nervous system and the body's sleep cycle. It is also of prime importance in the conveyance of motor and sensory pathways from the rest of the brain to the body, and from the body back to the brain. These pathways include the corticospinal tract (motor function), the dorsal column-medial lemniscus pathway (fine touch, vibration sensation, and proprioception), and the spinothalamic tract (pain, temperature, itch, and crude touch).
Structure
The parts of the brainstem are the midbrain, the pons, and the medulla oblongata; the diencephalon is sometimes considered part of the brainstem.
The brainstem extends from just above the tentorial notch superiorly to the first cervical vertebra below the foramen magnum inferiorly.
Midbrain
The midbrain is further subdivided into three parts: tectum, tegmentum, and the ventral tegmental area. The tectum forms the ceiling. The tectum comprises the paired structure of the superior and inferior colliculi and is the dorsal covering of the cerebral aqueduct. The inferior colliculus is the principal midbrain nucleus of the auditory pathway and receives input from several peripheral brainstem nuclei, as well as inputs from the auditory cortex. Its inferior brachium (arm-like process) reaches t
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What system includes the brain and the spinal cord?
A. digestive
B. lymbic
C. muscular
D. central nervous
Answer:
|
|
sciq-9308
|
multiple_choice
|
The skeletons of humans and horses are examples of what?
|
[
"exoskeletons",
"endoskeletons",
"hydroskeleton",
"pliant skeletons"
] |
B
|
Relevant Documents:
Document 0:::
Work
He is an associate professor of anatomy, Department of Anatomy, Howard University College of Medicine (US). He was among the most cited/influential anatomists in 2019.
Books
Single author or co-author books
DIOGO, R. (2021). Meaning of Life, Human Nature and Delusions - How Tales about Love, Sex, Races, Gods and Progress Affect Us and Earth's Splendor. Springer (New York, US).
MONTERO, R., ADESOMO, A. & R. DIOGO (2021). On viruses, pandemics, and us: a developing story [De virus, pandemias y nosotros: una historia en desarollo]. Independently published, Tucuman, Argentina. 495 pages.
DIOGO, R., J. ZIERMANN, J. MOLNAR, N. SIOMAVA & V. ABDALA (2018). Muscles of Chordates: development, homologies and evolution. Taylor & Francis (Oxford, UK). 650 pages.
DIOGO, R., B. SHEARER, J. M. POTAU, J. F. PASTOR, F. J. DE PAZ, J. ARIAS-MARTORELL, C. TURCOTTE, A. HAMMOND, E. VEREECKE, M. VANHOOF, S. NAUWELAERTS & B. WOOD (2017). Photographic and descriptive musculoskeletal atlas of bonobos - with notes on the weight, attachments, variations, and innervation of the muscles and comparisons with common chimpanzees and humans. Springer (New York, US). 259 pages.
DIOGO, R. (2017). Evolution driven by organismal behavior: a unifying view of life, function, form, mismatches and trends. Springer
Document 1:::
Comparative foot morphology involves comparing the form of distal limb structures of a variety of terrestrial vertebrates. Understanding the role that the foot plays for each type of organism must take account of the differences in body type, foot shape, arrangement of structures, loading conditions and other variables. However, similarities also exist among the feet of many different terrestrial vertebrates. The paw of the dog, the hoof of the horse, the manus (forefoot) and pes (hindfoot) of the elephant, and the foot of the human all share some common features of structure, organization and function. Their foot structures function as the load-transmission platform which is essential to balance, standing and types of locomotion (such as walking, trotting, galloping and running).
The discipline of biomimetics applies the information gained by comparing the foot morphology of a variety of terrestrial vertebrates to human-engineering problems. For instance, it may provide insights that make it possible to alter the foot's load transmission in people who wear an external orthosis because of paralysis from spinal-cord injury, or who use a prosthesis following the diabetes-related amputation of a leg. Such knowledge can be incorporated in technology that improves a person's balance when standing; enables them to walk more efficiently, and to exercise; or otherwise enhances their quality of life by improving their mobility.
Structure
Limb and foot structure of representative terrestrial vertebrates:
Variability in scaling and limb coordination
There is considerable variation in the scale and proportions of body and limb, as well as the nature of loading, during standing and locomotion both among and between quadrupeds and bipeds. The anterior-posterior body mass distribution varies considerably among mammalian quadrupeds, which affects limb loading. When standing, many terrestrial quadrupeds support more of their weight on their forelimbs rather than their hi
Document 2:::
Instruments used in Anatomy dissections are as follows:
Instrument list
Image gallery
Document 3:::
The study of animal locomotion is a branch of biology that investigates and quantifies how animals move.
Kinematics
Kinematics is the study of how objects move, whether they are mechanical or living. In animal locomotion, kinematics is used to describe the motion of the body and limbs of an animal. The goal is ultimately to understand how the movement of individual limbs relates to the overall movement of an animal within its environment. Below highlights the key kinematic parameters used to quantify body and limb movement for different modes of animal locomotion.
Quantifying locomotion
Walking
Legged locomotion is a dominant form of terrestrial locomotion, the movement on land. The motion of limbs is quantified by intralimb and interlimb kinematic parameters. Intralimb kinematic parameters capture movement aspects of an individual limb, whereas, interlimb kinematic parameters characterize the coordination across limbs. Interlimb kinematic parameters are also referred to as gait parameters. The following are key intralimb and interlimb kinematic parameters of walking:
Characterizing swing and stance transitions
The calculation of the above intra- and interlimb kinematics relies on the classification of when the legs of an animal touches and leaves the ground. Stance onset is defined as when a leg first contacts the ground, whereas, swing onset occurs at the time when the leg leaves the ground. Typically, the transition between swing and stance, and vice versa, of a leg is determined by first recording the leg's motion with high-speed videography (see the description of high-speed videography below for more details). From the video recordings of the leg, a marker on the leg (usually placed at the distal tip of the leg) is then tracked manually or in an automated fashion to obtain the position signal of the leg's movement. The position signal associated with each leg is then normalized to that associated with a marker on the body; transforming the leg position
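A minimal sketch of this kind of automated swing/stance classification is shown below; it assumes a one-dimensional, body-referenced leg-tip position signal sampled by high-speed video, and the velocity threshold and function name are hypothetical choices made only for illustration:

import numpy as np

def swing_stance_onsets(leg_position, dt, velocity_threshold=0.0):
    # Classify each frame as swing (leg moving forward relative to the body) or stance
    # (leg stationary or moving backward), then return the frames where the state switches.
    velocity = np.gradient(leg_position, dt)            # frame-by-frame leg velocity
    is_swing = velocity > velocity_threshold
    switches = np.flatnonzero(np.diff(is_swing.astype(int)))
    swing_onsets = switches[~is_swing[switches]] + 1    # stance -> swing transitions
    stance_onsets = switches[is_swing[switches]] + 1    # swing -> stance transitions
    return swing_onsets, stance_onsets

# Synthetic example: a leg oscillating back and forth relative to the body at 2 Hz,
# filmed at 200 frames per second for 2 seconds.
t = np.arange(0, 2, 0.005)
position = 10 * np.sin(2 * np.pi * 2 * t)              # body-centred position in mm
swing, stance = swing_stance_onsets(position, dt=0.005)
print(len(swing), len(stance))                         # 4 swing onsets and 4 stance onsets

Real tracking data would first need the leg marker position normalized to a body marker, as described above, and the threshold tuned to the noise level of the recording.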
Document 4:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The skeletons of humans and horses are examples of what?
A. exoskeletons
B. endoskeletons
C. hydroskeleton
D. pliant skeletons
Answer:
|
|
sciq-11037
|
multiple_choice
|
What is a continuous flow of electric charges called?
|
[
"electric current",
"electricity",
"magnetic current",
"circuit"
] |
A
|
Relevant Documents:
Document 0:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. It consists of two sections: a 35-question multiple-choice section and a 3-question free-response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted so that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
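As a brief check on the example above (a standard textbook derivation, not taken from the source excerpt): for a quasi-static adiabatic expansion the gas does work at the expense of its internal energy, so the temperature decreases; a free expansion into vacuum would instead leave the temperature of an ideal gas unchanged.

```latex
% Quasi-static adiabatic expansion of an ideal gas (dQ = 0):
% first law gives n C_V dT = -p dV, with p = nRT/V
\begin{aligned}
n C_V \, dT &= -\frac{nRT}{V}\, dV \\
\frac{dT}{T} &= -(\gamma - 1)\,\frac{dV}{V}
  && \text{using } \gamma - 1 = R / C_V \\
T\, V^{\gamma - 1} &= \text{const}
  && \Rightarrow\; V \text{ increases} \;\Rightarrow\; T \text{ decreases}
\end{aligned}
```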
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
In electromagnetism and electronics, electromotive force (also electromotance, abbreviated emf, denoted or ) is an energy transfer to an electric circuit per unit of electric charge, measured in volts. Devices called electrical transducers provide an emf by converting other forms of energy into electrical energy. Other electrical equipment also produce an emf, such as batteries, which convert chemical energy, and generators, which convert mechanical energy. This energy conversion is achieved by physical forces applying physical work on electric charges. However, electromotive force itself is not a physical force, and ISO/IEC standards have deprecated the term in favor of source voltage or source tension instead (denoted ).
An electronic–hydraulic analogy may view emf as the mechanical work done to water by a pump, which results in a pressure difference (analogous to voltage).
In electromagnetic induction, emf can be defined around a closed loop of a conductor as the electromagnetic work that would be done on an elementary electric charge (such as an electron) if it travels once around the loop.
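Formally (standard relations, not quoted from this excerpt), the loop definition above and Faraday's law of induction for a loop threaded by magnetic flux read:

```latex
% emf as work per unit charge around a closed loop C,
% and Faraday's law for the flux \Phi_B through the loop
\mathcal{E} \;=\; \oint_{C} \frac{\mathbf{F}}{q}\cdot d\boldsymbol{\ell},
\qquad
\mathcal{E} \;=\; -\,\frac{d\Phi_B}{dt}
```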
For two-terminal devices modeled as a Thévenin equivalent circuit, an equivalent emf can be measured as the open-circuit voltage between the two terminals. This emf can drive an electric current if an external circuit is attached to the terminals, in which case the device becomes the voltage source of that circuit.
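A minimal sketch (an illustration, not code from the source) of the two-terminal picture described above, modeling a source as an ideal emf in series with a hypothetical internal resistance:

```python
def terminal_voltage(emf, r_internal, r_load=None):
    """Terminal voltage of a two-terminal source modeled as an ideal emf
    in series with an internal resistance (Thevenin equivalent).

    With no load attached (open circuit) the measured voltage equals the
    emf; with a load R, the current I = emf / (r_internal + R) flows and
    the terminal voltage drops to emf - I * r_internal.
    Returns (terminal_voltage, current).
    """
    if r_load is None:          # open circuit: no current flows
        return emf, 0.0
    current = emf / (r_internal + r_load)
    return emf - current * r_internal, current

# Hypothetical 1.5 V cell with 0.5 ohm internal resistance.
v_open, _ = terminal_voltage(1.5, 0.5)               # -> (1.5, 0.0)
v_loaded, i = terminal_voltage(1.5, 0.5, r_load=10)  # ~1.43 V, ~0.143 A
```

The open-circuit reading recovers the emf itself, while attaching a load makes the terminal voltage sag by the drop across the internal resistance.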
Although an emf gives rise to a voltage and can be measured as a voltage and may sometimes informally be called a "voltage", they are not the same phenomenon (see ).
Overview
Devices that can provide emf include electrochemical cells, thermoelectric devices, solar cells, photodiodes, electrical generators, inductors, transformers and even Van de Graaff generators. In nature, emf is generated when magnetic field fluctuations occur through a surface. For example, the shifting of the Earth's magnetic field during a geomagnetic storm induces currents in an electr
Document 3:::
There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework.
AP Physics 1 and 2
AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge.
AP Physics 1
AP Physics 1 covers Newtonian mechanics, including:
Unit 1: Kinematics
Unit 2: Dynamics
Unit 3: Circular Motion and Gravitation
Unit 4: Energy
Unit 5: Momentum
Unit 6: Simple Harmonic Motion
Unit 7: Torque and Rotational Motion
Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2.
AP Physics 2
AP Physics 2 covers the following topics:
Unit 1: Fluids
Unit 2: Thermodynamics
Unit 3: Electric Force, Field, and Potential
Unit 4: Electric Circuits
Unit 5: Magnetism and Electromagnetic Induction
Unit 6: Geometric and Physical Optics
Unit 7: Quantum, Atomic, and Nuclear Physics
AP Physics C
From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single
Document 4:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just as in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a continuous flow of electric charges called?
A. electric current
B. electricity
C. magnetic current
D. circuit
Answer:
|